OpenSea arms itself with verified badges and A.I.
Future. OpenSea is rolling out new features to protect users from NFT scams and fraud. What does that mean? More transparency, an expansion of its verified badges, and A.I. to take down copycat collections (that are hoping to make a quick buck). While regulating the community is somewhat antithetical to the tenets of Web3, OpenSea may be trying to hedge against potential lawsuits for allowing abuse on its platform… especially while NFT sales are down.
Catch the copycats
The largest NFT marketplace is doing something about its big authenticity deficit.
- It’s expanding its verified badges for sellers and collections, immediately inviting those with over 100 ETH in trading volume (and eventually those with less).
- It’s also using “chatter” on Twitter and Discord to determine who is eligible because, according to VP of product Shiva Rajaraman, it’s “hard to fake an active community.”
- And it’s deploying A.I. to flag “copymints” — fraudulent NFT collections meant to dupe buyers.
- The software would be able to not only catch “identical pixel-by-pixel replicas, but also flips, rotations, filters, or other permutations.”
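Catching flipped, rotated, or filtered copies — not just byte-identical files — typically means comparing perceptual fingerprints of images rather than raw data. The sketch below is purely illustrative (OpenSea hasn't disclosed its method): a simple average hash, with the original's flipped and rotated variants hashed too so a transformed copy still matches.

```python
# Illustrative sketch of transformation-aware duplicate detection using a
# simple average hash (aHash). All names here are assumptions for the
# example, not OpenSea's actual system.

def average_hash(pixels):
    """Hash a small grayscale image (list of rows of brightness values):
    each bit is 1 if that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def rotate90(pixels):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*pixels[::-1])]

def flip_horizontal(pixels):
    """Mirror the image left-to-right."""
    return [row[::-1] for row in pixels]

def variant_hashes(pixels):
    """Hashes of the image plus all its rotations and mirror images,
    so a rotated or flipped copy still matches the original."""
    hashes, img = set(), pixels
    for _ in range(4):
        hashes.add(average_hash(img))
        hashes.add(average_hash(flip_horizontal(img)))
        img = rotate90(img)
    return hashes

def is_copy(original, candidate):
    return average_hash(candidate) in variant_hashes(original)

# A tiny 4x4 "image" and a rotated copy of it.
original = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
rotated_copy = rotate90(original)
```

A real system would downscale images first, use a more robust hash (dHash or pHash), and tolerate a few differing bits to catch filtered variants — but the principle is the same: normalize away the transformation before comparing.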
Fast Company reports that final review will be done by human moderators… but only until the algorithms become sophisticated enough to do it all on their own.
While policing the marketplace for bad actors is necessary (buyers have been scammed out of millions of dollars), OpenSea is well aware that the very act of regulation is controversial within the crypto community. Freedom is foundational to Web3 — freedom to remix, freedom to build on top of, and freedom from gatekeepers. But OpenSea is a business, and a business can’t flourish when customers are having a bad time.
Anne Fauvre-Willis, OpenSea’s head of special projects, knows it’s inherently problematic for the company to police NFTs, so she said the focus would be on removing those that “have the intent to deceive.”