What’s on the horizon for AI?

The Future. Generative AI has had a wild year, with models like DALL-E, Stable Diffusion, and ChatGPT producing wondrous art and lifelike text with uncanny skill – perhaps too much skill. Many fear that these models will harm artists and generate content that misleads people or violates their rights, and they have called for regulation. But generative AI appears far too profitable and interesting to disappear completely. So the only question is: what will we allow it to do?

BrAInstorming
TechCrunch summed up the trends we should expect to see in generative AI next year.

  • Expect more and better AIs (like Lensa) that can be coaxed into creating convincing NSFW content, such as deepfake nudes or overly sexualized images. Likewise, models like Stable Diffusion can be tricked into generating misleading or offensive content.
  • Efforts to let artists opt out of having their work included in training data sets should spike as well, along with AI curation, which requires financial resources, lots of human attention, and input from communities historically harmed by these data sets, like young women and artists.
  • The rise of decentralized computing should make open-source efforts more common. That’s good, because open-source data sets can be scrutinized by the public without running into corporate legal issues. Still, AIs owned by large labs and companies will retain a competitive advantage thanks to their huge proprietary data sets.

The long arm of the law
Then there’s the matter of legal regulation, which likely won’t come until 2024. The most momentous law currently in the works is the EU’s AI Act, which would divide AIs into different risk categories, each with its own requirements and level of scrutiny. New York City has also proposed an AI hiring statute that would require audits of any AI used in hiring decisions to check for bias.

In 2023, the U.K. might allow public data sets to be used commercially, and that’s when we’re most likely to see crackdowns – when generative AI is openly used to make money. Until then, the most powerful tool at our disposal is social pressure.

Luke Perrotta

TOGETHER WITH CANVA

No design skills needed! 🪄✨

Canva Pro is the design software that makes design simple, convenient, and reliable. Create what you need in no time! It’s jam-packed with time-saving tools that make anyone look like a professional designer.

Create amazing content quickly with Canva