The Future. Photographers and illustrators are getting more tools to deter AI systems like Midjourney, Stable Diffusion, and DALL-E from using their works as training data without credit or authorization. If they can make enough of a dent in the culture, they could shape how the general public evaluates media. But with smartphones making it easy to edit photos with AI however people see fit, it remains to be seen how much the distinction between original and manipulated works matters to the public: apathy with major consequences.
From the moment a photo is captured to the time it’s uploaded, humans are getting new tools to fight against the rise of generative AI.
- Leica’s new M11-P camera can attach “Content Credentials” — image-verifying metadata that Adobe and Microsoft also use — the moment a photo is taken.
- At $10,000, that may not be a very accessible tool. But camera makers Canon, Nikon, and Sony are also members of the initiative behind Content Credentials, so they could soon add the capability as well.
- Chip-maker Qualcomm said its new high-end smartphone chip can add similar labeling to photos at capture… but it’ll be up to smartphone makers, app developers, and consumers to actually use the capability.
- A software tool called Nightshade takes things further, letting artists add “poison pill” alterations to an image’s pixels so AI systems misread what it depicts.
- Glaze, another tool from the same developers, lets artists upload their images so it can cloak the artist’s style, for example “making a normally realistic drawing into something cubist.”
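The “Content Credentials” idea above boils down to cryptographically binding capture metadata to the image bytes at the moment the shutter fires, so later edits are detectable. As a minimal sketch of that concept — not the real standard, which uses X.509 certificate chains and embedded manifests rather than a shared key — here is a toy signed-manifest scheme in Python; the key and device name are illustrative assumptions:

```python
import hashlib
import hmac
import json

# Hypothetical per-device signing key; real cameras would use a
# hardware-protected private key, not a shared secret like this.
SECRET_KEY = b"camera-private-key"

def make_credential(image_bytes: bytes, device: str) -> dict:
    """Build a toy provenance manifest binding a hash of the image
    bytes to capture metadata, then sign it so edits are detectable."""
    manifest = {
        "device": device,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and the signature; any change to the
    image bytes or the claims makes verification fail."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...raw image bytes..."
cred = make_credential(photo, device="Leica M11-P")
print(verify_credential(photo, cred))            # True: untouched photo
print(verify_credential(photo + b"edit", cred))  # False: bytes were altered
```

The design point is the same one the standard makes: the credential travels with the image, and verification requires no access to the original — only the image, the manifest, and trust in the signer.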
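Nightshade and Glaze work by computing small, optimized perturbations that are nearly invisible to people but mislead a model’s feature extractor. That optimization is beyond a snippet, but the structural idea — changing pixel values within a tight bound so the image still looks identical while the bytes a model trains on differ — can be sketched with plain Python (random noise here stands in for the real adversarial computation, purely for illustration):

```python
import random

def perturb(pixels, epsilon=2, seed=0):
    """Toy 'cloaking': nudge each 8-bit pixel value by at most
    `epsilon`, clipped to the valid 0-255 range. A +/-2 shift is
    visually negligible, yet every byte a scraper ingests can differ.
    Real tools optimize the perturbation against a feature extractor
    instead of using random noise."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon))) for p in pixels]

original = [120, 121, 119, 200, 15, 0, 255, 87]
cloaked = perturb(original)
# Every value stays within epsilon of the original pixel.
assert max(abs(a - b) for a, b in zip(original, cloaked)) <= 2
```

The tension the tools exploit is that human perception and model feature spaces weight pixel changes very differently: a bounded change that is invisible to a viewer can land far away in the model’s representation.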
While lawsuits against generative AI companies wind their way through the courts and the Copyright Office works on updating its rules, it may come down to rogue artists using these tools in the meantime.