Adobe Adds AI Features to Photoshop and Illustrator, Nvidia Unveils ‘Picasso’ AI Image Generation Service


Two Silicon Valley companies on Tuesday announced new tools that use artificial intelligence to generate images while tackling some of the thorniest legal issues surrounding the technology: copyrights and payments.

Adobe Inc added artificial intelligence to some of its most popular software, including Adobe Photoshop and Adobe Illustrator, to speed the process of generating images and text effects, noting that creators whose work was used by the tools will be able to get paid.

Nvidia Corp unveiled its own service, known as “Picasso,” that uses AI to generate images, videos and 3D applications from text descriptions. Nvidia trained the technology on images licensed from Getty Images, Shutterstock Inc, and Adobe, and plans to pay royalties.

The announcements mark a milestone in the ongoing tension between the rights of copyright holders and the developers of emerging technology. Image-generation systems are "trained" on billions of images, but whether that use is legally permitted is not always clear.

Getty Images earlier this year sued Stability AI, creator of the open-source art generation program Stable Diffusion, claiming the company had copied more than 12 million images from Getty's database without permission.

“This collaboration (with Nvidia) is a testament to the feasibility of a path of responsible AI development and the unique nature of Getty Images content and data,” Getty Images CEO Craig Peters told Reuters in an email.

“It is in line with our belief that generative AI is an exciting tool that should be based on permissioned data, visuals, and individual privacy.”

Adobe’s new AI-enhanced feature, called “Firefly,” lets users describe in words the images, illustrations or videos they want the software to create. Because the AI has been trained on Adobe Stock images, openly licensed content and older content whose copyright has expired, the resulting creations are safe for commercial use, the company said.

The company is also advocating for a universal “do not train” tag that would allow photographers to request that their content not be used to train AI models.

“We’re very interested in making this creator-friendly,” Ely Greenfield, chief technology officer for digital media at Adobe, told Reuters.

If Adobe users ask the system for an image in the style of a particular artist, “it won’t generate an image that is aping that person’s style,” Greenfield said. “You as an artist can merchandise this. If someone wants to use your style, you can actually sell a customer the right to use your style.”

Nvidia’s Picasso AI-image generator is part of a collection of AI-powered cloud products unveiled at its GTC Developer Conference.

“This is the basis of having something that will be interesting to the marketplace,” said Greg Estes, Nvidia’s vice president of developer programs, of working with partners like Getty.

“Because other software providers or enterprises of any kind, they don’t want to be involved (with image-generating AI) not knowing what the provenance is” of the underlying training images, he said.

Jun-Yan Zhu, an assistant professor at Carnegie Mellon University’s Robotics Institute, said it is not unusual for open-source AI models to be trained on billions of images. Whether photographers know their works have been sampled depends on a number of factors, including how well known the photographer is and whether the training dataset is publicly available, he added.

Zhu said he hopes photographers and artists will ultimately be able to benefit from the technology by licensing their artistic styles.

“The livelihoods of content creators depend on respect for intellectual property rights and the value of their creative endeavours,” said Getty’s Peters.

“We believe that innovation and creativity thrive in an environment where artists, photographers, videographers, and creatives everywhere can be fairly compensated for their work, especially when it is used for commercial purposes.” 

© Thomson Reuters 2023

