
Is the Panic Over AI Art Overblown? We Speak With Artists and Experts.

by Rachel Cheung

Zhang Wei, a Chinese freelance illustrator with eight years of experience, took up a gig in October to draw characters for novels. The company needed 65 sketches and offered 120 yuan ($17.70) for each. After Zhang filed his first draft, the client was pleased and paid him right away.

Yet days later, Zhang was told his services were no longer needed; the company had decided to replace him with an AI tool. To add insult to injury, he was even shown the artwork generated with the new technology. "It was pretty good," Zhang told Chinese outlet Guokr last month, speaking under a pseudonym. More importantly, with the AI tool, each image cost the company only 2 cents.

Zhang's experience underscores growing fears in the creative industry, in China and beyond, that artists could lose their jobs to AI text-to-image generators such as Midjourney, Stable Diffusion, and DALL-E, which have taken the internet by storm in recent months.

On Chinese social media, some artists have supported a campaign on ArtStation, a networking and portfolio site, to protest the presence and proliferation of AI-generated images. They echoed calls to boycott these tools by not sharing their output and not allowing AI companies to use their work to train AI. Others shared hidden watermarks that would supposedly prevent their work from being used to develop new AI models.

One seasoned artist, known as Hua Yecai, even went further and pledged that he would never use AI tools in his illustrations. "I produce all my works by myself, one stroke at a time. If a client asks me to use AI to generate art, I'll turn down the job. If it's a company, I'll quit," he wrote on China's Quora-like platform Zhihu, suggesting AI could squeeze out young creatives who have yet to establish themselves.

This sentiment is part of a wider pushback against AI art globally. In Japan, Netflix's use of AI for the background art of a new short film drew backlash from anime workers. In the U.S., artists debated whether an author owns the copyright to a graphic novel made of pictures generated from written prompts. In December, director Guillermo del Toro even slammed animation created with machines as "an insult to life itself." This growing resistance to AI-generated content isn't limited to art: some news publishers faulted OpenAI this month for using their articles to train its popular ChatGPT service, and software developers sued Microsoft in November for allegedly pirating human programmers' work.

There's no telling where the AI debate will lead, but few would dispute the technology's ability to disrupt the creative process as we know it.

"What actually took your job isn't AI, but the operator who uses the tool to increase their productivity and create artwork more effectively," Xi Qiao, a Canada-based digital artist, told VICE World News.

"You no longer need to undergo years of formal training in art to acquire the skills; you only need to write prompts and grasp the rules of these models," Xi said, adding that it is only a matter of time before AI tools produce polished iterations that rival those of humans.

Xi has worked with a team of developers to build Kalos.Art, a platform for AI enthusiasts to showcase and sell their work. To take it further, they're creating a database that compares the practical abilities of different AI models and provides information on prompt engineering, the process of crafting the text input used to generate an image.
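For readers unfamiliar with how these tools are driven by text, the snippet below is a minimal sketch of prompt-driven image generation using the open-source Hugging Face diffusers library with a publicly available Stable Diffusion checkpoint; the model ID, prompt, and settings are illustrative assumptions, not details of Kalos.Art or any product mentioned in this story.

import torch
from diffusers import StableDiffusionPipeline

# Load an assumed public Stable Diffusion checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use pipe.to("cpu") if no GPU is available

# "Prompt engineering" in practice means iterating on strings like this one.
prompt = "a watercolor illustration of a fox reading a book, soft morning light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("fox.png")

In a sketch like this, the only skill being exercised is the wording of the prompt and a couple of numeric settings, which is the shift Xi describes: from drawing technique to writing and refining prompts.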

Despite the potential of such tools, there remains a heated dispute over the dataset developers used to train the models, a fight that is playing out in court. Stock photo provider Getty Images is suing Stability AI, the developer of Stable Diffusion, for copyright infringement over its use of 12 million photos in the creation of the image generator.

Last month, a trio of artists also launched a class action lawsuit against Stability AI, Midjourney, and the art-sharing hub DeviantArt, which released its own AI-powered tool, DreamUp. The three plaintiffs (Sarah Andersen, Kelly McKernan, and Karla Ortiz) alleged that these groups have violated the rights of millions of artists by using their work to inform the platforms' algorithms without the consent of the original artists. In response to the lawsuit, a spokesperson for Stability AI said the company takes these matters seriously. "Anyone that believes that this isn't fair use does not understand the technology and misunderstands the law," the spokesperson said.

The outcomes of these lawsuits have huge implications for ownership in the age of AI, but they could take years to resolve. There's also the bigger question of whether these legal frameworks are themselves too outdated to apply.

"The technology is moving a lot more quickly than the legal system. And I just don't think there's a matchup between what's actually happening and what copyright and intellectual property law talks about," Brendan Murphy, a lecturer in digital media at CQUniversity Australia, told VICE World News.

Ziv Epstein, a researcher at the MIT Media Lab who studies the intersection of humans and technology, said the legal implications come down to three questions. Is the training data something that can be fairly used? If you were to train a model, is the output copyrightable? And who would own that copyright: the people making the models, the people inputting the prompts, or those whose artwork in the dataset is closest to the output?

"We don't have a clear answer to that," Epstein said. His 2020 study found that the degree to which people anthropomorphize AI, essentially endowing it with human-like characteristics, affects how they allocate credit to human actors involved in the production of an artwork.

"There's actually a lot of human labor and human care that goes into these processes," he said. "When you anthropomorphize the AI, that actually works to undermine the kind of perceived role of the human and the credit or responsibility of the human."

Viewing AI as tools wielded by humans, instead of agents acting on their own, is a step in the right direction. Yet this is still not entirely accurate, Epstein continued. "It's a new medium. It's a diffuse socio-technical system with a lot of human actors and computational processes, all interacting in some very complex way," he said.

According to Epstein, this means it's not sufficient to rely solely on the courts to iron out these disputes. "That's where we need more research, both technical and social: understanding how these things work, how people feel about them, and then we can make those decisions based on good science," Epstein said. "Because right now, we're just really at the brink of the beginning."

Follow Rachel Cheung on Twitter and Instagram.
