New artificial intelligence software has worrisome implications

Art produced by artificial intelligence is popping up more and more on people's feeds without their knowledge.

This art can range from simple etchings to surrealist imagery. It can look like a bowl of soup or a monster or cats playing chess on a beach.

While a boom in AI with the capacity to create art has been electrifying the high-tech world, these new developments carry many worrisome implications.

Despite their positive uses, newer AI systems have the potential to serve as tools of misinformation, perpetuate bias and undervalue artists' skills.

At the beginning of 2021, advances in AI produced deep-learning models that can generate images simply by being fed a description of what the user is imagining.

These include OpenAI's DALL-E 2, Midjourney, Hugging Face's Craiyon, Meta's Make-A-Scene, Google's Imagen and many others.
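To make the mechanism concrete, here is a minimal sketch of how such a text-to-image model is typically invoked, using Hugging Face's diffusers library with the openly released Stable Diffusion weights; the checkpoint ID and prompt are illustrative assumptions, not the workflow of any particular system named above:

```python
# A minimal, illustrative sketch: generating an image from a plain-language
# description with the openly released Stable Diffusion model, via Hugging
# Face's diffusers library. The checkpoint ID and prompt are assumptions
# chosen for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly hosted checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is effectively required for usable speed

# The entire "interface" is a sentence describing the desired picture.
image = pipe("cats playing chess on a beach").images[0]
image.save("cats_chess_beach.png")
```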

With the help of skillful language and creative ideation, these tools marked a huge cultural shift, eliminating much of the technical human labor once required to produce such images.

Last year, OpenAI, a San Francisco-based AI company, launched DALL-E, a system that can create digital images simply by being fed a description of what the user wants to see. The name pays homage to WALL-E, the 2008 animated movie, and Salvador Dalí, the surrealist painter.

However, it didn't immediately capture the public's interest.

It was only when OpenAI introduced DALL-E 2, an improved version of DALL-E, that the technology began to gain traction.

DALL-E 2 was marketed as a tool for graphic artists, offering them shortcuts for creating and editing digital images.

At the same time, restrictive measures were built into the software to prevent its misuse.

The tool is not yet available to everyone. It currently has 100,000 users globally, and the company hopes to make it accessible to at least 1 million in the near future.

"We hope people love the tool and find it useful. For me, it's the most delightful thing to play with we've created so far. I find it to be creativity-enhancing, helpful for many different situations, and fun in a way I haven't felt from technology in a while," CEO of OpenAI Sam Altman wrote.

However, the new technology has many alarming implications. Experts say that if this sort of technology were to improve, it could be used to spread misinformation, as well as generate pornography or hate speech.

Similarly, AI systems may show bias against women and people of color because their training data is pulled from image pools and online text that exhibit the same biases.

"You could use it for good things, but certainly you could use it for all sorts of other crazy, worrying applications, and that includes deep fakes," Professor Subbarao Kambhampati told The New York Times. Kambhampati teaches computer science at Arizona State University.

The company's content policy prohibits harassment, bullying, violence and the generation of sexual and political content. However, users with access can still create nearly any sort of imagery from the data set.
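One reason such policies are hard to enforce is that automated prompt filtering is easy to evade. The sketch below is a deliberately naive, hypothetical filter of the kind a provider might layer on top of a model; the blocklist and function name are invented for illustration and are not OpenAI's actual implementation:

```python
# Hypothetical, deliberately naive prompt filter. The blocklist and
# function name are illustrative assumptions, not any vendor's real code.
BLOCKED_TERMS = {"violence", "harassment"}  # far from exhaustive

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that contain a blocked term (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(is_prompt_allowed("a scene of violence"))   # False: caught
print(is_prompt_allowed("a scene of v1olence"))   # True: trivially evaded
```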

"It's going to be very hard to ensure that people don't use them to make images that people find offensive," AI researcher Toby Walsh told The Guardian.

Walsh warned that the public should generally be more wary of the things they see and read online, as fake or misleading images are currently flooding the internet.

The developers of DALL-E are actively trying to fight against the misuse of their technology.

For instance, researchers are attempting to filter potentially dangerous content out of the training dataset, particularly imagery that might be harmful to women.

However, this cleansing process also leads the system to generate fewer images of women, contributing to their erasure from its output.

"Bias is a huge industry-wide problem that no one has a great, foolproof answer to," Miles Brundage, head of policy research at OpenAI, said. "So a lot of the work right now is just being transparent and upfront with users about the remaining limitations."

However, OpenAI is not the only company with the potential to wreak havoc in cyberspace.

While OpenAI has not disclosed its code for DALL-E 2, Stability AI, a London technology startup, shared the code for a similar image-generating model for anyone to use, rebuilding the program with fewer restrictions.

The company's founder and CEO, Emad Mostaque, told The Washington Post he believes making this sort of technology public is necessary, regardless of the potential dangers. "I believe control of these models should not be determined by a bunch of self-appointed people in Palo Alto," he said. "I believe they should be open."

Mostaque is displaying an innately reckless strain of logic. Allowing these powerful AI tools to fall into the hands of just anyone will undoubtedly result in drastic, wide-scale consequences.

Technology, particularly software like DALL-E 2, can easily be misused to spread hate and misinformation, and therefore needs to be regulated before it's too late.
