From Hollywood to Sheffield, these are the AI stories to read this month – World Economic Forum

AI regulation is progressing across the world as policymakers try to protect against the risks it poses without curtailing AI's potential.

In July, Chinese regulators introduced rules to oversee generative AI services. Their focus stems from concern over the potential for generative AI to create content that conflicts with Beijing's viewpoints.

The success of ChatGPT and similarly sophisticated AI bots has prompted Chinese technology firms to announce plans to join the fray. These include Alibaba, which has launched an AI image generator for trial among its business customers.

The new regulation requires generative AI services in China to hold a licence, conduct security assessments, and adhere to socialist values. If "illegal" content is generated, the relevant service provider must stop producing it, improve its algorithms, and report the offending material to the authorities.

The new rules apply only to generative AI services offered to the public, not to systems developed for research purposes or niche applications, striking a balance between keeping close tabs on AI and positioning China as a leader in the field.

The use of AI in film and TV is one of the issues behind the ongoing strike by Hollywood actors and writers that has led to production stoppages worldwide. As their unions renegotiate contracts, workers in the entertainment sector have come out to protest against their work being used to train AI systems that could ultimately replace them.

The AI proposal put forward by the Alliance of Motion Picture and Television Producers reportedly stated that background performers would receive one day's pay to have their image digitally scanned, with the studios then able to use that scan indefinitely.

China is not alone in creating a framework for AI. In the US, a new law in New York City regulates the influence of AI on recruitment as more of the hiring process is handed over to algorithms.

From screening CVs and scoring interviews to scraping social media for personality profiles, recruiters are increasingly using AI to speed up and improve hiring. To protect workers against potential AI bias, New York City's local government is mandating greater transparency about the use of AI, along with annual audits for potential bias in recruitment and promotion decisions.
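Such audits typically compare how different groups fare under an automated tool. As a minimal sketch of the idea, using hypothetical applicant data (the function name and sample figures are illustrative, not drawn from the law's text), the "impact ratio" below divides each group's selection rate by the rate of the most-selected group:

```python
# Illustrative sketch of an impact-ratio calculation of the kind a
# hiring bias audit might perform. Data and names are hypothetical.
from collections import Counter

def impact_ratios(outcomes):
    """outcomes: list of (group, selected) tuples, where selected is a bool."""
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, sel in outcomes if sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Each group's selection rate relative to the most-selected group.
    return {g: rate / best for g, rate in rates.items()}

applicants = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
print(impact_ratios(applicants))  # {'A': 1.0, 'B': 0.5}
```

A ratio well below 1.0 for any group flags the tool for closer scrutiny; what threshold triggers action is a policy choice, not something the sketch decides.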

A group of AI experts, including staff at Meta, Google, and Samsung, has created a new framework for developing AI products safely. It consists of a checklist of 84 questions for developers to consider before starting an AI project. The World Ethical Data Foundation, which released the framework, is also asking the public to submit their own questions ahead of its next conference. Since its launch, the framework has gained support from hundreds of signatories in the AI community.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum's Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

Meanwhile, generative AI is gaining a growing user base, sparked by the launch of ChatGPT last November. A survey by Deloitte found that more than a quarter of UK adults have used generative AI tools such as chatbots, an adoption rate even higher than that of voice-assistant speakers like Amazon's Alexa. Around one in 10 people also use AI at work.

Nearly a third of college students have admitted to using ChatGPT for written assignments such as essays. Companies providing AI-detection tools have been run off their feet as teachers seek help identifying AI-driven cheating. With only one full academic semester having passed since ChatGPT's launch, AI-detection companies predict even greater disruption unless schools take comprehensive action.

[Chart] 30% of college students use ChatGPT for assignments, to varying degrees. Image: Intelligent.com

Another area where AI could usher in fundamental changes is journalism. The New York Times, the Washington Post, and News Corp are among the publishers talking to Google about using artificial intelligence tools to assist journalists in writing news articles. The tools could suggest headlines and writing styles but are not intended to replace journalists. News of the talks comes after the Associated Press announced a partnership with OpenAI for the same purpose. However, some news outlets have been hesitant to adopt AI because of concerns about incorrect information and the difficulty of differentiating between human and AI-generated content.

Developers of robots and autonomous machines could learn lessons from honeybees when it comes to making fast and accurate decisions, according to scientists at the University of Sheffield. Bees trained to recognize differently coloured flowers took only 0.6 seconds on average to decide to land on a flower they were confident would have food, and were just as quick to reject flowers they were confident would not. They also made more accurate decisions than humans, despite their small brains. The scientists have now built these findings into a computer model.
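Speed-accuracy trade-offs like this are often modelled as evidence accumulation: keep gathering noisy cues until confidence crosses a threshold in either direction. The sketch below illustrates that general class of model; its parameters, thresholds, and function names are assumptions for illustration, not the Sheffield team's actual model.

```python
# Minimal evidence-accumulation (drift-diffusion style) decision rule.
# All parameters are illustrative, not from the Sheffield study.
import random

def decide(drift, threshold=3.0, noise=1.0, dt=0.1, max_steps=1000):
    """Accumulate noisy evidence until it crosses +/- threshold.
    Returns (decision, elapsed time)."""
    evidence, t = 0.0, 0.0
    for _ in range(max_steps):
        evidence += drift * dt + random.gauss(0, noise) * dt ** 0.5
        t += dt
        if evidence >= threshold:
            return "accept", t   # confident the flower has food: commit fast
        if evidence <= -threshold:
            return "reject", t   # confident it does not: move on fast
    return "undecided", t        # ambiguous cues take the longest

# A strong positive cue produces fast, mostly accurate acceptances.
print(decide(drift=2.0))
```

The appeal for robotics is that one simple rule yields both quick commitments on clear evidence and cautious deliberation on ambiguous input.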

Generative AI is set to impact a vast range of areas. For the global economy, it could add trillions of dollars in value, according to a new report by McKinsey & Company. It also found that the use of generative AI could lead to labour productivity growth of 0.1-0.6% annually through 2040.

At the same time, generative AI could lead to an increase in cyberattacks on small and medium-sized businesses, which are particularly exposed to this risk, because it puts new, highly sophisticated tools in the hands of cybercriminals. However, AI can also be used to build better security tools that detect attacks and deploy automatic responses, according to Microsoft.

Because AI systems are designed and trained by humans, they can generate biased results that reflect the design choices of their developers. AI may therefore be prone to perpetuating inequalities; one proposed remedy is to train AI systems to recognize and correct their own bias.
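One common technique along these lines is reweighting: give under-represented groups more weight during training so each group contributes equally. A minimal sketch with hypothetical data follows; it is one approach among many, not a complete fairness pipeline.

```python
# Illustrative reweighting: per-example weights so that every group
# carries the same total weight in training. Data is hypothetical.
from collections import Counter

def group_weights(groups):
    """Return per-example weights equalizing each group's total weight."""
    counts = Counter(groups)
    n, n_groups = len(groups), len(counts)
    # Weight = n / (n_groups * count): each group then sums to n / n_groups.
    return [n / (n_groups * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group A dominates the training set
print(group_weights(groups))    # [0.667, 0.667, 0.667, 2.0] approximately
```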
