Page 946«..1020..945946947948..960970..»

Forget SEO: Why ‘AI Engine Optimization’ may be the future – VentureBeat

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here

According to founder, investor and longtime industry analyst Jeremiah Owyang, Bill Gates vision of a personal AI is coming.

That future is one that will disrupt SEO and e-commerce and require marketers and creators to move beyond optimizing traditional search engines to optimize AI, he told VentureBeat in an interview. And, it means planning for disruption and developing new strategies now.

The advertising model as we know it getting people to go to your website and view it thats going to breakI dont see how that sustains, he said.

AI agents and foundational models, instead, will capture the ad dollars as advertisers pay to get their messages included in generated responses.

VB Transform 2023 On-Demand

Did you miss a session from VB Transform 2023? Register to access the on-demand library for all of our featured sessions.

For example, we may see sponsored sentences in an AI emerge, or ads next to generated content, he said. Marketers and creators, he explained, have to think about how to be discovered beyond the search engine, within the AI itself.

Last week, OpenAI launched its web crawler to fetch real-time info from the web. But soon, web crawling may not be efficient enough as more and more consumers stay in GPT tools to get information rather than going to marketing or news sites, Owyang predicts.

The data schemas are too varied, he said.

So how will chatbots get their data and what does that mean for business who want customers to find them online?

As consumers increasingly use automated tools to go through the marketing funnel, marketers and creators need to consider something that many might think is counterintuitive: That is, you actually want, no need, LLMs to train on your data.

If I was a journalist, I would want my articles ingested by all of the LLMs, he explained, adding that more and more chatbots are including citations, including Bing, You.com and Perplexity. So when people search for that information, I show up first its the same as SEO strategy, he said cautioning that this would not apply to gated content, which employs a different business model.

Sound strange? Well, keep in mind that marketers have been disrupted again and again over the past couple of decades, said Owyang. Since the advent of Google search, for example, they have worked to influence the influencers in order to boost SEO including journalists, financial analysts, industry analysts and media and government relations. Over the past 10 years, theyve added content creators and other influencers to that mix.

Now, Owyang explained, AI is another influencer marketers will have to cater to, by feeding them information.

That means you may need to create a special API that can be adjusted by the foundational models, he said, adding that he could see companies reducing the central nature of their websites, and instead offering an API. We may find that the most efficient way to influence an autonomous agent is to build an autonomous agent.

In an interview with Bill Gates in May during a Goldman Sachs and SV Angel event on AI, Bill Gates said the first company to develop a personal agent to disrupt SEO would have a leg up on competitors.

According to Owyang, thats why Gates along with Nvidia, Microsoft, Reid Hoffman and Eric Schmidt invested in Inflection AI as part of an eye-popping $1.3 billion funding round in June.

In May, the companylaunchedPi, which stands for personal intelligence and was meant to be empathetic, useful and safe that is, acting more personally and colloquially than OpenAIs GPT-4, Microsofts Bing or Googles Bard, while not veering into the super-creepy.

During a panel at the Bloomberg Technology Summit, Inflection CEO Reid Hoffman said that the Pi chatbot takes a more personal, emotional approach compared with ChatGPT. IQ is not the only thing that matters here, he said. EQ matters as well.

In June, Inflection also announced that it would release a new large language model (LLM) to power Pi, calledInflection-1, which it said outperforms OpenAIs GPT-3.5.

Owyang says he imagines a future where every brand has an autonomous agent that will interact with the buyer side agents.

My agents talk to your agent and negotiate which car that I want, which clothes that I want, which restaurants to eat at and even choose the cuisine perhaps the menu with my dietary needs within budget, he explained. Thats the future.

Of course, a chatbot like Pi is still far away from the kind of personal AI agent Bill Gates and Owyang are imagining. And full disclosure: Owyang says he is planning an investment in Inflection AI.

But even now, AI chatbots are already offering recommendations and Owyang said it is becoming clear that if marketers, publishers and creators want to succeed (at least the ones that depend on SEO) they will need to start catering to the wants and needs of AI agents that is, through AI Engine Optimization.

Unlike SEO, AI Engine Optimization is not about waiting for a crawler to come to a website. Now, marketers will likely want two things, he explained. One is to create an API that feeds in real-time information to foundational models.

That standard API protocol hasnt really emerged yet for how that can be done, which is why OpenAIs API is just crawling, for example, he said. But eventually, he predicted, users will ask OpenAI questions before theyll ask Google Search so you need that real-time feed.

Secondly, marketers will want to take the same corporate API with all of its product information and use it to train its own branded AI that would interact with consumers and buyers, whether thats on a website or an app. That would also interact with the buyer side agents that are starting to emerge, he said.

Owyang said that at a recent AI conference in Las Vegas, 2,000 corporate and government leaders were in the room. They all, he insisted, are moving very quickly to explore the possibilities when it comes to building their own LLMs that could, in the future, interact with customers and their AI agents.

The future, he predicted, will go beyond BloombergGPT and Einstein GPT soon, Walmart or Macys could have its own LLM, or even the New York Times.

Many of these companies are getting ready, he said.

The bottom line, Owyang said bluntly in a recent blog post, is this: As we stand on the brink of this seismic shift, the call to action for marketers is clear: We must ready ourselves to not only influence human decision-making but also shape AI behaviors.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

View post:

Forget SEO: Why 'AI Engine Optimization' may be the future - VentureBeat

Read More..

The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly – WIRED

That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. Scientists do go out of their way not to do that, because anthropomorphizing most things is silly, Hinton concedes. But they'll have learned those things from us, they'll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable. When your powerful AI agent is trained on the sum total of human digital knowledgeincluding lots of online conversationsit might be more silly not to expect it to act human.

But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we dont really encounter the world directly.

Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they dont, says Hinton. That's just bullshit. Because in order to predict the next word, you have to understand what the question was. You can't predict the next word without understanding, right? Of course they're trained to predict the next word, but as a result of predicting the next word they understand the world, because that's the only way to do it.

So those things can be sentient? I dont want to believe that Hinton is going all Blake Lemoine on me. And hes not, I think. Let me continue in my new career as a philosopher, Hinton says, jokingly, as we skip deeper into the weeds. Lets leave sentience and consciousness out of it. I don't really perceive the world directly. What I think is in the world isn't what's really there. What happens is it comes into my mind, and I really see what's in my mind directly. That's what Descartes thought. And then there's the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world? Hinton goes on to argue that since our own experience is subjective, we cant rule out that machines might have equally valid experiences of their own. Under that view, its quite reasonable to say that these things may already have subjective experience, he says.

Now consider the combined possibilities that machines can truly understand the world, can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information that brains can possibly deal with. Maybe you, like Hinton, now have a more fraughtful view of future AI outcomes.

But were not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. It works for people, he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems cant so easily merge in a Skynet kind of hive intelligence.

Go here to read the rest:

The 'Godfather of AI' Has a Hopeful Plan for Keeping Future AI Friendly - WIRED

Read More..

Artificial intelligence (AI) in the healthcare sector market to Grow by USD 11,827.99 million from 2022 to 2027: The growing demand for reduced…

NEW YORK, Aug. 14, 2023 /PRNewswire/ -- The artificialintelligence (AI) in the healthcare market size is estimated to growat aCAGR of 23.5%between 2022 and 2027. Themarket size is forecast to increase byUSD11,827.99 million.Discover some insights on market size historic period (2017 to 2021) and Forecast (2023 to 2027) before buying the full report-Request a sample report

Technavio has announced its latest market research report titled Global Artificial Intelligence (AI) in Healthcare Sector Market 2023-2027

Artificial Intelligence (AI) in the Healthcare sector market Company AnalysisCompany Landscape - The global artificial intelligence (AI) in the healthcare sector market is fragmented, with the presence of several global as well as regional Companies. A few prominent companies that offer artificial intelligence (AI) in the healthcare sector in the market are Ada Health GmbH, Alphabet Inc., Amazon.com Inc., Atomwise Inc., BenchSci Analytics Inc., CarePredict Inc., Catalia Health, Cyclica, Deep Genomics Inc., Entelai, Exscientia PLC, General Electric Co., Intel Corp., International Business Machines Corp., Koninklijke Philips NV, MaxQ AI, Medtronic Plc, Microsoft Corp., NVIDIA Corp., and Siemens Healthineers AG and others.

What's New? -

Special coverage on the Russia-Ukraine war; global inflation; recovery analysis from COVID-19; supply chain disruptions, global trade tensions; and risk of recession

Global competitiveness and key competitor positions

Market presence across multiple geographical footprints - Strong/Active/Niche/Trivial -Buy the report!

Company Offerings -

Ada Health GmbH:The company offers artificial intelligence in the healthcare sector such as Ada.

Alphabet Inc.:The company offers artificial intelligence in the healthcare sector such as Google Health.

Amazon.com Inc.:The company offers artificial intelligence in the healthcare sector such as Amazon HealthLake.

For details on the companyand its offerings Request a sample report

Artificial Intelligence (AI) In The Healthcare Sector Market - Segmentation Assessment

Segment Overview

Thisartificial intelligence (AI) in healthcare market report extensively coversmarket segmentation by application (medical imaging and diagnostics, drug discovery, virtual assistants, operations management, and others), component (software, hardware, and services), and geography (North America, Europe, APAC, South America, and Middle East and Africa).

Story continues

The market share growth by themedical imaging and diagnosticssegmentwill be significant during the forecast period.Medical imaging is the creation of a visual representation of the body or the functioning of organs or tissues for the purpose of clinical analysis and medical diagnosis. Medical imaging includes X-rays, CT scans, and magnetic resonance imaging. Managing high-resolution imaging data for treatment and diagnosis is a challenge, even for large healthcare facilities and experienced clinicians. In addition, the increasing use of medical imaging data and technological advancements such as AI in healthcare are contributing to the adoption of medical imaging in healthcare practice. Hence, such factors will increase segment growth during the forecast period.

Geography Overview

North Americais estimated tocontribute38%to the growth of the global market duringthe forecast period. The early adoption of the technology and the growing investment from market players such as Microsoft, Google, and IBM are indicative of the growing demand for AI in the region. The US is one of the top countries in the world in terms of the number of AI patents filed. The US and Canada together hold nearly 26% of all AI patent applications worldwide. IBM holds the majority of AI-related patents, followed by Microsoft and Google. Thus, such factors will drive the growth of the market in this region during the forecast period.

For insights on global, regional, and country-level parameters with growth opportunities from 2017 to 2027 -Download a Sample Report

Artificial Intelligence (AI) in the Healthcare Sector Market Market Dynamics

Leading Driver -

The growing demand for reduced healthcare costsis notably driving AI in healthcare market growth.

Optimizing the activities and resources of healthcare providers significantly reduces costs and increases efficiency. The experience of patients and healthcare professionals is improved through affordable, quality treatment and care.

AI can reduce traditional medical costs and improve treatment while allowing patients to meet their own healthcare needs through the use of virtual assistants such as doctors or chatbots that reduce considerable human labor.

Therefore, the demand to minimize healthcare costs will drive the AI market in the healthcare sector during the forecast period.

Key Trend-

The development of precision medicine is a key trend in the AI in healthcare market.

AI uses DL algorithms to process large data sets to understand human genes and identify biological factors that cause disease.

Drug development companies are increasingly using artificial intelligence to accelerate drug discovery in the healthcare sector.

Researchers and scientists are using AI to personalize disease prevention and treatment strategies by analyzing large sets of genetic databases.

In the area of precision medicine, AI is being used by healthcare providers and researchers as well as others such as medicinal product developers,and technology companies which will boost the growth of the market during the forecast period.

Major challenge-

Regulatory challenges to promote the safety and effectiveness of products are challenging AI in healthcare market growth.

AI is recognized as a complex term with more solutions, such as DL, neural networks, and different approaches to setting up each technology. Regulatory standards for software as medical devices (SAMD) have been developed over the past few years.

AI technologies must comply with regulations and data protection requirements in order to be adopted by providers and gain patients' trust. Regulatory compliance helps healthcare professionals to reduce the impact of bias and increase transparency.

Therefore, these regulatory challenges can reduce the adoption of AI in the healthcare sector, which will impede the growth of the AI market in the healthcare sector during the forecast period.

Driver, Trend & Challenges are the factor of market dynamics that states about consequences & sustainability of the businesses, find some insights from a sample report!

What are the key data covered in this Artificial Intelligence (AI) In Healthcare Sector Market report?

CAGR of the market during the forecast period

Detailed information on factors that will drive the growth of artificial intelligence (AI) in the healthcare sector market between 2023 and 2027

Precise estimation of the artificial intelligence (AI) in the healthcare sector market size and its contribution to the market in focus on the parent market

Accurate predictions about upcoming trends and changes in consumer behavior

Growth of artificial intelligence (AI) in the healthcare sector market across North America, Europe, APAC, South America, and Middle East and Africa

A thorough analysis of the market's competitive landscape and detailed information about companies

Comprehensive analysis of factors that will challenge the growth of artificial intelligence (AI) in the healthcare sector market companies

Gain instant access to 17,000+ market research reports.

Technavio's SUBSCRIPTION platform

Related Reports:

The Artificial Intelligence (AI) in Asset Management Market size is estimated to grow at aCAGR of 37.88%between 2022 and 2027. Themarket size is forecast to increase byUSD 10,373.18 million. Furthermore, this Artificial Intelligence (AI) in Asset Management Market report extensively coversmarket segmentation by deployment (on-premises and cloud), industry application (BFSI, retail and e-commerce, healthcare, energy and utilities, and others), and geography (North America, Europe, APAC, Middle East and Africa, and South America).The rapidadoption of artificial intelligence in asset management and the growing importance of asset tracking is notably driving the market growth during the forecast period.

TheGenerative AIMarket sizeis estimated to grow at aCAGR of 32.65%between 2022 and 2027. Themarket size is forecast to increase byUSD34,695.37 million. Furthermore, thisgenerative AI market report extensively coversmarket segmentation by component (software and services), technology (transformers, generative adversarial networks (GANs), variational autoencoder (VAE), and diffusion networks), and geography (North America, APAC, Europe, South America, and Middle East and Africa).

Artificial Intelligence (AI) In Healthcare Sector Market Scope

Report Coverage

Details

Historic period

2017-2021

Forecast period

2023-2027

Growth momentum & CAGR

Accelerate at a CAGR of 23.5%

Market growth 2023-2027

USD 11,827.99 million

Market structure

Fragmented

YoY growth 2022-2023(%)

21.73

Regional analysis

North America, Europe, APAC, South America, and Middle East and Africa

Performing market contribution

North America at 38%

Key countries

US, China, Japan, Germany, and UK

Competitive landscape

Leading Companies, Market Positioning of Companies, Competitive Strategies, and Industry Risks

Key companies profiled

Ada Health GmbH, Alphabet Inc., Amazon.com Inc., Atomwise Inc., BenchSci Analytics Inc., CarePredict Inc., Catalia Health, Cyclica, Deep Genomics Inc., Entelai, Exscientia PLC, General Electric Co., Intel Corp., International Business Machines Corp., Koninklijke Philips NV, MaxQ AI, Medtronic Plc, Microsoft Corp., NVIDIA Corp., and Siemens Healthineers AG

Market dynamics

Parent market analysis, Market growth inducers and obstacles, Fast-growing and slow-growing segment analysis, COVID-19 impact and recovery analysis and future consumer dynamics, Market condition analysis for forecast period.

Customization purview

If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized.

Tbale of Contents

1 Executive Summary

2 Market Landscape

3 Market Sizing

4 Historic Market Size

5 Five Forces Analysis

6 Market Segmentation by Application

7 Market Segmentation by Component

8 Customer Landscape

9 Geographic Landscape

10 Drivers, Challenges, and Trends

11 Vendor Landscape

12 Vendor Analysis

13 Appendix

About Us

Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

ContactTechnavio ResearchJesse MaidaMedia & Marketing ExecutiveUS: +1 844 364 1100UK: +44 203 893 3200Email: media@technavio.comWebsite: http://www.technavio.com

Global Artificial Intelligence (AI) in Healthcare Sector Market 2023-2027

Cision

View original content to download multimedia:https://www.prnewswire.com/news-releases/artificial-intelligence-ai-in-the-healthcare-sector-market-to-grow-by-usd-11-827-99-million-from-2022-to-2027-the-growing-demand-for-reduced-healthcare-costs-is-notably-driving-market-growth--technavio-301898991.html

Link:

Artificial intelligence (AI) in the healthcare sector market to Grow by USD 11,827.99 million from 2022 to 2027: The growing demand for reduced...

Read More..

Artificial intelligence, Boldly Elon accomplishments are the center of … – Today at Elon

With recent rapid advances in generative artificial intelligence, higher education institutions have increased their focus on the potential benefits and also moral dilemmas stemming from its use on campus.

Its been a topic on the mind of President Connie Ledoux Book, and on Monday, Aug. 14, in Alumni Gym, she officially opened the new academic year with the innovative technology front and center. On the screens in the venue, an AI-generated message using Books likeness began to welcome the campus back for another school year when the real President Book interrupted and told the audience that the video was an imitation.

On Nov. 30, 2022 we entered a new digital space with the launch of ChatGPT, the first in a series of applications and programs that put generative AI into the hands of all of us. New tools that leverage content at scales beyond human capacity to create content to do with as we choose, Book said. So, what we choose to do with it is critically important to all of us and for all of us.

AI will be the centerpiece of Elons proposal to the United Nations meeting of the Internet Governance Forum in October in Kyoto, Japan, Book said. Elons Center for Imagining the Internet will present a set of global principles designed to guide a healthy collaboration between AI and higher education.

While the list of principles is growing, there has been consensus for these six: people, not technology, must be at the center of our work; we should promote digital inclusion within and beyond our institutions; digital and information literacy is an essential part of a core education; all AI tools should enhance teaching and learning; learning about technologies is an experimental, lifelong process; and AI research and development must be done responsibly.

A working group that includes faculty members Haya Ajjan, Mustafa Akben and Paula Rosinski spent this summer working to understand the potential impact of AI at Elon and prepare the overall community for opportunities to embrace the technology and guard against its misuse. The group collected data from more than 300 community, faculty and staff members, carried out a policy analyses benchmark of peer and aspirant institutions, and created a preliminary report.

Akben, assistant professor of management, provided a snapshot during the Opening Day remarks on what the group has discovered. One element that was consistent in the findings were the strong mixed reaction AI provokes in the higher education community. One-half of people believe that this technology can accelerate scientific findings, increase worker productivity, and further provide adaptive learning spaces for students while enhancing their educational and learning outcomes. The other half raised concerns arguing that this technology undermines the very purpose of higher education, stifles creative and critical thinking which compromises academic integrity.

One irrefutable fact is: this technology is here, Akben said. Given this context and Elons position in the leadership and educational landscape, it is imperative for us to investigate the effect of these technologies on our campus and our learning environment.

However, there is one almost uniform voice among participants [which] indicated that consideration related to AI is an important next step that Elon should take which means we need to prepare and understand this technology much more carefully, he added.

In response to these findings, there are multiple AI-focused discussion sessions throughout Planning Week open for the community to attend and share their thoughts.

In the fall edition of the Magazine of Elon, President Book focused her column on how new technology is often challenged as it is introduced to the masses. But it is how we use our values to leverage that technology that should be judged, Book said.

Where human intelligence and artificial intelligence overlap is a future space of great hope that this new technology can be used to improve the quality of life for all of us, Book said. It is critical that Elon be a place where an education can drive the future use of AI so that students use this tool to problem solve the great challenges of health care, clean water, sustainable energy, just to name a few.

President Book also used her Opening Day address to highlight the many accomplishments of Phase I of theBoldly Elon strategic plan, which was launched in 2020. The 10-year plan was built in three phases and the first phase will wrap up in December. Some of the key goals accomplished were the addition of more than 90 full-time positions since 2020 as a commitment to student learning, staff mentors and support to campus operations. New curricular areas in engineering, nursing, business analytics and STEM have been launched on campus.

To support STEM, Elon has invested in new facilities and renovated existing ones to keep the institution on the cutting edge. This fall, Elon will welcome more than 170 nursing students on campus and one of the largest first-year Elon Law cohorts in the programs history.

The university adopted a new Multifaith Strategic Plan in the spring and is working to create student opportunities to explore faith and purpose.

HealthEU, a comprehensive effort to support the holistic health and well-being of students, faculty and staff was launched one year ago and the initiative has received significant improvements since. With investments in counseling, workshops and 2/7 virtual care through TimelyCare, HealthEU will continue to build on its mission to support the Elon community.

HealthEU was launched and makes visible that our mission to educate, must include preparing students and our community of faculty and staff for lifelong well-being and resourcing those practices, Book said.

Book welcomed new leadership at Elon with Dean Kenn Gaither in the School of Communications, Dean Maha Lund in the School of Health Sciences and Dean Zak Kramer in the Elon University School of Law all assuming their new roles this summer. Director of Athletics Jenn Strawley has also joined and will lead Phoenix Athletics into a new era.

The Inn at Elon has generated over $2 million in scholarships for nearly 200 Elon students since its opening. I love it when strategy works and the vision is realized, Book said.

Phase II of the Boldly Elon strategic plan has already begun and plans to improve residential efforts on campus with a sustainable living-learning community (LLC) at Loy Farm consisting of 12 sustainable residences. In East Neighborhood, a new commons building providing 90 residential rooms will also be underway.

Phase II of the Boldly Elon plan also includes a new Study USA program with sport management majors will open this fall in Charlotte, one of the nations fastest-growing metros. Twelve students will intern with professional sports teams and take classes as part of an immersive semester.

But the focal point for Phase II is cultivating meaningful relationships and mentorship opportunities, with Professor of Psychology Buffie Longmire-Avital and Director of New Student Programs Emily Krechel leading those efforts.

Longmire-Avital told those in attendance at the Opening Day ceremony that the 35 members of Mentoring Design Team have adopted a meaningful relationships framework to improve accessibility to meaningful relationships and build these critical skills starting from day one.

A meaningful relationships framework holistically embraces all the significant connections that make individuals of our Elon community feel valued, Longmire-Avital said.

Krechel said there are several pilots launched to support this initiative including developing awareness to recognize meaningful relationships, integrating meaningful relationship learning outcomes into signature first-year experiences and supporting retention efforts by connecting students with student success.

We think these next steps will provide a robust foundation for the continuation of this transformational work and will move us toward our strategic goal of becoming a national leader for meaningful relationships and mentoring, Krechel said.

Before the presidents Opening Day remarks, two Elon Medallions and the official awarding of five endowed professorships were announced.

The Elon Medallion is Elon Universitys highest honor. Elon Medallion recipients during the 2023-24 Opening Day ceremony were:

Faculty receiving professorships were:

Continue reading here:

Artificial intelligence, Boldly Elon accomplishments are the center of ... - Today at Elon

Read More..

Beijing Tries to Regulate Chinas AI Sector Without Crushing It – Yahoo Finance

(Bloomberg) -- Beijing is poised to implement sweeping new regulations for artificial intelligence services this week, trying to balance state control of the technology with enough support that its companies can become viable global competitors.

Most Read from Bloomberg

The government issued 24 guidelines that require platform providers to register their services and conduct a security review before theyre brought to market. Seven agencies will take responsibility for oversight, including the Cyberspace Administration of China and the National Development and Reform Commission.

The final regulations are less onerous than an original draft from April, but they show China, like Europe, moving ahead with government oversight of what may be the most promising and controversial technology of the last 30 years. The US, by contrast, has no legislation under serious consideration even after industry leaders warned that AI poses a risk of extinction and OpenAIs Sam Altman urged Congress in public hearings to get involved.

China got started very quickly, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who is writing a series of research papers on the subject. It started building the regulatory tools and the regulatory muscles, so theyre going to be more ready to regulate more complex applications of the technology.

Chinas regulations go beyond anything contemplated in Western democracies. But they also include practical steps that have support in places like the US.

Beijing, for example, will mandate conspicuous labels on synthetically created content, including photos and videos. Thats aimed at preventing deceptions like an online video of Nancy Pelosi that was doctored to make her appear drunk. China will also require any company introducing an AI model to use legitimate data to train their models and to disclose that data to regulators as needed. Such a mandate may placate media companies that fear their creations will be co-opted by AI engines. Additionally, Chinese companies must provide a clear mechanism for handling public complaints about services or content.

Story continues

While the US historically hands-off approach to regulation gave Silicon Valley giants the space to become global juggernauts, that strategy holds serious dangers with generative AI, said Andy Chun, an artificial intelligence expert and adjunct professor at the City University of Hong Kong.

AI has the potential to profoundly change how people work, live, and play in ways we are just beginning to realize, he said. It also poses clear risks and threats to humanity if AI development proceeds without adequate oversight.

In the US, federal lawmakers have proposed a wide range of AI regulations but efforts remain in the early stages. The US Senate has held several AI briefings this summer to help members come up to speed on the technology and its risks before pursuing regulations.

In June, the European Parliament passed a draft of the AI Act, which would impose new guardrails and transparency requirements for artificial intelligence systems. The parliament, EU member states and European Commission must negotiate final terms before the legislation becomes law.

Beijing has spent years laying the groundwork for the rules that take effect Tuesday. The State Council, the countrys cabinet, put out an AI roadmap in 2017 that declared development of the technology a priority and laid out a timetable for putting government regulations in place.

Agencies like the CAC then consulted with legal scholars such as Zhang Linghan from the China University of Political Science and Law about AI governance, according to Sheehan. As Chinas draft guidelines on generative AI evolved into the latest version, there were months of consultation between regulators, industry players and academics to balance legislation and innovation. That initiative on Beijings part is driven in part by the strategic importance of AI, and the desire to gain a regulatory edge over other governments, said You Chuanman, director of the Institute for International Affairs Center for Regulation and Global Governance at the Chinese University of Hong Kong in Shenzhen.

Now, Chinas biggest AI players, from Baidu Inc. to Alibaba Group Holding and SenseTime Group Inc., are getting to work. Beijing has targeted AI as one of a dozen tech priorities and, after a two-year regulatory crackdown, the government is seeking private sector help to prop up the flagging economy and compete with the US. After the introduction of ChatGPT set off a global AI frenzy, leading tech executives and aspiring entrepreneurs are pouring billions of dollars into the field.

In the context of fierce global competition, lack of development is the most unsafe thing, Zhang, the scholar from China University of Political Science and Law, wrote about the guidelines.

In a flurry of activity this year, Alibaba, Baidu and SenseTime all showed off AI models. Xu Li, chief executive officer of SenseTime, pulled off the flashiest presentation, complete with a chatbot that writes computer code from prompts either in English or Chinese.

Still, Chinese companies trail global leaders like OpenAI and Alphabets Google. They will likely struggle to challenge such rivals, especially if American companies are regulated by no one but themselves.

China is trying to walk a tightrope between several different objectives that are not necessarily compatible, said Helen Toner, a director at Georgetowns Center for Security and Emerging Technology. One objective is to support their AI ecosystem, and another is to maintain social control and maintain the ability to censor and control the information environment in China.

In the US, OpenAI has shown little control over information even if its dangerous or inaccurate. Its ChatGPT made up fake legal precedents and provided bomb-building instructions to the public. A Georgia radio host claims the bot generated a false complaint that accused him of embezzling money.

In China, companies have to be much more careful. This February, the Hangzhou-based Yuanyu Intelligence pulled the plug on its ChatYuan service only days after launch. The bot had called Russias attack on Ukraine a war of aggression in contravention of Beijings stance and raised doubts about Chinas economic prospects, according to screenshots that circulated online.

Now the startup has abandoned a ChatGPT model entirely to focus on an AI productivity service called KnowX. Machines cannot achieve 100% filtering, said Xu Liang, head of the company. But what you can do is to add human values of patriotism, trustworthiness, and prudence to the model.

Beijing, with its authoritarian powers, plays by different rules than Washington. When Chinese agencies reprimand and fine tech companies, the corporations cant fight back and often publicly thank the government for its oversight. In the US, Big Tech hires armies of lawyers and lobbyists to contest almost any regulatory action. Alongside the robust public debate among stakeholders, this will make it difficult to install effective AI regulations, said Aynne Kokas, associate professor of media studies at the University of Virgina.

In China, AI is beginning to make its way into the sprawling censorship regime that keeps the countrys internet scrubbed of taboo and controversial topics. That doesnt mean it is easy, technically speaking. One of the most attractive innovations of ChatGPT and similar AI innovations is its unpredictability or its own innovation beyond our human intervention, You, from the Chinese University of Hong Kong, said. In many cases, its beyond control of the platform service providers.

Some Chinese tech companies are using two-way keyword filtering, using one large language model to ensure that another LLM is scrubbed of any controversial content. One tech startup founder, who declined to be named due to political sensitivities, said the government will even do spot-checks on how AI services are labeling data.

What is potentially the most fascinating and concerning time-line is the one where censorship happens through new large language models developed specifically as censors, said Nathan Freitas, a fellow at Harvard Universitys Berkman Klein Center for Internet and Society.

The European Union may be the most progressive in protecting individuals from such overreach. The draft law passed in June ensures privacy controls and curbs the use of facial recognition software. The EU proposal would also require companies to perform some analysis of the risks their services entail, for, say, health systems or national security.

But the EUs approach has drawn objections. OpenAIs Altman suggested his company may cease operating within countries that implement overly onerous regulations.

One thing Washington can learn from Chinese regulators is to be targeted and iterative, Sheehan said. Build these tools that they can keep improving as they keep regulating.

--With assistance from Emily Cadman, Alice Truong and Seth Fiegerman.

Most Read from Bloomberg Businessweek

2023 Bloomberg L.P.

Go here to read the rest:

Beijing Tries to Regulate Chinas AI Sector Without Crushing It - Yahoo Finance

Read More..

How FraudGPT presages the future of weaponized AI – VentureBeat

Head over to our on-demand library to view sessions from VB Transform 2023. Register Here

FraudGPT, a new subscription-based generative AI tool for crafting malicious cyberattacks, signals a new era of attack tradecraft. Discovered by Netenrichs threat research team in July 2023 circulating on the dark webs Telegram channels, it has the potential to democratize weaponized generative AI at scale.

Designed to automate everything from writing malicious code and creating undetectable malware to writing convincing phishing emails, FraudGPT puts advanced attack methods in the hands of inexperienced attackers.

Leading cybersecurity vendors including CrowdStrike, IBM Security, Ivanti, Palo Alto Networks and Zscaler have warned that attackers, including state-sponsored cyberterrorist units, began weaponizing generative AI even before ChatGPT was released in late November 2022.

VentureBeat recently interviewed Sven Krasser, chief scientist and senior vice president at CrowdStrike, about how attackers are speeding up efforts to weaponize LLMs and generative AI. Krasser noted that cybercriminals are adopting LLM technology for phishing and malware, but that while this increases the speed and the volume of attacks that an adversary can mount, it does not significantly change the quality of attacks.

VB Transform 2023 On-Demand

Did you miss a session from VB Transform 2023? Register to access the on-demand library for all of our featured sessions.

Krasser says that the weaponization of AI illustrates why cloud-based security that correlates signals from across the globe using AI is also an effective defense against these new threats. Succinctly put: Generative AI is not pushing the bar any higher when it comes to these malicious techniques, but it is raising the average and making it easier for less skilled adversaries to be more effective.

FraudGPT, a cyberattackers starter kit, capitalizes on proven attack tools, such as custom hacking guides, vulnerability mining and zero-day exploits. None of the tools in FraudGPT requires advanced technical expertise.

For $200 a month or $1,700 a year, FraudGPT provides subscribers a baseline level of tradecraft a beginning attacker would otherwise have to create. Capabilities include:

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesnt reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Armys elite Reconnaissance General Bureaus cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in ability to train the next generation of attackers.

With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to theNew York Times 1,700 hackers in seven different units and 5,100 technical support personnel.

While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as in education, healthcare and manufacturing.

As Netenrich principal threat hunter John Bambenek told VentureBeat, FraudGPT has probably been built by taking open-source AI models and removing ethical constraints that prevent misuse. While it is likely still in an early stage of development, Bambenekwarns that its appearance underscores the need for continuous innovation in AI-powered defenses to counter hostile use of AI.

Given the proliferating number of generative AI-based chatbots and LLMs, red-teaming exercises are essential for understanding these technologies weaknesses and erecting guardrails to try to prevent them from being used to create cyberattack tools. Microsoft recently introduced a guide for customers building applications using Azure OpenAI models that provides a framework for getting started with red-teaming.

This past week DEF CON hosted the first public generative AI red team event, partnering with AI Village, HumaneIntelligenceand SeedAI. Models provided byAnthropic, Cohere, Google, Hugging Face, Meta, Nvidia, OpenAI and Stabilitywere tested on an evaluation platform developed byScale AI. Rumman Chowdhury, cofounder of the nonprofit Humane Intelligence and co-organizer of this Generative Red Team Challenge, wrote in a recent Washington Post article on red-teaming AI chatbots and LLMs that every time Ive done this, Ive seen something I didnt expect to see, learned something I didnt know.

It is crucial to red-team chatbots and get ahead of risks to ensure these nascent technologies evolve ethically instead of going rogue. Professional red teams are trained to find weaknesses and exploit loopholes in computer systems. But with AI chatbots and image generators, the potential harms to society go beyond security flaws, said Chowdhury.

Generative AI-based cyberattack tools are driving cybersecurity vendors and the enterprises they serve to pick up the pace and stay competitive in the arms race. As FraudGPT increases the number of cyberattackers and accelerates their development, one sure result is that identities will be even more under siege.

Generative AI poses a real threat to identity-based security. It has already proven effective in impersonating CEOs with deep-fake technology and orchestrating social engineering attacks to harvest privileged access credentials using pretexting. Here are five ways FraudGPT is presaging the future of weaponized AI:

FraudGPT demonstrates generative AIs ability to support convincing pretexting scenarios that can mislead victims into compromising their identities and access privileges and their corporate networks. For example, attackers ask ChatGPT to write science fiction stories about how a successful social engineering or phishing strategy worked, tricking the LLMs into providing attack guidance.

VentureBeat has learned that cybercrime gangs and nation-states routinely query ChatGPT and other LLMs in foreign languages such that the model doesnt reject the context of a potential attack scenario as effectively as it would in English. There are groups on the dark web devoted to prompt engineering that teaches attackers how to side-step guardrails in LLMs to create social engineering attacks and supporting emails.

While it is a challenge to spot these attacks, cybersecurity leaders in AI, machine learning and generative AI stand the best chance of keeping their customers at parity in the arms race. Leading vendors with deep AI, ML and generative AI expertise include ArticWolf, Cisco, CrowdStrike, CyberArk, Cybereason, Ivanti, SentinelOne, Microsoft, McAfee, Palo Alto Networks, Sophos and VMWare Carbon Black.

FraudGPT has proven capable of generating malicious scripts and code tailored to a specific victims network, endpoints and broader IT environment. Attackers just starting out can get up to speed quickly on the latest threatcraft using generative AI-based systems like FraudGPT to learn and then deploy attack scenarios. Thats why organizations must go all-in on cyber-hygiene, including protecting endpoints.

AI-generated malware can evade longstanding cybersecurity systems not designed to identify and stop this threat. Malware-free intrusion accounts for 71% of all detections indexed by CrowdStrikes Threat Graph, further reflecting attackers growing sophistication even before the widespread adoption of generative AI. Recent new product and service announcements across the industry show what a high priority battling malware is. Amazon Web Services, Bitdefender, Cisco, CrowdStrike, Google, IBM, Ivanti, Microsoft and Palo Alto Networks have released AI-based platform enhancements to identify malware attack patterns and thus reduce false positives.

Generative AI will shrink the time it takes to complete manual research to find new vulnerabilities, hunt for and harvest compromised credentials, learn new hacking tools and master the skills needed to launch sophisticated cybercrime campaigns. Attackers at all skill levels will use it to discover unprotected endpoints, attack unprotected threat surfaces and launch attack campaigns based on insights gained from simple prompts.

Along with identities, endpoints will see more attacks. CISOs tell VentureBeat that self-healing endpoints are table stakes, especially in mixed IT and operational technology (OT) environments that rely on IoT sensors. In a recent series of interviews, CISOs told VentureBeat that self-healing endpoints are also core to their consolidation strategies and essential for improving cyber-resiliency. Leading self-healing endpoint vendors with enterprise customers includeAbsoluteSoftware,Cisco,CrowdStrike, Cybereason, ESET,Ivanti,Malwarebytes,MicrosoftDefender365,Sophos andTrendMicro.

Weaponized generative AI is still in its infancy, and FraudGPT is its baby steps. More advanced and lethal tools are coming. These will use generative AI to evade endpoint detection and response systems and create malware variants that can avoid static signature detection.

Of the five factors signaling the future of weaponized AI, attackers ability to use generative AI to out-innovate cybersecurity vendors and enterprises is the most persistent strategic threat. Thats why interpreting behaviors, identifying anomalies based on real-time telemetry data across all cloud instances and monitoring every endpoint are table stakes.

Cybersecurity vendors must prioritize unifying endpoints and identities to protect endpoint attack surfaces. Using AI to secure identities and endpoints is essential. Many CISOs are heading toward combining an offense-driven strategy with tech consolidation to gain a more real-time, unified view of all threat surfaces while making tech stacks more efficient. Ninety-six percent of CISOs plan to consolidate their security platforms, with 63% saying extended detection and response (XDR) is their top choice for a solution.

Leading vendors providing XDR platforms include CrowdStrike, Microsoft,PaloAltoNetworks,Tehtris andTrendMicro. Meanwhile, EDR vendors are accelerating their product roadmaps to deliver new XDR releases to stay competitive in the growing market.

FraudGPT and future weaponized generative AI apps and tools will be designed to reduce detection and attribution to the point of anonymity. Because no hard coding is involved, security teams will struggle to attribute AI-driven attacks to a specific threat group or campaign based on forensic artifacts or evidence. More anonymity and less detection will translate into longer dwell times and allow attackers to execute low and slow attacks that typify advanced persistent threat (APT) attacks on high-value targets. Weaponized generative AI will make that available to every attacker eventually.

SecOps and the security teams supporting them need to consider how they can use AI and ML to identify subtle indicators of an attack flow driven by generative AI, even if the content appears legitimate. Leading vendors who can help protect against this threat include Blackberry Security (Cylance), CrowdStrike, Darktrace, Deep Instinct, Ivanti, SentinelOne, Sift and Vectra.

FraudGPT signals the start of a new era of weaponized generative AI, where the basic tools of cyberattack are available to any attacker at any level of expertise and knowledge. With thousands of potential subscribers, including nation-states, FraudGPTs greatest threat is how quickly it will expand the global base of attackers looking to prey on unprotected soft targets in education, health care, government and manufacturing.

With CISOs being asked to get more done with less, and many focusing on consolidating their tech stacks for greater efficacy and visibility, its time to think about how those dynamics can drive greater cyber-resilience. Its time to go on the offensive with generative AI and keep pace in an entirely new, faster-moving arms race.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

See more here:

How FraudGPT presages the future of weaponized AI - VentureBeat

Read More..

Unleashing the AI Imagination: A Global Overview of Generative AI … – JD Supra

This article discusses the latest developments of legislations on Generative AI in the United States (U.S.), Europe (EU), the United Kingdom (UK) and the Peoples Republic of China (China or the PRC).

United StatesCongressional leaders are intensifying efforts to develop legislation directing agency regulation of AI technology. In June, Senate Majority Leader Chuck Schumer (D-NY) publicly announced the SAFE Innovation Framework, which sets priorities for AI legislation, focusing on: security, accountability, protecting our foundations and explainability. The frameworks goal is to deliver security without compromising innovation. Although passage in 2023 is uncertain, we can expect that Congress will continue to introduce legislation, hold hearings and deploy the AI Forums throughout the rest of the session, giving industry several opportunities to engage with their representatives and senators before legislation is enacted.

Also, at a May 16 hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, several senators indicated support for the creation of a new federal agency dedicated to regulating AI, which could include licensing activities for AI technology. There were also calls for creating an international AI regulatory body modeled after the International Atomic Energy Agency.

On May 23, the White House revealed three new steps advancing the research, development and deployment of AI technology nationwide. In addition, the Office of Science and Technology Policy (OSTP) completed a public comment period soliciting input to develop a comprehensive National AI Strategy, focused on promoting fairness and transparency in AI while maximizing AI benefits. The results of the public feedback will be made public and move OSTP to the next stage of developing the National AI Strategy.

U.S. federal agencies are also engaging with AI as it intersects with their respective jurisdictional and legislative authority, often issuing guidance to regulated entities, explaining how the agency will apply existing law to any AI violations. For example, the Federal Trade Commission (FTC) has been active in regulating deceptive and unfair practices attributed to AI, particularly enforcing statutes such as the Fair Credit Reporting Act, Equal Credit Opportunity Act and FTC Act.

European UnionThe EU has also made steady progress in shaping its proposed AI law, known as the AI Act, which has entered the final stage of the legislative process. The aim is to agree to a final version of the law by the end of 2023, after which there will likely be a 24-month transition period before it applies.

The proposed AI Act classifies AI usage based on risk levels, prohibiting certain uses (for example, real-time biometric identification surveillance systems used in public places and subliminal techniques which may cause harm) and imposing stringent monitoring and disclosure requirements for high-risk applications compared to lower-risk ones.

The EUs objective is to ensure that AI developed and utilized within Europe aligns with the regions values and rights, including ensuring human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being. The proposed penalties for breach could be as high as 7% of global revenue or 40 million.

For more detailed analysis on the AI Act, see here.

Further, on September 28, 2022, the European Commission proposed a new law, known as the AI Liability Directive, aimed at adapting non-contractual civil liability rules to AI. The proposed law (which is closely tied to the AI Act) aims to establish uniform rules for damages caused by AI systems, providing broader protection for victims and fostering the AI sector by increasing guarantees. It will address specific difficulties of proof linked with AI, requiring EU Member States to empower national courts to order the disclosure of relevant evidence about specific high-risk AI systems. The proposed law will impact both users and developers of AI systems, providing clarity for developers about accountability in the event of an AI system failure, and facilitating compensation recovery for victims of crimes associated with AI systems. The negotiations on the new law are ongoing, and it is not yet clear when it will be adopted.

United KingdomOn March 29, 2023, the UK government released a white paper outlining its pro-innovation approach to AI regulation. Rather than creating new laws or a separate AI regulator, as things stand, existing sectoral regulators will be empowered to regulate AI in their respective sectors. The focus is on enhancing existing regimes to cover AI and avoiding heavy-handed legislation that could hinder innovation.

The proposed regulatory framework outlined in the white paper defines AI based on two key characteristics, namely adaptivity and autonomy. The white paper holds that by defining AI with reference to these characteristics and designing the regulatory framework to address the challenges created by these characteristics, UK lawmakers can future proof the framework against unanticipated new technologies.

The white paper also sets out five values-focused cross-sectorial principles that regulators should adhere to when addressing the risks associated with AI. The principles include: (i) safety, security and robustness, (ii) appropriate transparency and explainability, (iii) fairness, (iv) accountability and governance, and (v) contestability and redress.

The principles build on, and reflect the UK governments commitment to, the Organisation for Economic Cooperation and Developments (OECD) values-based AI principles, which promote the ethical use of AI. The aim of the principles-based approach is to allow the framework to be agile and proportionate. While not legally binding at the outset, the UK government anticipates that the principles may become enforceable in the future, depending on the evolving landscape of AI technology and its societal impact.

In addition to these principles, the UK government plans to establish central functions to support regulators in their AI oversight roles and make sure the regulatory framework operates proportionately and supports innovation. The white paper is silent on which specific entity or entities will undertake these central functions.

The white paper accompanies the investment of 2 million by the UK government to fund a new sandbox to enable AI innovators to test new AI products prior to market launch. The sandbox will enable businesses to test how AI regulations could apply to their products.

Following publication of the white paper, the UK government will continue to work with businesses and regulators as it starts to establish the central functions identified. The UK government will publish an AI regulatory roadmap alongside its response to the consultation on this white paper. In the long term, 12 months or more after the publication of the white paper, the UK government plans to implement all central functions, support regulators in applying cross-sectoral principles, publish a draft AI risk register, develop a regulatory sandbox, and release a monitoring and evaluation report to assess the frameworks performance.

ChinaOn July 13, 2023, the Cyberspace Administration of China (CAC) issued the final version of the Interim Administrative Measures for Generative Artificial Intelligence Service (PRC AI Regulations). Generative AI services provided to the public within the PRC fall within the scope of these regulations, which primarily address content generation using AI technology (Generative AI Services). The PRC AI Regulations explicitly exclude from their scope non-public service providers, such as industry organizations, enterprises, academic and research institutions, and public cultural institutions engaged in research, development and application of generative AI technology.

The PRC AI Regulations introduce significant obligations for providers of Generative AI Services. These requirements include monitoring and controlling content generated by their services. Providers are required to promptly remove any illegal content, take actions against users engaged in illegal activities, and report to the authorities. Additionally, the providers must mark generated content with appropriate labels and use legitimate sources for data training while respecting intellectual property rights and obtaining consent for personal information processing. Reiterating the existing cybersecurity and personal privacy rules in China, the PRC AI Regulations mandate protecting the users personal information, prohibiting illegal collection and sharing of identifiable data.

China is also likely to adopt an industry-oriented regulatory model, with different governmental departments regulating Generative AI Services within their specific fields. Industry-specific AI regulations and classification guidelines are expected to be introduced.

The PRC AI Regulations are the latest addition to the Administrative Provisions on Algorithm Recommendation for Internet Information Services (Algorithm Provisions, effective as of March 1, 2022) and Administrative Provisions on Deep Synthesis of Internet Information Services (Deep Synthesis Provisions, effective as of January 10, 2023).

The Algorithm Provisions apply to any entity that uses algorithm recommendation technologies (including without limitation technologies for generation and synthesis, personalized push, sorting and selection, retrieval and filtering, scheduling decision-making) to provide internet information services within mainland China. The Algorithm Provisions, among others, require an algorithm recommendation service provider (which could cover a Generative AI services provider) with a public opinion attribute or social mobilization ability to carry out a safety assessment in accordance with the application regulations and complete online record-filing formalities within 10 working days from the date it starts to provide services.

The Deep Synthesis Provisions regulate the provision of internet information services in mainland China by applying deep synthesis technologies, which is defined as technologies that use generative sequencing algorithms, such as deep learning and virtual reality, to create text, images, audio, video, virtual scenes, or other information. The Deep Synthesis Provisions set out a comprehensive set of responsibilities for deep synthesis service providers and technical supporters concerning data security and personal information protection, transparency, content management and labeling, technical security, etc. A Generative AI service provider is required to add a mark on content (pictures, videos and other content generated by Generative AI Services) according to the Deep Synthesis Provisions.

Overall, together with the existing cybersecurity and data privacy rules in China, the PRC AI Regulations aim to establish a framework for responsible and transparent use of Generative AI Services, imposing significant responsibilities while offering some flexibility to the service providers. The Chinese authorities have placed more emphasis on industrial policies encouraging AI innovations and massive industrial applications rather than restricting the development of AI technologies, which is also reflected in the PRC AI Regulations.

ConclusionThe global landscape of AI governance features diverse strategies. The EU focuses on sector-specific regulation, the United States opts for a decentralized approach featuring federal guidance with local adaptations, and China prioritizes consumer transparency and global AI standards dominance. Companies will need to develop global positions on AI ethics and compliance for their products in order to comply with new regulations

[View source.]

See the rest here:

Unleashing the AI Imagination: A Global Overview of Generative AI ... - JD Supra

Read More..

New IBM study reveals how AI is changing work and what HR … – IBM

The rise of generative AI has surfaced many new questions about how the technology will impact the workforce. Even as AI becomes more pervasive in business, people are still a core competitive advantage. But business leaders are facing a host of talent-related challenges, as a new global study from the IBM Institute for Business Value (IBV) reveals, from the skills gap to shifting employee expectations to the need for new operating models.

The global skills gap is real and growing. Executives surveyed estimate that 40% of their workforce will need to reskill as a result of implementing AI and automation over the next three years. That could translate to 1.4 billion of the 3.4 billion people in the global workforce, according to World Bank statistics. Respondents also report that building new skills for existing employees is a top talent issue.

AIs impact will vary across employee groups. Workers at all levels could feel the effects of generative AI, but entry-level employees are expected to see the biggest shift. Seventy-seven percent of executive respondents say entry-level positions are already seeing the effects of generative AI and that will intensify in the next few years. Only 22% of respondents report the same for executive or senior management roles.

AI can open up more possibilities for employees by enhancing their capabilities. In fact, 87% of executives surveyed believe employees are more likely to be augmented than replaced by generative AI. That varies across functions 97% of executives think employees in procurement are more likely to be augmented than replaced, compared to 93% for employees in risk and compliance, 93% for finance, 77% for customer service and 73% for marketing.

Employees care more about doing meaningful work than flexibility and growth opportunities, but leaders arent always in lockstep with their needs. With AI primed to take on more manual and repetitive tasks, employees surveyed report engaging in impactful work is the top factor they care about beyond compensation and job securitymore important than flexible work arrangements, growth opportunities and equity. On top of that, nearly half of employees surveyed believe the work they do is far more important than who they work for or who they work with regularly.

However, employers seem to have missed the memo about what matters. Executives surveyed said impactful work was the least important factor to their employees, instead pointing to flexible work arrangements as the most important attribute beyond compensation and job security.

The world of work has changed compared to even six months ago. Leaders are starting to believe that the enterprise of tomorrow may not be able to run with yesterdays talent and tomorrows talent may not be able be rely on yesterdays ways of working.

HR leaders can play a critical role in how organizations adapt to the changes driven by generative AI. These leaders can be at the helm of navigating these challenges, redesigning work and operating models to shepherd their organizations into the future. Here are a few actions to consider.

Were at a pivotal point in the world of work and theres a massive opportunity in front of HR leaders, but there are risks as well. As businesses further embrace AI, successful change will only come if organizationsby way of HR leadersprioritize a new approach to talent and operating models where people and technology come together to boost productivity and drive business value.

Managing Partner, Talent Transformation Consulting

Visit link:

New IBM study reveals how AI is changing work and what HR ... - IBM

Read More..

Unlocking the potential of AI data modeling within CPG – Supermarket News

Rich Wagner is chief product officer and founder of Prevedere, a predictive analytics company that helps enterprises createaccurate forecast models by incorporating the global economic leading indicators. Rich brings over 20 years of technology, innovation, and leadership experience from Big 4 consulting and Fortune 500 companies.

Strategic growth in the face of economic turbulence is difficult at best. Even three years since the start of the pandemic and the accompanying economic concern, business leaders continue to function under layers of uncertainty. Looking at 2023, the first half of the year did not economically perform as anyone expected, and lingering doubts ofconsumer resilience are casting a shadow around the second half.

As we come face to face with an economic turning point, strategic executives are turning to AI modeling and machine learning technology to bring them back to proactive planning and away from reactive navigation. However, a recent survey by tech services company Accenture found that only 16% of CFOs are harnessing the power of real-time economic data in their business planning.

Just as executives are turning to generative AI to help solve costly productivity issues, executives must also turn to AI modeling and machine learning for business planning. Custom insights that combine their business historic performance data with real-time economic data allow them to create optimistic and pessimistic financial guardrails for the future.

With these insights at their fingertips, executives are able to navigate turbulence with grace, recovering quicker and seizing opportunity with more success.

The importance of external economic data

Traditionally, demand forecasting relies only on the historic data of a company in order to predictively plan for future performance. However, without also incorporating external economic data, this approach leaves a lot of room for error.

Take, for example, years where business performance is more of an anomaly than a sign of business growth or contraction. Executives and FP&A teams relying primarily on historical performance are basing future growth off of bad data. This is exactly how we should look at business performance between 2020 and 2022.

Those years were riddled with supply chain disruptions and inflated consumer spending fueled by government stimulus during the pandemic. Consumers were spending more than they were making and changing their behaviors every few months. Without considering those economic oddities, companies that use traditional demand-forecasting methods could go on to project higher growth in 2021 than was possible. The ongoing economic stimulus that continued in 2021 only made this more prominent in 2022.

However, executives who utilized AI data modeling and machine learning to include external economic data in their planning process were able to more accurately predict business performance throughout the pandemic. For example, retailers who used this technology accurately predicted the return to normalcy along the supply chain and avoided the inventory issues many big retailers like Target and Walmart experienced.

The competitive edge to AI modeling and machine learning

The benefit of using predictive analytics for financial planning goes well beyond navigating economic uncertainty. Because the technology is able to analyze and predict based on more than just financial performance (think trends in sales, promotions, and audience), it allows leaders to confidently step outside their comfort zones and approach business challenges with a competitive edge.

Instead of stepping into the unknown with a blindfold, leaders can use predictive planning to create optimistic and pessimistic forecasts, giving them guardrails in which to make their decisions. So if a business is looking to experiment with its audience or take a new creative approach to earning more market share, then leaders have data-driven insights to support those decisions.

Stepping into some certainty

The rise of generative AI has made the financial impact of productivity very clear. In fact, a recent McKinsey report found that generative AI could increase productivity enough for a 10 to 15% overall cost savings on R&D. The conclusion is clear: If a computer can do the mundane tasks quickly and free up your teams time for more important work, then its worth the investment.

The same thought can be applied to AI data modeling and financial planning.

While many businesses continue to reactively navigate the current uncertain market, those who have decided to embrace AI data modeling in their financial planning process are able to look beyond recovery and plan their next strategic business move. Whether its increasing sales with their already loyal customers or branching out to capture a new market, predictive planning allows leaders to move forward with confidence.

Read more:

Unlocking the potential of AI data modeling within CPG - Supermarket News

Read More..

Company used by Lubbock government agency acquires AI company – KLBK | KAMC | EverythingLubbock.com

LUBBOCK, Texas Tyler Technologies announced in a press release on Tuesday, August 8 that it had acquired artificial intelligence company Computing System Innovations.

According to the release, Tyler Technologies now adds AI-driven redaction and indexing solutions to its portfolio.

CSI is a company that has provided the leading artificial intelligence (AI) automation, redaction, and indexing solution for courts, recorders, attorneys, and others, the press release stated.

Tyler does business with both Lubbock and Lubbock County for government software systems. Years ago, Tyler Technologies bought a Lubbock company, INCODE. Tyler still maintains a Lubbock location, according to a 2022 entry on the companys website.

Read the following press release from Tyler Technologies for more information.

PLANO, Texas(BUSINESS WIRE) Tyler Technologies, Inc. (NYSE: TYL) announced today it has acquired Computing System Innovations, LLC (CSI), a company that provides the leading artificial intelligence (AI) automation, redaction, and indexing solution for courts, recorders, attorneys, and others.

Through this acquisition, Tyler adds CSIs AI-driven redaction and indexing solution to its portfolio, bringing automated data entry and document processing options to current and prospective clients. In addition, Tyler plans to leverage CSIs AI and automation technology across other Tyler verticals, including Municipal & Schools, Property & Recording, and Platform Solutions.

CSI and Tyler have both served the court technology space for many years and have worked as partners on behalf of Tylers clients often, so we are thrilled to officially welcome them to Tyler, said Brian McGrath, president of Tylers Courts & Justice Division. CSIs expertise in AI and machine learning-powered process automation combined with Tylers expansive footprint will help us deliver even stronger electronic filing and Enterprise Justice solutions to our clients.

CSIs Intellidact Platform is a suite of applications that enhance document processing and identity protection with AI technology. These applications include data redaction; data extraction; document classification; and process automation. Tylers eFile & Serve solution allows users to electronically file documents with the court via a secure, web-based portal. The addition of CSIs platform will elevate the eFile & Serve solution by making the filing process quicker, less redundant, and more accurate.

CSI has more than 80 clients across the United States, including the United States Army; the Supreme Court of Virginia; the State of Iowa; the City of New York; and Tarrant County, Texas. The company has won multiple industry awards due to its significant and transformative impact in the justice technology space.

We have seen great demand from the public sector and courts specifically for AI-powered document automation that significantly reduces manual labor of document review and data entry. We couldnt be more excited to combine our expertise with Tylers to provide AI-enabled document automations within Tylers impressive product suites, said Henry Sal, president and CEO, CSI.

Orlando, Florida-based CSI was founded in 1987 by Henry Sal and Glen Johnson. Management and staff will become part of Tylers Courts & Justice Division, and continuing employees will remain in their current office locations.

Read more here:

Company used by Lubbock government agency acquires AI company - KLBK | KAMC | EverythingLubbock.com

Read More..