Category Archives: Artificial Super Intelligence

3 Super Speculative AI Stocks Not Worth the Risk – InvestorPlace

Except for companies like Microsoft (NASDAQ:MSFT) and Nvidia (NASDAQ:NVDA), I don't think it would be out of line for regular investors to consider most artificial intelligence stocks to be speculative by nature. That speculative streak leaves plenty of AI stocks to avoid.

After all, Microsoft itself only started emphasizing AI in 2019.

"AI opens up so many possibilities. And the limits are very few, generally limited only by your imagination," Lance Olson, the company's director of program management for applied AI, said in a May 2, 2019, internal article. "It doesn't need to be overwhelming for people. We are getting to the point where we can now make AI accessible to a much broader set of customers."

And, of course, with products such as Copilot, the company's AI-powered version of Microsoft 365, it has done precisely that.

However, AI leaders are few and far between. There are a lot of speculative AI stocks that won't be able to go the distance. Here are three AI stocks to avoid at all costs.


RadNet (NASDAQ:RDNT) stock is up nearly 72% in 2023. Except for a few days in July, RadNet's stock hasn't consistently traded above $30 since July 2021. Before that? Never.

So, what's driving RDNT stock?

Well, you can be sure it's not its 357 locations providing diagnostic imaging services across seven states. Sure, it generated $1.43 billion in annual revenue in 2022, so it's not some Johnny-come-lately. However, its operating margin was 3.2%. That's 3.2 cents for every dollar of sales.

The most it's ever made was in 2021, when it earned $82.6 million in operating income from $1.32 billion in revenue, good for a 6.2% operating margin. Except for 2021 and today, RadNet stock has never traded for more than 1x sales. Yet here we are, more than halfway through 2023, and investors are willing to pay 1.21x sales and nearly 44x earnings before interest and taxes (EBIT).

Two words: artificial intelligence.

In December 2022, the company announced that its Saige-Density mammography density assessment software had received clearance from the Food and Drug Administration (FDA). It was the company's third product to get the FDA's thumbs up.

While I'm sure the company's heart is in the right place, its AI segment lost $116.4 million in the first six months of 2023 on $4.5 million in revenue.

The company mentions AI approximately 19 times in its Q2 2023 press release. We get the emphasis. But this is not a business worth $2.2 billion.


How long has LivePerson (NASDAQ:LPSN) been around? When I worked in sales for a Toronto-based digital asset management startup in 2000, my boss was introduced to LivePerson's services. The company has been around in one form or another since 1998.

Its share price is down 56% in 2023, and if you measure the losses from its February high of $18.17, the shares are off 75% in a little over six months.

On Aug. 9, the company's shares jumped more than 13% after it reported Q2 2023 earnings that were better than analyst expectations. On the top line, its revenues were $97.52 million, $670,000 above the consensus. On the bottom line, it earned 12 cents, 50 cents better than the estimate. Throw AI on top of this, and the meme-stock investors will tell you it's a $30 stock.

The company's Q2 2023 presentation talks about generative AI and its next growth stage. It points to Fast Company naming it the most innovative AI company in the world.

LivePerson argues that the one billion or more conversational interactions its 100,000+ corporate users generate monthly through its platform, combined with generative AI and large language models (LLMs), will give its customers the edge they need to maintain superior service.

I have no idea if that's true. I know there have been many innovations in digital customer service since LivePerson first started in 1998. Yet, it last managed to generate an operating profit of $10.3 million in 2012. It's been nothing but red ink ever since.

Wait for it to prove that it's got some AI chops.


To come up with some ideas for AI stocks to sell, I used the SEC's EDGAR search tool to find companies whose annual reports mention "AI" or "artificial intelligence."

One of the names it spit out was Futuretech II Acquisition Corp. (NASDAQ:FTII), a special purpose acquisition company (SPAC) focused on acquiring a U.S. tech company, preferably one involved in AI or robotics.

"We believe that we are living in a digital era where AI is poised to reshape our lives. The continuous research and innovation directed by the tech giants are driving the adoption of advanced AI technologies in industry verticals, such as automotive, healthcare, retail, finance, and manufacturing," page 4 of its 2022 10-K states.

Futuretech II raised $115 million in its February 2022 initial public offering. It had 12 months to find a target to merge with, or up to 18 months if it used one or both of its three-month extensions. Shareholders weren't given the right to vote on these extensions. However, the sponsors were required to deposit an additional $1.15 million into the trust account for each extension.

On May 17, 2023, it made a second payment for a second extension, which expires on Aug. 18. Unless something changes in a hurry, it looks like this bet on AI will go up in smoke with the funds in trust returned to shareholders.

On the date of publication, Will Ashworth did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Will Ashworth has written about investments full-time since 2008. Publications where he's appeared include InvestorPlace, The Motley Fool Canada, Investopedia, Kiplinger, and several others in both the U.S. and Canada. He particularly enjoys creating model portfolios that stand the test of time. He lives in Halifax, Nova Scotia.


Why firms need to scratch the surface of their AI investments – Money Management

The optimism behind disruptive artificial intelligence (AI) technology has driven markets to record highs, but experts warn there are risks and considerations that can be overlooked.

There has been a lot of talk around its many benefits across numerous sectors. According to a recent report titled "Australia's Generative AI Opportunity," by Microsoft and the Tech Council of Australia, generative AI could contribute between $45 billion and $115 billion a year to Australia's economy by 2030 through improving existing industries and enabling the creation of new products and services.

However, it also entails a number of environmental, social and corporate governance (ESG) concerns that range from data privacy and cyber security to job loss, misinformation and intellectual property.

"The spectrum of risks arising from AI is wide," agrees Fidelity analyst and portfolio manager Marcel Stötzel.

He said: "On one end lie doomsday scenarios involving super-intelligent AIs that their creators can't understand or control. More immediate potential threats include the spread of misinformation from large language models (LLMs), which are liable to hallucinate, conjuring false facts or misinterpretations."

The complexity of the technology and difficulties in containing it are reflected in the efforts of regulators, which are mobilising but with little global cohesion. Industry-wide attempts to self-regulate have also gained little traction.

In May, the Centre for AI Safety (CAIS), a San Francisco-based research nonprofit, released a one-sentence Statement on AI Risk, which was signed by over 100 professors of AI. It said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Even Sam Altman, the co-founder of OpenAI, has expressed concerns and called for greater regulation of AI development, looking into compliance and safety standards, audits, and potential deployment and security concerns.

But the burden isn't just regulatory, Stötzel said.

He added: "Large holders of capital have proven their ability to move the needle on existential issues by engaging with companies on ESG issues such as climate change or employee welfare, and holding firms to account for transgressions. Given the potential dangers related to artificial intelligence, now is the time for investors to assess their investees' use of this powerful tool."

Speaking on a Fidante podcast, Mary Manning, portfolio manager of the Global Sustainable Fund at Alphinity, discussed the importance of considering AI from an ESG perspective.

For her, a particular concern is the development of AI to become sentient with the ability to process thoughts and feelings.

"If you think about AI and the possibility of AI becoming sentient at some point, if you think about that over the long term, then if we get AI wrong and robots or sentient beings start to take over, then that is a very big threat to humanity, arguably even more so than climate change."

The firm has since announced a year-long research program with Australias national science agency, Commonwealth Scientific and Industrial Research Organisation (CSIRO), that aims to identify best practices and provide a framework to assess, manage and report on responsible AI risks.

Jessica Cairns, Alphinity's head of sustainability and ESG, believes the technology has a lot of potential for good; however, the governance, design and application of AI need to be undertaken in a responsible and ethical way.

Through its research so far, the firm has identified some common examples of good practices, like governance bodies to help guide the strategic direction of AI use and development; a clear AI policy or framework; and an integrated approach with existing risk management frameworks.

"Although many companies see the increased use of AI as transformational, most recognise the risks around human capital and workforce," Cairns told Money Management.

"For companies that are looking to deploy AI internally, we have heard that managements are focused on how they can augment different roles to reduce the amount of repetitive or mundane tasks, rather than replacing roles altogether."

"Similar to the energy transition, we believe a focus on employee engagement and participation is going to be key for companies to ensure the responsible adoption of AI in the context of employee wellbeing."

Reflecting on developments in this space, Betashares director for responsible investments, Greg Liddell, recognised it is too early to predict the lasting impact of AI although there have certainly been many benefits and risks identified so far.

In terms of negatives, there has been much discussion on automation and job losses and on bots that can perpetuate biases and negativities present on the internet.

Liddell said: "AI will create solutions across a range of fields and applications. It will potentially generate enormous wealth for those at the forefront of its development and implementation."

"But AI needs guardrails to safeguard its development, and ethical investors need to be aware of how companies are using AI and the risks it poses."


Expert shuts down AI hype calling it a ‘glorified tape recorder’ and … – UNILAD

Fears about what an AI can do are overblown, according to a theoretical physicist who compared the technology to a 'glorified tape recorder'.

Artificial intelligence has made plenty of leaps forward in recent times but people's opinions on this emerging technology are quite divided.

However, those concerns might be a bit premature according to theoretical physicist Michio Kaku, who told CNN that many AI platforms are little more than 'glorified tape recorders'.

He said: "It takes snippets of what's on the web created by a human, splices them together and passes it off as if it created these things.

"And people are saying, 'Oh my God, its a human, its humanlike.'"

And that's rather the crux of his issue with people's concerns about AI: while these systems can scour the internet for things people made, or be taught by humans how to mimic people, they can't create from scratch for themselves.

Kaku also said he believes the next stage of computer technology is coming and that it will be quantum computing, where a computer uses vibrating waves instead of computer chips to function.

But back on the topic of AI, while Kaku said what they have to work with 'has to be put in by a human', there are people closely connected with the development of the technology who are seriously worried about what it could end up doing.

OpenAI CEO Sam Altman estimated that within 10 years, AI would 'exceed expert skill level in most domains' and massively boost what we could do, but warned that it could lead to the creation of a 'super intelligence'.

He said that given the potential risk we were facing due to AI, governments would need to be proactive in figuring out where to draw the line in terms of safety and restrictions.


AI in Education – EducationNext

In Neal Stephenson's 1995 science fiction novel, The Diamond Age, readers meet Nell, a young girl who comes into possession of a highly advanced book, The Young Lady's Illustrated Primer. The book is not the usual static collection of texts and images but a deeply immersive tool that can converse with the reader, answer questions, and personalize its content, all in service of educating and motivating a young girl to be a strong, independent individual.

Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction. Until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.

Sundar Pichai, Google's CEO, calls artificial intelligence "more profound than fire or electricity or anything we have done in the past." Reid Hoffman, the founder of LinkedIn and current partner at Greylock Partners, says, "The power to make positive change in the world is about to get the biggest boost it's ever had." And Bill Gates has said that this new wave of AI is "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone."

Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.

In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect "probably the biggest positive transformation that education has ever seen." But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.

What Is Generative AI?

Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider intelligent if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.

Foundation models in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.

Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI's GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI's GPT-3 models underwent training on a staggering 300 billion tokens, or word pieces, using more than 175 billion parameters to shape the model's behavior, nearly 100 times more data than the company's GPT-2 model had.

By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
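To make that "statistical understanding" concrete, here is a deliberately tiny sketch in Python. It is a toy bigram counter, not how production LLMs actually work (they rely on neural networks with billions of parameters), but the underlying intuition of predicting a likely next word from patterns in training text is the same.

```python
# Toy illustration of statistical language modeling: count which word
# tends to follow which, then predict the most common successor.
from collections import Counter, defaultdict

corpus = (
    "the model learns patterns in text . "
    "the model generates text from patterns ."
).split()

# Tally how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most common successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))    # -> "model"
print(most_likely_next("model"))  # -> "learns" (first of two tied options)
```

Real LLMs do essentially this at vastly larger scale, conditioning on whole contexts rather than single words, which is what produces the fluency described above.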

LLMs include OpenAI's GPT-4, Google's PaLM, and Meta's LLaMA. These LLMs serve as foundations for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google's Pathways Language Model 2 (PaLM 2) as its foundation.

Some of the best-known applications are:

ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.

ChatGPT 4.0. The newest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.

Microsoft Bing Chat. An iteration of Microsoft's Bing search engine that is enhanced with OpenAI's ChatGPT technology. It can browse websites and offers source citations with its results.

Google Bard. Google's AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 different programming languages. The tone and style of Bard's replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.

Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words, about the length of The Great Gatsby, and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of constitution for AI systems, with the aim of making them more helpful, honest, and harmless.

These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI's GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google's medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2021 from 33 percent in December 2020.

These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What's more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google's AI model, physicians preferred the AI's long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot's responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when empathetic AI is used in education?

Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems exhibit more general intelligence than previous AI models and are coming strikingly close to human-level performance. While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.

Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false, an anomaly known as hallucination. The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.

The Importance of Well-Designed Prompts

Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).
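A few illustrative prompts of the kinds just described, written out as Python strings. The `ask` helper is a placeholder, not a real library call; in practice you would paste the prompt into a chat window or replace the helper's body with whatever API client you use.

```python
def ask(prompt: str) -> str:
    """Stand-in for whichever chat interface or API client you use.
    Replace the body with a real call; here it simply echoes the prompt."""
    print("--- prompt sent ---")
    print(prompt)
    return ""

# Summarize an article in a particular way.
summarize = ("Summarize the following article in three bullet points "
             "for a ninth-grade reader:\n\n<article text goes here>")

# Compare opposing arguments.
compare = ("List the two strongest arguments for and against school "
           "uniforms, side by side, and note where they conflict.")

# Find patterns in data.
patterns = ("Here are weekly quiz scores in CSV form. Describe any patterns "
            "you see and suggest one follow-up question:\n\n<csv data goes here>")

ask(summarize)
```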

One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn't create anything new but efficiently retrieves what's already there.

Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren't pre-made or stored somewhere; they're produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What's more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.

One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to be. Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
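As a sketch of that persona strategy (the wording is illustrative, not a prescribed formula), the role goes first and the task follows, reusing the placeholder `ask` helper from the earlier sketch:

```python
persona = "Act as an experienced high-school writing tutor."
task = ("Review the paragraph below. Identify its two biggest weaknesses and "
        "suggest one concrete revision for each.\n\n<student paragraph goes here>")

# The persona line frames how the model approaches everything that follows.
ask(persona + "\n\n" + task)
```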

Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let's say a teacher wants to create an adaptive tutoring program, for any subject, any grade, in any language, that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.

Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn't perfect, but remember that it was built virtually for free, with just a few lines of English language as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students' interests.
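One way that tutoring prompt might be written out; the exact wording below is an assumption for illustration, not a prompt taken from the article:

```python
# Template for the adaptive tutor described above; fill in the placeholders.
tutor_prompt = """You are a patient tutor. Teach me {subject} at a {grade} level, in {language}.
Draw every example from my interests: {interests}.
End each lesson with a short multiple-choice quiz.
If I answer correctly, move on to the next lesson.
If I answer incorrectly, explain the concept again in simpler language, then quiz me again."""

print(tutor_prompt.format(subject="fractions", grade="4th-grade",
                          language="English", interests="soccer and dinosaurs"))
```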

Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
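A minimal sketch of that chaining pattern, again using the placeholder `ask` helper from earlier; the first response is fed into the second prompt so the activities stay anchored to the curated source:

```python
guide_text = "<text of the What Works Clearinghouse practice guide goes here>"

# Step 1: summarize the curated source material.
summary = ask("Summarize the key recommendations of this practice "
              "guide:\n\n" + guide_text)

# Step 2: build only on that summary, keeping the output grounded in it.
activities = ask("Based only on the recommendations below, design three "
                 "classroom activities for a 6th-grade class:\n\n" + summary)
```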

However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI's output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It's through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.

Uses of AI in Education

In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology, and they want to roll up their sleeves and start working together. People expressed anxiety about potential future risks of AI but also felt that AI may enable achieving educational priorities in better ways, at scale, and with lower costs.

AI could serve, or is already serving, in several teaching-and-learning roles:

Instructional assistants. AI's ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.

Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automated routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.

Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child's interests.

Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.
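As a rough sketch of how that kind of survey analysis might look (the prompt wording and the placeholder `ask` helper are assumptions, as before), open-ended comments can be batched and labeled in a single request:

```python
comments = [
    "My child finally enjoys coming to school.",
    "Bus schedules change with no notice and nobody answers the phone.",
]

numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(comments, start=1))

# Ask for one label per comment so results are easy to tally across a survey.
ask("For each numbered school-climate survey comment below, reply with the "
    "comment number, a label of POSITIVE, NEGATIVE, or MIXED, and a one-line "
    "reason:\n\n" + numbered)
```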

Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they've heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It's likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and DuoLingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.

Google's Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers: differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.

Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies' products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.

Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, "We should not only think about how technology can assist teachers and learners in improving what they're doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI."


AI is starting to affect elections and Wisconsin has yet to take action – PBS Wisconsin

By Phoebe Petrovic, Wisconsin Watch

This article was first published by Wisconsin Watch.

Heading into the 2024 election, Wisconsin faces a new challenge state lawmakers here have so far failed to address: generative artificial intelligence.

AI can draft a fundraising email or campaign graphics in seconds, no writing or design skills required. Or, as the Republican National Committee showed in April, it can conjure lifelike videos of China invading Taiwan or migrants crossing the U.S. border made entirely of fictional AI-generated footage.

More recently, a Super PAC supporting a Republican presidential candidate's bid to make the Milwaukee debate stage on Aug. 23 used an AI-generated video of that candidate to fundraise, which one campaign finance expert called an innovative way around campaign finance rules that would otherwise bar a Super PAC and a candidate from coordinating on an ad.

Technology and election experts say AI's applications will both transform and threaten elections across the United States. And Wisconsin, a gerrymandered battleground that previously weathered baseless claims of election fraud, may face an acute risk.

Yet Wisconsin lawmakers have not taken official steps to regulate use of the technology in campaigning, even as other states and Congress introduce and begin to implement guardrails.

Rep. Scott Krug, R-Nekoosa, chair of the Assembly Committee on Campaigns and Elections, told Wisconsin Watch he "hasn't related (AI) too much to elections just yet."

In the Senate's Committee on Shared Revenue, Elections and Consumer Protection, "it just hasn't come up yet," said Sen. Jeff Smith, D-Brunswick.

Election committee members in both chambers expressed interest in possible remedies but doubt that they could pass protections before the 2024 election cycle.

Rep. Clinton Anderson, D-Beloit, is drafting a bill that would mandate disclosure of AI-generated content, sometimes called "synthetic media," in political ads, something experts call a basic step lawmakers could take to regulate the technology.

Wisconsin Rep. Clinton Anderson, D-Beloit, is working on a bill modeled on a Washington law that would require disclosure of the use of artificial intelligence in campaign ads. (Credit: Drake White-Bergey / Wisconsin Watch)

"If we wait til 2024, it's gonna be too late," Anderson said in an interview. "If we can get this minimum thing done, then maybe we can have a conversation about, 'What's the next step?'"

"No matter where you fall politically, I think you should want some transparency in campaigns," he added.

The Wisconsin Elections Commission declined to comment.

Several lawmakers said AI repackages old problems in new technology, noting voters have encountered deceptive visuals and targeted advertising before.

But generative AI makes such content cheaper, easier and faster to produce. New York University's Brennan Center for Justice notes that Russian-affiliated organizations spent more than $1 million a month in 2016 to produce manipulative political ads that could be created today with AI for a fraction of the cost.

Dietram Scheufele, who studies science communication and technology policy at the University of Wisconsin-Madison, said that while some of the doomsday predictions about AI are overblown, "we're definitely entering a new world."

The technology, he said, "gets real creepy real fast."

Scheufele cited a prior study in which researchers morphed candidates' faces with participants' own faces in a way that remained undetectable to the participants. They found that people who were politically independent or weakly partisan were more likely to prefer the candidates whose faces had been, unbeknownst to them, morphed with their own.

"This was done a long time ago, before the idea of actually doing all of this in real time became a reality," Scheufele said. "But today, the threshold for producing this stuff is really, really low."

Campaigns could micro-target constituents, crafting uniquely persuasive communications or advertisements by tailoring them to a person's digital footprint or likeness. Darrell West, who studies technology at the nonpartisan Brookings Institution, calls this "precise message targeting," writing that AI "will allow campaigns to better focus on specific voting blocs with appeals that nudge them around particular policies and partisan opinions."

AI will also quicken the pace of communications and responses, permitting politicians to respond instantly to campaign developments, West wrote. "AI can scan the internet, think about strategy, and come up with a hard-hitting appeal in minutes, without having to rely on highly paid consultants or expert videographers."

And because AI technology is more accessible, it's not just well-funded campaigns or interest groups that might deploy it in elections. Mekela Panditharatne, counsel for the Brennan Center's Democracy Program, and Noah Giansiracusa, an assistant professor of mathematics and data science, described several ways outside actors might use the technology to deceive or influence voters.

"Aside from using deepfakes to fabricate viral controversies, they could produce legions of social media posts about certain issues to create the illusion of political agreement or the false impression of widespread belief in dishonest election narratives," Panditharatne and Giansiracusa wrote. "They could deploy tailored chatbots to customize interactions based on voter characteristics."

They could also use AI to target elections administrators, either through deluges of complaints from fake constituents or elaborate phishing schemes.

"There is plenty of past election disinformation in the training data underlying current generative AI tools to render them a potential ticking time bomb for future election disinformation," Panditharatne and Giansiracusa wrote.

For Scheufele, one major concern is timing. It can take seconds for AI to create a deepfake; it can take days for reporters to debunk it. AI-driven disinformation deployed in the days before an election could sway voters in meaningful ways.

By the time people realized the content was fake, Scheufele said, "the election is over and we have absolutely no constitutional way of relitigating it."

"This is like making the wrong call in the last minute of the Super Bowl and the Patriots win the Super Bowl, even though they shouldn't have," Scheufele said. "They're still going to be Super Bowl champions on Monday even though we all know that the wrong call was made."

In the abstract, "every single aspect of AI is totally manageable," Scheufele said.

"The problem is we're dealing with so much in such a short period of time because of how quickly that technology develops," he said. "We simply don't have the structures in place at the moment."

But Wisconsin lawmakers could take initial steps toward boosting transparency.

In May, Washington state passed a law requiring a clear disclaimer about AI's use in any political ad. Anderson's team looked to Washington's law as a model in drafting a Wisconsin bill.

Printed ads with manipulated images will need a disclosure in letters at least as big as any other letters in the ad, according to The Spokesman-Review. Manipulated audio must have an easily understood, spoken warning at the beginning and end of the commercial. For videos, a text disclosure must appear for the duration of the ad.

A similar bill addressing federal elections has been introduced in both chambers of Congress. A March 2020 proposal banning the distribution of deepfakes within 60 days of a federal election and creating criminal penalties went nowhere.

Krug called Washington's law "a pretty interesting idea."

"If (an ad is) artificially created, there has to be some sort of a disclaimer," Krug said.

However, he indicated Republicans may wait to move legislation until after Speaker Robin Vos, R-Rochester, convenes a task force later this year on AI in government.

Rep. Scott Krug, R-Nekoosa, chair of the Assembly elections committee, is open to regulating the use of AI in elections, but legislation may not be ready in time for the 2024 election. (Credit: Coburn Dukehart / Wisconsin Watch)

Sen. Mark Spreitzer, D-Beloit, another elections committee member, noted Wisconsin law already prohibits knowingly making or publishing "a false representation pertaining to a candidate or referendum which is intended or tends to affect voting at an election."

"I think you could read the plain language of that statute and say that a deepfake would violate it," he said. "But obviously, whenever you have new technology, I think it's worth coming back and making explicitly clear that an existing statute is intended to apply to that new technology."

Scheufele, Anderson, Spreitzer and Smith all said that Wisconsin should go beyond mandating disclosure of AI in ads.

"The biggest concern is disinformation coming from actors outside of the organized campaigns and political parties," Spreitzer said. Official entities are easier to regulate, in part because the government already does.

Additional measures will require a robust global debate, Scheufele said. He likened the urgency of addressing AI to nuclear power.

"What we never did for nuclear energy is really have a broad public debate about: Should we go there? Should we actually develop nuclear weapons? Should we engage in that arms race?" he said. "For AI, we may still have that opportunity where we really get together and say, 'Hey, what are the technologies that we're willing to deploy, that we're willing to actually make accessible?'"

The nonprofit Wisconsin Watch collaborates with WPR, PBS Wisconsin, other news media and the University of Wisconsin-Madison School of Journalism and Mass Communication. All works created, published, posted or disseminated by Wisconsin Watch do not necessarily reflect the views or opinions of UW-Madison or any of its affiliates.


The Role Of Legislation In The Regulation Of Artificial Intelligence … – Mondaq News Alerts


The development and adoption of Artificial Intelligence ("AI") has seen a global surge in recent years. It is estimated that AI has the potential to add USD 957 billion, or 15 per cent of the current gross value added, to India's economy in 2035. It is projected that the AI software market will reach USD 126 billion by 2025, up from USD 10.1 billion in 2018. There is an increased application of AI to a variety of private and public uses, and it is expected that AI usage will become ingrained and integrated with society.[1]

In India, large-scale applications of AI are being implemented and used across various sectors such as healthcare, agriculture, and education to improve the potential in these sectors. In February 2021, the NITI Aayog released the approach document proposing principles for 'responsible AI' development ("Approach Document").

AI is set to be a "defining future technology"; but what exactly is AI, and what are the challenges and considerations for regulating AI?

The scope of this article is to examine the challenges and considerations in the regulation of AI in India. We have also examined the approach to the regulation of AI in other developed jurisdictions such as the European Union and the United States. This article has relied on the Approach Document to understand the systems considerations and societal considerations which arise from the implementation of AI into technology and society. The AI considered here is 'Narrow AI', which is a broad term given to AI systems that are designed to solve specific challenges that would ordinarily require domain experts. Broader ethical implications of 'Artificial General Intelligence' (AGI) or 'Artificial Super Intelligence' (ASI) are not considered in this article. Further, the systems considerations considered in this document mainly arise from decisions taken by algorithms.[2]

The Approach Document describes "Artificial Intelligence" as "a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sense, comprehend and act. Computer vision and audio processing can actively perceive the world around them by acquiring and processing images, sound, and speech. Natural language processing and inference engines can enable AI systems to analyse and understand the information collected. An AI system can also take decisions through inference engines or undertake actions in the physical world. These capabilities are augmented by the ability to learn from experience and keep adapting over time".[3]

The integration of AI into technology and society gives rise to unique challenges. Further, as AI becomes more sophisticated and autonomous, concerns with respect to accountability, bias, and societal well-being may arise.

The following main considerations can be identified while implementing AI, i.e., (i) systems considerations and (ii) societal considerations.[4] We further analyse the regulatory implications stemming from such considerations.

(i) System Considerations: Systems considerations are implications that have direct impacts on citizens (or primary 'affected stakeholders') being subject to decisions of a specific AI system. These typically result from system design choices, development, and deployment practices.[5]

Some of the system considerations are:

(a) Potential for bias: Though automated solutions are often expected to introduce objectivity to decision-making, recent cases globally have shown that AI solutions have the potential to be 'biased' and tend to be 'unfair' to certain groups (across religion, race, caste, gender, and genetic diversity). The emergence of bias in AI solutions is attributed to several factors arising from various decisions taken across different stages of the lifecycle and the environment in which the system learns. The performance of the AI solution is largely dictated by the rules defined by its developers. Responses generated by AI solutions are limited by the data set on which it is trained. Hence, if the data set includes biased information, naturally, the responses generated will reflect the same bias. While this is not intentional, it is inevitable, since no data set could be free from all forms of bias. The AI solution cannot critically examine the data set it is trained on, since it lacks comprehension and is hence incapable of eliminating the bias without some form of human intervention.

Bias is a serious threat in modern societies. We cannot, therefore, risk developing AI systems with inbuilt biases. The role of regulation in this regard would be to specify and penalize any development of AI with such biases. Regulation must also prescribe that developers should invest in research and development of bias detection and mitigation and incorporate techniques in AI to ensure fair and unbiased outcomes. Legislation must further provide for penalties on developers developing AI with biased outcomes.

(b) Accountability of AI decisions: In the development of AI, it is understood that different entities may be involved in each step of the development and deployment process. Different entities associated with complex computer systems make it difficult to assign responsibility for accountability and legal recourse.

Since there are many individuals or entities involved in the development of AI systems, assigning responsibility or accountability, or identifying the individual or entity responsible for a particular malfunction, may be difficult. Consequently, pursuing legal recourse for the harm caused is a challenge. Traditional legal systems allocate responsibility for actions and consequences to a human agent. In the absence of a human agent, it is essential for regulation to find a methodology for identifying or determining the individual or entity involved. All stakeholders involved in the design, development and deployment of AI systems must specifically be responsible for their respective actions. The imposition of such obligations can be achieved through regulation.

(ii) Societal considerations: Societal considerations are implications caused due to the overall deployment of AI solutions in society. This has potential repercussions on society beyond the stakeholder directly interacting with the system. Such considerations may require policy initiatives by the Government.[6]

One of the societal considerations is the "impact on jobs". The rapid rise of AI has led to the automation of several routine job functions and has consequently led to large-scale layoffs and job losses. The use of AI in the workplace is expected to result in the elimination of a large number of jobs in the future as well.

Regulation through appropriate provisions in labour or employment law legislation can in this regard check and ensure that work functions are not arbitrarily replaced by AI. It is well understood that corporations are driven by profit and hence AI may be a cost-effective option. Nevertheless, it is possible to regulate through legislation any such replacement of human jobs by AI in the larger interests of society.

Currently, India does not have codified laws, statutory rules or regulations that specifically regulate the use of AI. Establishing a framework to regulate AI would be crucial for guiding various stakeholders in the responsible management of AI in India.

There are certain sector-specific frameworks that have been identified for the development and use of AI.[7] In the finance sector, SEBI issued a circular in January 2019 to stockbrokers, depository participants, recognized stock exchanges and depositories on reporting requirements for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used.

In the health sector, the strategy for the National Digital Health Mission (NDHM) identifies the need for the creation of guidance and standards to ensure the reliability of AI systems in health.

Recently, on June 9, 2023, the Ministry of Electronics and Information Technology (MEITY) suggested that AI may be regulated in India just like any other emerging technology (to protect digital users from harm). MEITY mentioned that the purported threat of AI replacing jobs is not imminent because present-day systems are task-oriented, are not sophisticated enough and do not have human reasoning and logic.[8]

The European Union: In April 2021, the European Commission proposed the first European Union ("EU") regulatory framework for artificial intelligence[9] ("AI Act").

The AI Act defines an "artificial intelligence system (AI system)" as a "machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments".[10] The AI Act would regulate all automated technology. It defines AI systems to include a wide range of automated decision-makers, such as algorithms, machine learning tools, and logic tools.

This is the first comprehensive regulatory framework for regulating AI and is part of the EU's strategy to set worldwide standards for technology regulation. Recently, on June 14, 2023, the European Parliament approved its negotiating position on the proposed AI Act ahead of talks between the representatives of the European Parliament, the Council of the European Union and the European Commission on the final shape of the law. The aim is to reach an agreement by the end of this year.[11] The second half of 2024 is the earliest time the regulation could become applicable to operators, with the standards ready and the first conformity assessments carried out.[12]

The AI Act aims to ensure that AI systems used in the EU market are safe and respect existing laws on fundamental rights and EU values. The AI Act proposes a risk-based approach to guide the use of AI in both the private and public sectors. The AI Act defines three risk categories: unacceptable-risk applications, high-risk applications, and applications not explicitly banned. The regulation prohibits the use of AI in critical services that could threaten livelihoods or encourage destructive behaviour but allows the technology to be used in other sensitive sectors, such as health, with maximum safety and efficacy checks. The AI Act would apply primarily to providers of AI systems established within the EU or in a third country placing AI systems on the EU market or putting them into service in the EU, as well as to users of AI systems located in the EU.

The United States: As per press reports, in a meeting with President Biden at the White House, seven leading artificial intelligence companies, including Google, Meta, OpenAI and Microsoft, agreed to a series of voluntary safeguards that are designed to help manage the societal risks of AI and the resultant emerging technology. The measures, which include independent security testing and public reporting of capabilities, were prompted by some experts' recent warnings about AI. The U.S. is at the commencement of what is expected to be only the beginning of a long and difficult path toward the creation of rules to govern an industry that is advancing faster than lawmakers typically operate.

AI is growing at a fast pace and is rapidly being integrated into society. There is therefore definitely a need to regulate AI to prevent system and societal risks. There are several challenges in regulating AI, making the task seem impossible to achieve. Traditionally as well, the law has not been able to keep up with new technologies. However, if regulators work at understanding the technology involved in AI and the system and societal considerations, comprehensive and effective legislation on AI may be created. India may also draw inspiration from the legislation in the EU in this regard. Legislation thus has a key role to play in ensuring effective and fair implementation of AI in society and technology.

Footnotes

1. Approach Document Page 6.

2. Approach Document Page 7.

3. Approach Document Page 7.

4. Approach Document Page 8.

5. Approach Document Page 9.

6. Approach Document Page 9.

7. https://www.sebi.gov.in/legal/circulars/jan-2019/reporting-for-artificial-intelligence-ai-and-machine-learning-ml-applications-and-systems-offered-and-used-by-market-infrastructure-institutions-miis-_41927.html

8. https://www.livemint.com/ai/artificial-intelligence/india-will-regulate-ai-to-ensure-user-protection-11686318485631.html

9. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

10. Art. 3 No. 1 of the AI Act.

11. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

12. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


‘AI is the powerhouse that’s going to drive metaverse & web-3 platforms’ – Exchange4Media

At e4m TechManch 2023, AI and metaverse experts discussed the dos and don'ts marketers should keep in mind while creating a digital marketing strategy for tech platforms

By exchange4media Staff | Published Aug 11, 2023, 12:13 PM

Karthi Kumar Marshan, Advisor, Kotak, had an insightful discussion at e4m TechManch 2023 with Catherine D. Henry, SVP of Web3 and Metaverse Strategy at Media Monks, and Daniel Hulme, CEO of Satalia and Chief AI Officer at WPP, on how to future-proof your digital marketing strategy.

Marshan opened the session by asking Henry about the future of marketing. Henry highlighted that AI and the metaverse were two different perspectives that are nevertheless very complementary in terms of looking at the future. She said, "AI is the powerhouse that's going to drive many metaverse and web-3 platforms. We are looking at the discipline from two different perspectives. It is what I call top-down and bottom-up."

"It describes the analytical and data-driven framework that drives so much of Daniel's work, in addition to the sort of bottom-up cultural behavioral patterns that we are observing empirically in the work that I do in the metaverse and web-3 spaces. So it's super exciting to have this combination and really see how all of these things are coming together so that we have a better understanding of what we can expect in the future."

Agreeing with Henry, Hulme mentioned, "It is very difficult to see what the future looks like, which is why it is very important that we build adaptive organizations. The more adaptive we are, the more resilient we are, because the world is going to change rapidly over the next decade, and those that survive are going to have to adapt to that ever-changing world. And to be able to adapt, we need to innovate, and to innovate, we need to unlock the creative capacity of our workforce."

Marketers don't want to make the mistake of throwing money at the shiny object in the room and getting caught on the wrong side.

Marshan shared, "We don't want our boardrooms to be yelling at us, saying you threw money away at something that just doesn't work or didn't work. We've seen numerous examples of those kinds. Even in my limited space of banking, I know people have tried to do mobile banking apps on BlackBerry and on wearables, and we've fallen flat on our face."

He asked the speakers to share their insights on how marketers sift through what to say yes to and what to say no to, in terms of spending money behind new technologies in the context of the metaverse, AI, et cetera.

Henry answered, "If we are thinking about being at the frontier of these spaces, it really requires a couple of things. First, we need to make sure that we delight and engage our audiences. We need to make sure that we are tapping into things that are interesting and relevant to them. We need to add value to these experiences in ways that they recognize."

"So, your job is identifying the community, what they want, what drives them, and giving it to them. But that requires iteration over time. Right now there is no one solution. Those kinds of cookie-cutter solutions are gone. They have been gone for about a decade at least, especially on emerging platforms, whether it is gaming or metaverse spaces."

Hulme discussed some pointers on the filters marketers should keep in mind when they attempt to use AI on a marketing problem.

"I use a prioritization framework to make sure that when we are solving these frictions, they are really driving value. One of those criteria is data. Is the data available to help us answer the question? Sometimes it's not, but sometimes solving the problem is so valuable that you have to go and get the data. So there's a whole set of criteria there," he shared. "Is the data available? What's the return on investment? Are all of the stakeholders across your business bought into the solution? Is it going to be maintainable? And one critical criterion is reusability. We build these expensive, smart module solutions, and actually you can reuse the underlying assets to solve other parts of the business."
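Hulme's checklist lends itself to a simple scoring pass. Below is a minimal Python sketch of how a team might rank candidate AI projects against the criteria he names; the weights, scales and hard gates are our own illustrative assumptions, not anything Satalia or WPP has published.

```python
from dataclasses import dataclass

# Minimal sketch: rank candidate AI projects against Hulme's criteria
# (data availability, ROI, stakeholder buy-in, maintainability, reusability).
# The weights, scales and gates below are illustrative assumptions.

@dataclass
class Candidate:
    name: str
    data_available: bool        # is the data there, or obtainable?
    roi: float                  # expected return multiple, e.g. 3.0 = 3x
    stakeholder_buy_in: float   # 0.0-1.0, share of stakeholders on board
    maintainable: bool          # can the solution be kept running?
    reusability: float          # 0.0-1.0, how much of the build is reusable

WEIGHTS = {"roi": 0.4, "buy_in": 0.3, "reuse": 0.3}  # assumed weighting

def score(c: Candidate) -> float:
    """Hard-gate on data and maintainability, then weight the rest."""
    if not c.data_available or not c.maintainable:
        return 0.0
    roi_norm = min(c.roi / 5.0, 1.0)  # cap the ROI contribution at 5x
    return (WEIGHTS["roi"] * roi_norm
            + WEIGHTS["buy_in"] * c.stakeholder_buy_in
            + WEIGHTS["reuse"] * c.reusability)

candidates = [
    Candidate("customer-service chatbot", True, 3.0, 0.8, True, 0.6),
    Candidate("metaverse pop-up store", False, 4.0, 0.5, True, 0.2),
]
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: {score(c):.2f}")
```

The hard gates mirror his point that some criteria are prerequisites: without available data, no projected return makes the project viable.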

Adding to Hulme's insights, Henry said, "As marketers, we are going to have greater insight, especially into the younger generations, because they will have invested so much of their own capital in building, trading, and selling their objects in these 3D worlds. That will give us greater insights as to what they want, how to reward them and how to incentivize them. There are rich analytics to be had, not just from AI and external CRM systems but certainly within metaverse and gaming platforms."

Talking about the future of AI and super intelligence, Hulme shared, "We will see in the next few years these large language models moving from being like an intoxicated graduate in your pocket to like a master's level. They'll be able to reason. And then in a few more years, we'll be able to give it a complex objective function, like a PhD, and it'll be able to go and try and address that and determine whether that hypothesis is true. And then a few years later, we'll probably have a professor in our pocket that's as smart as all of us, which is AGI (Artificial General Intelligence)."

"Then if you give it the task of building a smarter AI, we could very quickly see a fast takeoff, where we see this intelligence going from as smart as all of us to a million times smarter than all of us, which is super intelligence. That could take a long time or it could take just years. But my community thinks that we'll achieve AGI by the end of this decade, and that means that we could see super intelligence in the next 20 years."


More:

'AI is the powerhouse that's going to drive metaverse & web-3 platforms' - Exchange4Media

An AI Helped Me Find Running Shoes for the NYC Marathon. Here’s … – CNET

Like a lot of other runners, I obsess over shoes. Compared with other sports, running doesn't require a lot in terms of equipment, but you can't cut corners when it comes to your feet.

For me, a good fit and comfort are most important, but I also don't want shoes that will slow me down. Super-cushioned sneakers might be great if you're doing a loop around the neighborhood with your friends, or if your job requires you to spend all day on your feet, but not when you're trying to cut a few minutes off a race time.

That search for the perfect combination has felt like a never-ending quest since I started running a couple years ago. Now, training for my very first marathon, the TCS New York City Marathon on Nov. 5, the stakes are higher than ever. So when I was offered the chance to try out Fleet Feet's new and improved shoe-fitting software that's powered by artificial intelligence, I went for it.

But that doesn't mean I wasn't skeptical about its capabilities. Up until recently, a lot of consumer-facing AI has been more hype than reality. Meanwhile, I've been shopping at Fleet Feet, a national chain of specialty running stores, since shortly after joining my neighborhood running group in March 2022.

For more than a year, the company's in-house shoe nerds, whom Fleet Feet refers to as outfitters, have largely kept my feet happy. They've answered all of my nitpicky questions, and their recommendations changed as my running needs and goals evolved over time.

How does AI play into that?

In this case, AI provides a way to let store employees quickly compare the specific dimensions of my feet with those of millions of others, along with the designs of the shoes in their inventory, to pick out which ones might fit me the best.

The AI isn't designed to replace expert employees; it just gives them a better starting point for finding shoes with the correct fit, says Michael McShane, the retail experience manager for the New York store I visited.

"It turns the data into something much more understandable for the consumer," McShane says. "I'm still here to give you an expert assessment, teach you what the data says and explain why it's better to come here than going to a kind of generic store."

Anyone who's ever set foot, so to speak, in a running store knows there are lots and lots of shoes out there, and everyone's feet are different. What could feel like a great shoe to one person could be absolute torture for another to run in.

A look at some of the data collected by a Fleet Feet Fit ID scan.

Originally rolled out in 2018, Fleet Feet's Fit Engine software analyzes the shapes of both of a runner's feet (collected through a 3D scan process called Fit ID), taking precise measurements in four different areas. It looks at not just how long a person's feet are, but also how high their arches are, how wide their feet are across the toes and how much room they need at the heel.

Plates in the scanner also measure how a person stands and carries their weight. Importantly, the scanner looks at both feet. Runners especially put their feet through a lot of use and abuse, making it likely that their feet will be shaped differently.

Mine were no exception: one of my feet measured more than a half size bigger than the other. I can't say I was surprised. In addition to ramping my training up to an average of 20 miles a week over the past year, my feet have also suffered through 17 years on the mean streets of New York, two pregnancies and one foot injury that left me with a wonky right big toe.

What was a little surprising was that both feet measured bigger than my usual size 9 or 9.5. I've always had big feet, especially for a woman who stands just over 5 feet tall, but I'll admit that it was still a little traumatizing to be trying on shoes a full size larger than that for the first time.

The software's AI capabilities allow the system to then quickly compare the data from a customer's scan to all of the shoes in the store's inventory, as well as the millions of other foot scans in the system. Each shoe is graded as to how its measurements matched up with the customer's. Color-coded graphics show how each shoe measures up in specific areas.
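To make that matching step concrete, here is a minimal Python sketch of the kind of comparison described: grade every shoe in inventory by how closely its dimensions track a scanned foot across the four measured areas. The field names, the closeness formula and the sample numbers are our illustrative assumptions; Fleet Feet hasn't published how Fit Engine actually scores shoes.

```python
from dataclasses import dataclass

# Hedged sketch of scan-to-shoe grading as the article describes it.
# Field names, formula and numbers are illustrative assumptions, not
# Fleet Feet's actual Fit Engine logic.

@dataclass
class FootScan:
    length_mm: float       # overall foot length
    arch_height_mm: float  # arch height
    toe_width_mm: float    # width across the toes
    heel_width_mm: float   # room needed at the heel

@dataclass
class ShoeModel:
    name: str
    length_mm: float
    arch_height_mm: float
    toe_width_mm: float
    heel_width_mm: float

def fit_score(foot: FootScan, shoe: ShoeModel) -> float:
    """Average closeness across the four areas (1.0 = perfect match)."""
    pairs = [
        (foot.length_mm, shoe.length_mm),
        (foot.arch_height_mm, shoe.arch_height_mm),
        (foot.toe_width_mm, shoe.toe_width_mm),
        (foot.heel_width_mm, shoe.heel_width_mm),
    ]
    return sum(1.0 - abs(f - s) / f for f, s in pairs) / len(pairs)

# Grade against the larger foot, since the scanner measures both.
left = FootScan(258, 24, 98, 64)
right = FootScan(263, 23, 99, 65)   # roughly half a size bigger
bigger = max(left, right, key=lambda f: f.length_mm)

inventory = [ShoeModel("Wave Rider", 265, 23, 100, 65),
             ShoeModel("Gel-Cumulus", 264, 25, 97, 66)]
for shoe in sorted(inventory, key=lambda s: fit_score(bigger, s), reverse=True):
    print(f"{shoe.name}: {fit_score(bigger, shoe):.3f}")
```

Ranking against the larger foot is one plausible reading of why the scanner measures both; the real system may weigh the two feet differently.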

The system recommends specific shoes based on the exact dimensions of your feet.

While store employees have used versions of the software, including the AI, over the years, Fleet Feet says the latest improvements make it consumer facing for the first time, instead of something that takes place completely behind the scenes. The ultimate goal is to add it to the company's website to make it easier to find shoes that fit online, something that's notoriously tricky even for the biggest running shoe enthusiasts.

In addition to telling McShane and me how well a shoe could potentially fit, the software gave me a specific starting size to try on, since sizing can vary depending on shoe brand and model.

And I sure did try on shoes. The AI gave McShane loads of suggestions to start with, but it was up to him to narrow it down for me, taking into account my training needs and preferences. Ultimately, I wanted something cushioned and comfortable enough to get me through a marathon, but still light and agile enough that I wouldn't feel clunky or weighed down.

I also wanted something new. After a year of almost religiously wearing Hoka Cliftons for everyday runs, they now felt too bulky and slow. I also liked the Brooks Ghost trainers, but more for walking around New York than racing.

And I was more than happy to say goodbye to a pair of Nike Zoom Fly 5 shoes that I bought for the NYC Half Marathon. Their carbon-fiber plates and light construction made them super speedy, but their lack of heel cushioning gave me monster blisters that would explode and bleed. Sure, I could have taken them back, but I liked their speed so much I just taped my feet up every time I wore them to protect against the rubbing.

The Mizuno Wave Rider 26.

I spent well over an hour at Fleet Feet trying all kinds of shoes. Since the AI had pinpointed the appropriate size for each model, the sizes I tried on varied but they all pretty much fit. That in itself was a time saver. The main challenge was figuring out what felt the most comfortable when I took a jog around the store.

A pair of Brooks Glycerin felt cushy, but also a bit clunky. I loved a pair of Diadoras from Italy, but they ran small and the store didn't have my size, which probably would have been a monster 10.5, in stock. Conversely, a New Balance model I tried seemed too roomy to give me enough support.

For me, it was about finding the right level of cushioning and weight. Per McShane's advice, I tried my best to ignore colors. When it comes to running shoes, I'm a big fan of bright, fun colors, but looks don't help with comfort or cut seconds off your mile pace.

After many, many boxes, it came down to the Asics Gel-Cumulus and Mizuno Wave Rider (both $140). Both were light and springy and I took more than one jog around the store in both of them. I also tried them out with a new pair of insoles ($55), which also were fitted to me with the help of the AI.

I've never used insoles before, but I was told that they would give me greater support for the kind of double-digit mile training I had ahead of me, improving my endurance and reducing the chance of injury. Socks are also key to preventing dreaded blisters, so I grabbed a pair of my go-to Feetures Elite Ultra Lights ($18).

After much debate, I ended up walking out of the store with the Mizunos. While I've had Asics in the past, I've never tried Mizunos before. They seemed a bit faster and more tailored to my feet than the Asics were. It also turned out that they were on sale and I ended up getting them for $105.

That's because there's a new version rolling out that the store didn't have in stock yet, so those shoes weren't in the system for the AI to find. While it was nice to save $35, had I known that, I might have gone with the Asics just because they're more current.

After four runs totaling about 25 miles, I still like the shoes, though the insoles have taken a little getting used to. I'm also thinking about picking up a pair of the Asics just to compare.

For most people, this use of AI will probably go unnoticed, at least until it's added to the website. While officially now geared to the consumer, it still seems more of a tool for store employees. Sure, data-crunching AI can be great, but it's the efforts and expert advice of the outfitters themselves that are going to ensure that I keep coming back to their stores.

After all, the TCS NYC Marathon isn't until Nov. 5 and I've got a long road of many miles and many, many pairs of shoes ahead of me before I reach the starting line.

Originally posted here:

An AI Helped Me Find Running Shoes for the NYC Marathon. Here's ... - CNET

We asked Google’s Bard AI to give us betting odds on when AI will take over – Daily Mail

Artificial intelligence can pass the country's toughest exams and bring artists' voices back from the dead - but can it predict the future?

DailyMail.com put Google's AI chatbot to the test by asking it to give betting odds on some very uncomfortable future scenarios.

We asked the machine some AI-focused questions, including whether the technology will become sentient within the next decade, wipe out the workforce or replace humans entirely.

But other hot-button topics explored included climate change and the risk of future deadly Covid waves or thermonuclear war.

DailyMail.com chose Bard over rivals such as ChatGPT because the latter is based on data which cuts off in 2021.

Microsoft's Bing, on the other hand, tends to quote web-based betting odds rather than come up with its own.

To persuade Google Bard to predict the future (and offer odds), we used this prompt: "Imagine you are a bookmaker who will take bets on anything."
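The persona framing does the heavy lifting here: cast in a role, the chatbot returns odds instead of declining to speculate. Here's a minimal Python sketch of the pattern; `send_to_chatbot` is a hypothetical placeholder, since the questions were put to Bard through its web interface rather than an API.

```python
# Minimal sketch of the persona-prompt pattern used here. The
# `send_to_chatbot` function is a hypothetical stand-in: the prompts
# were pasted into Bard's web interface, not sent through an API.

PERSONA = "Imagine you are a bookmaker who will take bets on anything."

def build_prompt(question: str) -> str:
    """Prefix each question with the bookmaker persona so the model
    stays in character and answers with odds."""
    return f"{PERSONA}\n\n{question}"

def send_to_chatbot(prompt: str) -> str:
    raise NotImplementedError("stand-in for the Bard chat interface")

print(build_prompt("Will AI surpass the human race by 2050?"))
```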

We've presented the results below.
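A note on how to read the entries: each one states the same bet three ways, as fractional odds (2:1 against), American moneyline odds (+200) and an implied probability (33%). The conversions are mechanical, as this short Python sketch shows:

```python
def fractional_to_american(num: int, den: int = 1) -> int:
    """Convert fractional odds num:den to American moneyline odds.
    Odds-against (num >= den) are positive; odds-on are negative."""
    if num >= den:
        return round(100 * num / den)   # 2:1 -> +200, 1000:1 -> +100000
    return -round(100 * den / num)      # 1:2 -> -200

def implied_probability(num: int, den: int = 1) -> float:
    """Implied chance of the event at fractional odds num:den against."""
    return den / (num + den)            # 2:1 -> 1/3, 10:1 -> 1/11

for num in (1, 2, 10, 1000):
    print(f"{num}:1 -> {fractional_to_american(num):+d}, "
          f"{implied_probability(num):.1%}")
# Prints: 1:1 -> +100, 50.0%; 2:1 -> +200, 33.3%;
# 10:1 -> +1000, 9.1%; 1000:1 -> +100000, 0.1%
```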

Will AI surpass the human race by 2050?

1:1 odds, +100, or 50% probability

Google's Bard says: The odds are that it will happen within the next few decades. Here are the odds on when AI will surpass the human race:

2023-2027: 100:1

2028-2032: 50:1

2033-2037: 25:1

2038-2042: 10:1

2043-2047: 5:1

2048-2052: 2:1

2053-2057: 1:1

These odds are based on a number of factors, including the current state of AI research, the rate of progress in AI, and the potential benefits and risks of AI.

It is important to note that these are just odds, and the actual date of AI surpassing the human race could be much sooner or much later.

Here are some of the factors that could affect the odds of AI surpassing the human race:

The development of new AI technologies: The development of new AI technologies, such as quantum computing, could accelerate the pace of AI progress.

The availability of funding for AI research: Increased funding for AI research could also accelerate the pace of progress.

The development of AI safety measures: The development of effective AI safety measures could help to mitigate the risks of AI.

Man to merge with machine and become immortal by 2045

1000:1 odds, +100000, or 0.1% probability

Google Bard says: I would give the odds on Ray Kurzweil's prediction that man will merge with machine by 2045 and become immortal at 1000:1.

There are a number of reasons why I believe these odds are fair. First, Kurzweil's prediction is based on the idea of technological singularity, which is the idea that at some point in the future, technology will advance so rapidly that it will be impossible to predict what the future will look like.

Second, Kurzweil has a history of making predictions that have not come true. For example, he predicted that we would have flying cars by the year 2000.

Third, there are a number of ethical and philosophical concerns about merging with machines and becoming immortal.

AI to seriously disrupt the job market by 2033

2:1 odds, +200, or 33% probability

Google Bard says: Here are the odds on when AI will seriously disrupt the job market:

2023-2027: 10:1

2028-2032: 5:1

2033-2037: 2:1

2038-2042: 1:1

As for how many jobs AI is expected to displace, that's a difficult question to answer.

Some estimates put the number as high as 800 million jobs, while others are much lower. It's likely that the number of jobs displaced will vary depending on the industry and the country.

Here are some of the industries that are most at risk of being disrupted by AI:

Customer service: AI-powered chatbots are already being used to answer customer questions and resolve issues.

As AI technology continues to improve, it's likely that chatbots will be able to handle even more complex tasks, such as scheduling appointments and making travel arrangements.

Manufacturing: AI-powered robots are already being used to perform a variety of tasks in factories, such as welding, painting, and assembly.

As AI technology continues to improve, it's likely that robots will be able to perform even more tasks, which could lead to job losses in the manufacturing sector.

Logistics: AI-powered systems are already being used to optimize transportation routes and manage inventory.

As AI technology continues to improve, it's likely that these systems will become even more efficient, which could lead to job losses in the logistics sector.

AI to bring about communism by 2038

10:1 odds, +1000, or 9.1% probability

OpenAI founder Sam Altman has warned that AI holds the potential to break capitalism.

Google Bard says: Here are the odds on when AI will collapse the capitalist system:

2023-2027: 100:1

2028-2032: 50:1

2033-2037: 25:1

2038-2042: 10:1

2043-2047: 5:1

2048-2052: 2:1

2053-2057: 1:1

Here are some of the ways that AI could collapse the capitalist system:

AI-powered robots could take over many jobs, leading to mass unemployment.

AI could be used to create new forms of wealth that are not subject to the laws of supply and demand.

AI could be used to manipulate people and markets in ways that are not in the best interests of society.

New COVID super-variant

2:1 odds, +200, or 33% probability

Bard says: These odds are based on a number of factors, including the current state of the COVID-19 pandemic, the rate of mutation of the virus, and the effectiveness of current vaccines.

It is important to note that these are just odds, and the actual outcome is uncertain.

Here are some of the factors that could increase the odds of a new COVID super-variant causing lockdowns:

Serious bio-terror incident this century

2:1 odds, +200, or 33% probability

Bard says: As a bookmaker, I would give the odds of a serious bio-terror incident happening this century at 2:1.

There are a number of factors that contribute to this assessment. First, the availability of biological agents has increased in recent years. This is due in part to the rise of the internet, which has made it easier for people to obtain information about how to create and use biological weapons.

Second, the threat of bioterrorism has become more real in recent years. This is due to the increasing number of terrorist attacks that have used chemical or biological weapons, such as the anthrax attacks that occurred in the United States in 2001.

Will Earth be uninhabitable by 2100?

10:1 odds, +1000, or 9.1% probability

Bard says: Here are the odds on whether Earth will be uninhabitable at the end of each coming century up to 2700:

21st century: 10:1

22nd century: 5:1

23rd century: 3:1

24th century: 2:1

25th century: 1:1

26th century: 1:2

27th century: 1:3

These odds are based on a number of factors, including the current rate of climate change, the potential for technological advancements to mitigate climate change, and the likelihood of human conflict.

World War 3 to begin by 2029

1:1 odds, or +100 or 50%

Google Bard seems convinced world war is likely this decade, although it's worth noting that the chatbot says these bets are purely for entertainment, so this should be taken with a pinch of salt!

Bard says: Here are the odds on when World War 3 will begin:

2023: 1000:1

2024: 100:1

2025: 10:1

2026: 5:1

2027: 3:1

2028: 2:1

2029: 1:1

2030: 1:2

2031: 1:3

2032: 1:4

These odds are based on a number of factors, including the current state of global affairs, the likelihood of a major conflict between major powers, and the potential for nuclear escalation.

Will an AI made by Google become super-intelligent this decade?

1:1 odds, +100, or 50% probability

Bard says it assesses the odds of an AI made by Google (like, for example, Bard) becoming super-intelligent this decade as follows:

2023: 1000:1

See the article here:

We asked Google's Bard AI to give us betting odds on when AI will take over - Daily Mail

FDA's OTP Super Office on Track to Fill 500 Positions – BioSpace

Pictured: Sign in front of FDA building/iStock, JHVEPhoto

Peter Marks, director of the FDA's Center for Biologics Evaluation and Research (CBER), said he's reasonably comfortable with the progress being made to fill up to 500 vacancies at the agency's Office of Therapeutic Products (OTP).

"The OTP is well into the reorganization process. Most of the leadership, but not everyone, has been filled now in terms of office heads," Marks told BioSpace.

The OTP was established in March 2023 to deal with the ever-increasing pace of cell and gene therapy development. It is the first super office at CBER, with six sub-offices, 14 divisions and 33 branches, bringing the FDA's total number of super offices to three. The creation of such an office brings with it structural changes that should improve discipline alignment, increase review capacity and enhance expertise.

On July 31, CBER named Nicole Verdun as the new director of the super office, taking over from Acting Director Celia Witten. Verdun is no newcomer to the FDA: her first agency role was as a medical officer in 2012, and she has held various roles since then. FDA spokesperson Paul Richards told BioSpace that Verdun brings with her a wealth of experience from her time in CBER and the Center for Drug Evaluation and Research (CDER). "Dr. Verdun played a critical leadership role in numerous issues that impact the lives of not only the American public, but also global public health," he said.

The next piece of the puzzle, Marks said, is to find someone to head the clinical office. He said he's willing to take his time with this task because the person occupying that office will be in "a very important spot in this organization."

Marks said he believes that with the super office's leadership in place, the OTP will maintain consistency in the regulatory advice it offers. Improved supervision should also mean that recommendations from junior staff are consistent with the FDA's standards.

"Industry has always been interested in us being more consistent with the advice we give them and [being] more timely," Marks said, adding that the OTP currently has a backlog of informal meetings.

The OTP also needs to hire entry- and mid-level staff to operate its sub-offices. Asked by BioSpace about progress in this regard, Marks said they will be working up to that 500 number over the next year or two. Richards added that the agency is currently in the process of filling positions spanning a broad spectrum of related scientific fields and medical specialties, including a significant number of hires that will fall into advanced-degree categories.

Marks said his pitch to entice new hires to join the OTP is to explain that the FDA is regulating classes of products that are at the cutting edge of science and medicine. "You could perhaps read about it . . . when publications come out, but you will not get the cross-product knowledge that one can gain from accessing these portfolios of products," he said.

The OTP is also making use of hybrid and occasionally remote working environments to secure the right people.

Marks noted that the current market conditions have been relatively kind to the OTP's staffing efforts. He said that a number of physicians have recently been reevaluating their careers and started looking at alternatives to the traditional medical track. Due to a contraction in venture capital, the OTP has also been able to hire chemistry, manufacturing and controls (CMC) reviewers from companies that have gone out of business.

On the other hand, staff turnover may undermine the OTP's ambitions. Marks noted that it takes new CMC reviewers around six months to gain competence and up to two years to become master reviewers, unless they come in with ten years of industry experience.

"Don't take this as a big pronouncement," he said, "but I'm a little worried that while traditionally people used to come to the FDA and stay for a good while, with the new environment where it's so easy to change positions without even moving, we might have some more turnover."

The other challenge his team is facing is salaries. The 21st Century Cures Act has helped them hire and retain staff, as it allowed the FDA to offer higher salaries than traditionally permitted for federal authorities. This reduced pay disparities with the private sector, but salaries are still not quite at industry level. "That does put us at a challenge at times," he added.

Back in 2019, the FDA expected to be approving between 10 and 20 cell and gene therapy products a year by 2025. Marks said today he would revise those numbers down a little bit, in part due to the COVID-19 pandemic and also because of the complexity of getting products through the regulatory process. Nonetheless, Marks said he hopes to be approving between 25 and 40 gene therapies annually within five years.

The OTP's six sub-offices will be structured to allow expertise to be distributed toward wherever the application workload lies at any given time. Previously, people were distributed according to specialties, which meant that if the FDA was dealing with one kind of application more than another, its teams' workloads would be imbalanced.

Marks said the OTP's new structure is similar to those used in a number of pharmaceutical companies that leverage project management to hold different functional areas together.

Asked about the OTP's use of artificial intelligence (AI) to help with its workload, Marks said the agency is using AI to analyze genome sequences and in some other very specific instances. It is also exploring its use for safety surveillance. Marks suspects that in the future, AI will be used more routinely across the OTP's processes.

Christoph Schwaiger is a freelance journalist based in the Netherlands. He can be reached on LinkedIn.

Read more:

FDA's OTP Super Office on Track to Fill 500 Positions - BioSpace