Category Archives: Artificial Intelligence
University of Maryland Launches Artificial Intelligence Interdisciplinary Institute – elblog.pl
The University of Maryland is spearheading the future of technology with the establishment of its Artificial Intelligence Interdisciplinary Institute at Maryland (AIM). Recognizing the rapid advancements in artificial intelligence (AI) and the skepticism surrounding the intentions of big tech companies, the university aims to foster trust and ensure that AI is developed to benefit society.
The newly established program, housed in the Department of Computer Science, will offer a range of courses open to students from all academic disciplines. In addition, the university will introduce a new major in artificial intelligence, available in two tracks: Bachelor of Science (B.S.) and Bachelor of Arts (B.A.).
According to Hal Daumé III, the inaugural director of the AIM program, the institute intends to provide a path to AI for all students, regardless of their major. Within their first year or two of study, students will gain a strong understanding of modern AI technology, equipping them for various career and educational paths.
While the B.S. and B.A. tracks will share a common base of knowledge, the B.A. track will delve deeper into the humanistic and social science aspects, while the B.S. track will focus on the mathematical and algorithmic aspects of AI.
Addressing concerns about the impact and trustworthiness of AI technology, Daumé emphasized the importance of showcasing the positive outcomes that can arise from AI while also providing a realistic understanding of potential risks. The AIM program aims to incorporate perspectives from the arts, humanities, and social sciences to ensure a multidisciplinary approach to AI development that considers societal needs and values.
Daumé believes that universities are well-positioned to lead this kind of work due to their diverse range of faculties, including humanists and social scientists. By bringing experts from various fields together, the university aims to develop AI that is beneficial for all.
Even students majoring in non-STEM fields will have the opportunity to minor in artificial intelligence, recognizing the widespread impact of AI on various professions. As technology continues to evolve, career trajectories may change, making it essential for individuals in all fields to have a solid understanding of AI.
The university expects that students will be able to enroll in the artificial intelligence major soon, with over 100 faculty members dedicated to the AIM program. The goal is to ensure that AI is developed responsibly and ethically, with a clear understanding of its implications.
The establishment of the Artificial Intelligence Interdisciplinary Institute at the University of Maryland reflects the growing importance of AI in various industries. AI technology has the potential to revolutionize sectors such as healthcare, finance, manufacturing, and transportation. With the rapid advancements in AI, it has become crucial to educate students from diverse academic backgrounds on how this technology works and its implications for society.
Market forecasts suggest that the global AI market will continue to expand in the coming years. According to a report by Market Research Future, the AI market is projected to reach a value of $190.61 billion by 2025, growing at a compound annual growth rate (CAGR) of 36.62%. This growth can be attributed to the increasing adoption of AI in various industries, as organizations recognize its potential to streamline processes, improve efficiency, and make data-driven decisions.
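The cited projection is a straightforward application of the compound-growth formula, value_n = value_0 × (1 + r)^n. A minimal sketch (the base-year value below is illustrative, not taken from the report):

```python
# Compound growth: value_n = value_0 * (1 + r) ** n.
def project_value(base_value: float, cagr: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return base_value * (1 + cagr) ** years

# A hypothetical base of $15.7B compounding at 36.62% for 8 years lands
# near the cited $190.61B figure; the base year and value are illustrative.
print(f"${project_value(15.7, 0.3662, 8):.1f}B")  # roughly $190B
```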
However, the development and implementation of AI also come with unique challenges and concerns. Ethics, bias, and privacy are key issues that need to be addressed. With AI-powered systems making decisions that impact individuals and society as a whole, it is vital to ensure that these systems are transparent, fair, and accountable.
To mitigate these challenges, universities like the University of Maryland are taking proactive steps. The AI Interdisciplinary Institute aims to foster trust in AI technology by incorporating perspectives from disciplines such as the arts, humanities, and social sciences. This multidisciplinary approach considers not only the technical aspects of AI but also the ethical and societal implications.
The integration of AI in non-STEM fields is another significant aspect of the University of Maryland's efforts. By offering a minor in artificial intelligence, the university recognizes that AI will have an impact on various professions, regardless of the traditional STEM focus. Professionals in fields such as law, journalism, and marketing can benefit from understanding the applications and implications of AI.
For further information on AI and its impact in different industries, visit reputable sources like the World Bank or PwC. These organizations provide valuable insights into the current state and future trends of the AI market, along with its potential benefits and challenges.
See the rest here:
University of Maryland Launches Artificial Intelligence Interdisciplinary Institute - elblog.pl
University of Maryland Launches Artificial Intelligence Interdisciplinary Institute to Foster Trust and Benefit Society – elblog.pl
The University of Maryland is making significant strides in shaping the future of technology with the creation of its Artificial Intelligence Interdisciplinary Institute at Maryland (AIM). By recognizing the rapid advancements in artificial intelligence (AI) and the concerns surrounding the intentions of big tech companies, the university aims to establish trust and ensure that AI is developed for the betterment of society.
AIM, housed within the Department of Computer Science, offers a wide range of courses that are open to students from all academic disciplines. In addition, the university has introduced a new major in artificial intelligence with two tracks: Bachelor of Science (B.S.) and Bachelor of Arts (B.A.).
With the AIM program, the university intends to provide a path to AI for all students, regardless of their major. Within the first year or two of study, students gain a strong understanding of modern AI technology, equipping them for various career and educational paths.
While both tracks share a common base of knowledge, the B.A. track delves deeper into the humanistic and social science aspects, while the B.S. track focuses on the mathematical and algorithmic aspects of AI.
Addressing concerns about the impact and trustworthiness of AI technology, the inaugural director of the AIM program, Hal Daumé III, emphasizes the importance of showcasing the positive outcomes that can arise from AI while providing a realistic understanding of potential risks. The AIM program incorporates perspectives from the arts, humanities, and social sciences to ensure a multidisciplinary approach to AI development that considers societal needs and values.
According to Daumé, universities are uniquely positioned to lead this kind of work due to their diverse range of faculties, including humanists and social scientists. By bringing experts from various fields together, the university aims to develop AI that is beneficial for all.
Even students majoring in non-STEM fields will have the opportunity to minor in artificial intelligence, recognizing the widespread impact of AI on various professions. With technology continuously evolving, it is crucial for individuals in all fields to have a solid understanding of AI.
The University of Maryland expects to soon offer the artificial intelligence major, with over 100 dedicated faculty members for the AIM program. The goal is to ensure that AI is developed responsibly and ethically, with a clear comprehension of its implications.
This establishment of the Artificial Intelligence Interdisciplinary Institute at the University of Maryland signifies the increasing significance of AI in different industries. AI technology has the potential to revolutionize sectors such as healthcare, finance, manufacturing, and transportation.
Market forecasts suggest that the global AI market will continue to expand significantly. According to a report by Market Research Future, the AI market is projected to reach a value of $190.61 billion by 2025, growing at a compound annual growth rate (CAGR) of 36.62%. It is no surprise given the increasing adoption of AI in various industries, as organizations recognize its potential to streamline processes, improve efficiency, and make data-driven decisions.
However, the development and implementation of AI also come with unique challenges and concerns. Ethics, bias, and privacy are key issues that need to be addressed. With AI-powered systems making decisions that impact individuals and society at large, transparency, fairness, and accountability are vital.
To overcome these challenges, universities like the University of Maryland are proactively taking action. The AI Interdisciplinary Institute seeks to foster trust in AI technology by incorporating perspectives from disciplines beyond technology. This multidisciplinary approach considers not only the technical aspects of AI but also the ethical and societal implications.
The integration of AI into non-STEM fields is another crucial aspect of the University of Maryland's efforts. By offering a minor in artificial intelligence, the university recognizes that AI will have an impact on various professions, regardless of their traditional focus on STEM. Professionals in fields such as law, journalism, and marketing can benefit from understanding the applications and implications of AI.
For further information on AI and its impact in different industries, it is recommended to visit reputable sources such as the World Bank or PwC. These organizations provide valuable insights into the current state and future trends of the AI market, along with its potential benefits and challenges.
The Artificial Intelligence Interdisciplinary Institute at Maryland (AIM) is a program established by the University of Maryland to advance the development of artificial intelligence (AI) and foster trust in its applications. AIM offers courses and majors in AI, open to students from all academic disciplines.
The University of Maryland offers two tracks in their artificial intelligence major: Bachelor of Science (B.S.) and Bachelor of Arts (B.A.). The B.S. track focuses on the mathematical and algorithmic aspects of AI, while the B.A. track delves deeper into the humanistic and social science aspects.
Incorporating perspectives from non-STEM fields, such as the arts, humanities, and social sciences, is crucial in AI development. It ensures a multidisciplinary approach that considers societal needs, values, and ethical implications. By bringing experts from various fields together, the University of Maryland aims to develop AI that benefits all.
The University of Maryland also recognizes the widespread impact of AI on various professions. It offers a minor in artificial intelligence, allowing students majoring in non-STEM fields to gain a solid understanding of AI and its applications and implications in their respective fields.
Read the original post:
University of Maryland Launches Artificial Intelligence Interdisciplinary Institute to Foster Trust and Benefit Society - elblog.pl
How AI risks creating a black box at the heart of US legal system – The Hill
Artificial intelligence (AI) is playing an expanding — and often invisible — role in America’s legal system. While AI tools are being used to inform criminal investigations, there is often no way for defendants to challenge their digital accuser or even know what role it played in the case.
“Under current law in most jurisdictions, [prosecutors] don’t have to disclose artificial intelligence use to the judge or defense counsel,” Rebecca Wexler, professor of law at the University of California, Berkeley, told The Hill.
AI and machine learning tools are being deployed by police and prosecutors to identify faces, weapons, license plates and objects at crime scenes, survey live feeds for suspicious behavior, enhance DNA analysis, direct police to gunshots, determine how likely a defendant is to skip bail, forecast crime and process evidence, according to the National Institute of Justice.
But trade secrets laws are blocking public scrutiny of how these tools work, creating a “black box” in the criminal justice system, with no guardrails for how AI can be used and when it must be disclosed.
“There’s no standard at any level,” said Brandon Garrett of Duke University School of Law. “The big picture point is that just like there need to be standards for the product, there needs to be standards on how and when they’re used.”
Concerns about AI in the criminal justice system are compounded by research showing how tools like facial recognition are prone to bias — for example, misidentifying people of color because it was trained on mostly white faces.
For the past three Congresses, Rep. Mark Takano (D-Calif.), joined twice by Rep. Dwight Evans (D-Pa.), has introduced legislation that addresses issues of testing and transparency in criminal justice, so far failing to garner enough traction to pass the bill.
“Nobody had really addressed this particular issue of black box technologies that are being marketed to prosecutors, police and law enforcement folks on the basis of their alleged efficacy,” Takano said in an interview with The Hill.
“Every American wants to feel that they can get a fair trial if they are accused of something wrong — that’s one of the hallmarks of being an American,” he added. “But what do you do when the witness and evidence brought against you is a machine protected as a trade secret, how do you contend with that?”
The term artificial intelligence refers to the broad discipline of making machines that learn from experience and mimic humanlike intelligence in making predictions. Unlike other forensic technologies law enforcement uses, AI is responsive to its environment and sensitive to its users, meaning it can produce different outcomes throughout its life cycle.
Without testing and transparency, these nuances are lost and the likelihood of error isn’t accounted for, Garrett said.
Currently, public officials are essentially taking private firms at their word that their technologies are as robust or nuanced as advertised, despite expanding research exposing the potential pitfalls of this approach.
Take one of its most common use cases: facial recognition.
Clearview AI, one of the leading contractors for law enforcement, has scraped billions of publicly available social media posts of Americans’ faces to train its AI, for example.
This initial training teaches an AI program a set of patterns and rules that will guide its predictions. Developers tweak the program by instructing it to consider some factors more than others. Theoretically, the AI becomes an expert at matching human faces — at a speed that far outpaces human capacity.
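The matching step described above is commonly framed as comparing learned embedding vectors: two faces "match" when their similarity clears a tuned threshold. A purely hypothetical sketch of that framing (the vectors and threshold are invented, not from any real product):

```python
import math

# Hypothetical sketch: each face image is reduced to an embedding vector,
# and two faces "match" when cosine similarity clears a tuned threshold.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(probe: list[float], candidate: list[float], threshold: float = 0.8) -> bool:
    return cosine_similarity(probe, candidate) >= threshold

probe = [0.9, 0.1, 0.4]
same_person = [0.85, 0.15, 0.38]   # close in embedding space
stranger = [0.1, 0.9, 0.2]         # far away

print(is_match(probe, same_person))  # True
print(is_match(probe, stranger))     # False
```

The threshold is the point where the bias concerns below enter: a cutoff tuned on one population can behave very differently on another.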
But when the machine goes out into the field, it may see a population that looks different from its training set. Individual facial recognition algorithms generate notably different findings from their peer products, a 2019 National Institute of Standards and Technology (NIST) report found.
Researchers have found that facial recognition AI has concerning failure rates when handling images of Black Americans, especially Black women, either failing to identify a person at all or making an inaccurate match.
The Gender Shades project from the Massachusetts Institute of Technology’s Media Lab found consistently high error rates, as high as 33 percent, across AI recognition of females with darker skin tones.
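Evaluations like Gender Shades rest on a simple idea: compute error rates separately per demographic group rather than reporting one aggregate number. A sketch of that disaggregated evaluation, with invented records:

```python
from collections import defaultdict

# Disaggregated evaluation: error rates per demographic group, not one
# aggregate accuracy figure. The records below are invented for illustration.
def error_rates_by_group(records):
    """records: iterable of (group, predicted_correctly) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False), ("darker-skinned female", False),
]
print(error_rates_by_group(sample))
# {'lighter-skinned male': 0.25, 'darker-skinned female': 0.75}
```

An aggregate accuracy over this sample would read 50 percent and hide the 3x gap between groups, which is exactly the effect the study surfaced.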
Products from Amazon, IBM and Microsoft each exhibited this problem in the study, and some of their products have since been taken off the market. Multiple academic institutions — George Mason University, the University of Texas at Dallas, and New York University (NYU) — have corroborated persistent demographic disparities in facial identification rates.
But studies like the Gender Shades project test facial recognition accuracy on comparatively ideal image quality.
Footage used by police is often not ideal, and a selling point of AI to law enforcement is that it can make use of poor-quality data previously useless to human investigators or traditional forensic algorithms.
To account for the possibility of faulty matches, police commonly treat facial recognition matches as a tip for further investigation and not evidence against the person identified.
But tips still narrow law enforcement’s focus in an investigation, said Wexler at Berkeley. If supporting evidence against a suspect is found, that becomes the basis for an indictment while the use of AI is never disclosed.
That means neither the defense, the prosecution nor the judge often know that police have used AI to guide an investigation, and they never get the chance to interrogate its findings.
“At no point, from pretrial investigations through to conviction, does law enforcement have any constitutional, legal, or formal ethical obligation to affirmatively investigate evidence of innocence,” Wexler said at a Senate Judiciary Committee hearing in January.
Creators of the forensic machine learning models have defended the opaqueness of their products by arguing that disclosure will effectively require revealing trade secrets to competitors in their industry.
However, the companies have been largely supportive of government regulation of the technology's use in criminal justice settings.
Amazon’s Rekognition software “should only be used to narrow the field of potential matches,” according to its site.
Matt Wood, vice president of product at Amazon Web Services, is quoted by the company as saying it’s a “very reasonable idea for the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet.”
IBM sunsetted its AI facial recognition products shortly after the Gender Shades study, and IBM CEO Arvind Krishna wrote a letter to Congress calling for “precision regulation” of the tech.
Microsoft discontinued sale of facial recognition AI to police departments in 2020, saying it wouldn’t budge “until strong regulation [on facial recognition AI], grounded in human rights, has been enacted.”
In March, Clearview AI obtained “awardable” status from the Department of Defense’s Tradewinds Solutions Marketplace, a vetting body that creates a suite of technologies ready for “rapid acquisition.”
In a statement to The Hill, Clearview AI CEO Hoan Ton-That said his product survived testing from NIST with higher than a 99 percent accuracy rate “across all demographics.”
“As a person of mixed race, having non-biased technology is important to me,” he said.
“According to the Innocence Project, 70% of wrongful convictions come from eyewitness lineups. Technology like Clearview AI is much more accurate than the human eye, and can be used to exonerate people and eliminate bias from the criminal justice system,” he added.
Still, defense counsel faces a high bar to prove errors in an AI lead. They often must show that AI source code was likely to be “necessary” for a criminal case, a higher standard than for most subpoenas in search of evidence.
“The reason that is so troubling is that it creates a Catch-22. It may be impossible to prove that information you’ve never seen is necessary to a case,” Wexler said.
Defense attorneys have already lost major cases seeking disclosure of non-AI algorithm source code. And in addition to fighting the “necessary” standard, defense counsel often meets resistance from the state, said Mitha Nandagopalan, staff attorney at the Innocence Project.
“In pretty much any case I’ve touched that has involved a request for underlying source code or machine learning model, prosecution has opposed it,” Nandagopalan told The Hill.
Judges frequently don’t see the relevance if AI-generated leads are not considered evidence, she said. And in her work as a defense attorney in Albuquerque, N.M., Nandagopalan said police often fail to disclose it.
“In a lot of cases, we got police reports that said, ‘We looked at the surveillance footage from the store, and using state mugshot databases or other databases, we found a match,’” she said. “Nowhere in their report did it say, ‘We used AI recognition software to identify the suspect.’”
Those concerns extend well beyond facial recognition, encompassing the risk of “dirty data” perpetuating injustices in various uses of AI tools.
The potential for biased AI predictions informed by dirty data is "enormous," said Vincent Southerland, director of the Center on Race, Inequality, and the Law at NYU, in an article for the American Civil Liberties Union.
Southerland cited police behavior in Ferguson, Mo.; Newark, N.J.; Baltimore; and New York City as examples of biased policing that would give AI “a distorted picture” in its handling of risk assessments or crime forecasting, for example.
Crime forecasting refers to AI that takes historical crime data in a community and makes predictions of where future criminal behavior will take place, allowing police, theoretically, to efficiently allocate scarce resources.
Risk assessments broadly refer to AI’s assignment of a risk score to a person based on factors like their criminal history. These scores inform decisions on worthiness for bail, parole and even the severity of sentences.
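Vendors do not publish their scoring internals, but risk assessments are often described as weighted checklists over factors like those above. A purely invented illustration of that structure (the factors, weights, and cutoffs are hypothetical, not drawn from any real tool):

```python
# Invented illustration of a weighted risk-scoring checklist. Real products
# are proprietary; these factors, weights, and bands are hypothetical.
def risk_score(prior_arrests: int, prior_failures_to_appear: int, age: int) -> int:
    score = 2 * min(prior_arrests, 5)             # capped so one factor can't dominate
    score += 3 * min(prior_failures_to_appear, 3)
    score += 2 if age < 25 else 0                 # youth often treated as a risk factor
    return score

def risk_band(score: int) -> str:
    if score <= 4:
        return "low"
    if score <= 9:
        return "medium"
    return "high"

s = risk_score(prior_arrests=1, prior_failures_to_appear=0, age=30)
print(s, risk_band(s))  # 2 low
```

Even this toy is auditable only because its weights are visible; the article's "black box" concern is precisely that real tools' weights and training data are not, and that scores trained on skewed arrest data inherit that skew.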
“The failure to adequately interrogate and reform police data creation and collection practices elevates the risks of skewing predictive policing systems and creating lasting consequences that will permeate throughout the criminal justice system and society more widely,” an NYU Law Review case study said.
Ideally, government users of AI would take an informed approach to AI’s conclusions that accounts for its specific features and limitations, Karen Howard, director of science, technology and analytics assessment at the Government Accountability Office, told The Hill.
But that’s often not possible as long as AI remains in a “black box,” she said, as public officials can’t even confirm the tools are reliable and unbiased in the first place.
Testifying before the Senate Judiciary Committee in January, Howard said any AI program in use by law enforcement without independent review “should set off alarms.”
“The riskiest AI tool would be one where the training data set is not understood, not representative and it’s being handled by somebody who really doesn’t understand what the technology is and isn’t telling them,” she said.
The Biden administration has announced a series of efforts to ensure AI tools aren’t hurting Americans, both in the legal system and elsewhere.
The National Institute of Standards and Technology released an AI Risk Management Framework in January 2023.
“Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities,” it said. “With proper controls, AI systems can mitigate and manage inequitable outcomes.”
The White House Office of Science and Technology Policy also released the Blueprint for an AI Bill of Rights in October 2022, which includes “algorithmic discrimination protections.”
However, these measures do not have the force of law, and they place no binding mandate for testing or transparency on AI products the government uses in the criminal justice system.
The legislation sponsored by Takano and Evans would prohibit the use of trade secret privilege to deny cross-examination of forensic AI to defense attorneys, direct NIST to establish a testing program for forensic algorithms adopted by law enforcement and mandate vetting before use.
“AI would be another layer of source code that would be required to be open under my bill,” Takano said. “That technology is not infallible, that technology should be subjected to tests of reliability and fairness.”
See original here:
How AI risks creating a black box at the heart of US legal system - The Hill
Spotify leans further into artificial intelligence with new AI Playlist generator, which turns text prompts into personalized … – Music Business…
Rumors have been circulating online for some time that Spotify was planning to launch an AI-powered playlist feature. Now, the Sweden-headquartered music streaming service has unveiled that feature, at least in a few markets.
The new AI Playlist tool is rolling out in beta form to Spotify Premium subscribers on Android and iOS devices in the UK and Australia, the company announced in a statement on Sunday (April 7).
The tool will enable users to enter a text prompt from which AI Playlist will generate music suggestions. For instance, users will be able to type in prompts along the lines of "an indie folk playlist to give my brain a big warm hug," "relaxing music to tide me over during allergy season," or "a playlist that makes me feel like the main character."
Users will be able to preview and delete the tracks offered by AI Playlist, and refine their playlist with additional prompts ("more pop" or "less upbeat").
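Spotify has not disclosed how AI Playlist works internally. One common way to sketch prompt-to-playlist matching is to score catalog tracks against terms in the prompt; the catalog, tags, and scoring below are invented for illustration:

```python
# Invented sketch of prompt-to-playlist matching: rank tracks by overlap
# between prompt words and mood/genre tags. Not Spotify's actual method.
CATALOG = {
    "Holocene": {"indie", "folk", "calm", "warm"},
    "Dog Days Are Over": {"indie", "upbeat", "pop"},
    "Weightless": {"ambient", "calm", "relaxing"},
}

def suggest(prompt: str, catalog: dict[str, set[str]], k: int = 2) -> list[str]:
    words = set(prompt.lower().split())
    scored = sorted(catalog, key=lambda t: len(catalog[t] & words), reverse=True)
    return scored[:k]

print(suggest("calm relaxing indie folk with a warm feel", CATALOG))
# ['Holocene', 'Weightless']
```

A refinement prompt like "more pop" would simply rescore the catalog against the updated terms, which matches the iterative preview-and-refine flow the announcement describes.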
No word yet on when the feature will be available to users outside the UK and Australia, but Spotify says it's still beta-testing the tool and will "contin[ue] to iterate on this new feature to best serve our listeners."
To activate AI Playlist, UK and Australian users can select "Your Library" at the bottom-right corner of the screen in the mobile Spotify app, tap the "+" button and select "AI Playlist."
"Just like that, Spotify will help you curate a personalized playlist based on the tracks, artists, and genres we think you'll like," the company said in its announcement.
The company added: "While it's designed to be fun, the tool is still in beta and won't produce results for non-music-related prompts, like current events or specific brands. We also have measures in place around prompts that are offensive, so please prompt responsibly!"
AI Playlist is one of a number of artificial intelligence-powered features that Spotify has been working on. In February 2023, the streaming platform launched an AI-powered personalized DJ feature, initially in the US and Canada, before rolling it out globally in August of last year.
The company is also developing a groundbreaking AI voice translation tool that will make podcasts available in numerous languages, all in the podcaster's own voice.
During an earnings call last summer, CEO Daniel Ek said AI can improve the user experience on Spotify, the advertiser experience and the company's performance on the back end.
"AI at Spotify is going to be massive, and you can see some of those improvements already paying off very nicely [with] higher engagement and higher retention, which then lowers churn," Ek said.
"This is a trend that's been going on now for many years [and] I still think that there's quite a lot that we can do there that will improve engagement and retention over time."
AI DJ has seen strong consumer interactions, Ek said, adding: "I think you're going to see a lot more of that, where we can contextualize and personalize content across the entire platform to make it more accessible."
Ek said advertisers will be able to drive greater value from their ads on Spotify thanks to AI tech as well.
"By using generative AI and AR tools here, I think you're going to be able to see that we can significantly reduce the cost that it takes for advertisers to develop new ad formats, and that obviously means that you, as an advertiser, instead of having one ad, you can imagine having thousands and [having them] tested across the Spotify networks," Ek said.
Ek added that, on the back end, AI will enable Spotify to be a lot more efficient, which will drive more value for all stakeholders: consumers, creators and Spotify itself.
Read the original here:
Spotify leans further into artificial intelligence with new AI Playlist generator, which turns text prompts into personalized ... - Music Business...
Measuring Success: Balancing Efficiency and Effectiveness in the Age of Artificial Intelligence – Regulation Asia
Julia is a partner with PwC Singapore's Digital Regulatory Risk & Compliance practice. She is a risk and transformation specialist advising banks, wealth managers, capital markets intermediaries and non-FIs on digital transformation strategy and execution, risk and regulations, controls and governance.
With over 20 years of experience, she has led core banking, compliance, operational risk and finance transformation programmes, as well as corporate governance, ERM and internal audit reviews. Clients value her ability to connect the dots and co-create pragmatic, best-fit solutions.
Prior to her current role, she held various firm-level leadership positions, including private banking industry lead and leader of banking governance, risk and compliance and digital transformation services.
Outside of client work, Julia gives back through board directorships and mentoring. She currently serves on the Board of Trustees of ISEAS (Institute of Southeast Asian Studies), the board of the Building and Construction Authority and the Governance and Audit Committee of the Singapore Heart Foundation. She is a mentor with Young Women Leadership Connection and Mentorshub.
Read the rest here:
Measuring Success: Balancing Efficiency and Effectiveness in the Age of Artificial Intelligence - Regulation Asia
This Week in AI: Upskilling Workers and VC’s Secret Weapon – PYMNTS.com
This week in artificial intelligence (AI) news, the U.S. and the U.K. are collaborating on safety tests for advanced AI technologies. Tech giants like Google and Microsoft have formed a task force to prepare workers for the AI era, addressing the urgent need for AI skills in evolving job markets.
In venture capital, AI has shifted from a hot investment zone to a key tool for smart investment decisions.
Here's a lineup of the latest in AI this week.
The United States and the United Kingdom are working together to establish safety tests for cutting-edge AI technologies, PYMNTS reported on Thursday (April 4).
The collaboration seeks to synchronize the scientific methodologies of both nations and hasten the creation of effective testing procedures for AI models, systems and entities. This endeavor is part of an expanding worldwide initiative aimed at mitigating the safety concerns associated with AI.
"This partnership signifies a greater level of accountability being expected of corporations to guarantee their offerings are secure, reliable and ethical," Andrew Pery, an AI ethics advocate from ABBYY, a global leader in intelligent automation, said.
He said there's a tendency among pioneers of groundbreaking technologies to rush their products to market under a "launch now, correct later" approach to secure an early market presence. As an illustration, he noted that while OpenAI has been relatively open about the potential dangers of ChatGPT, the company still proceeded with its wide commercial release despite the possible negative effects.
The U.S.-U.K. alliance is in line with the pledges made during the AI Safety Summit in November, where world leaders underscored the importance of international collaboration to tackle the potential dangers of AI technology. The summit, hosted in Bletchley Park, U.K., gathered government officials, industry leaders, scholars and representatives from civil society to explore both the challenges and prospects that AI poses.
Leading technology firms, including Google and Microsoft, have joined forces in a task force dedicated to equipping the workforce for the artificial intelligence (AI) era, responding to the critical demand for AI-proficient professionals in changing job markets.
This alliance features major players like Accenture, Eightfold, Google, IBM, Indeed, Intel, Microsoft and SAP, alongside six expert advisors. With AI transforming various industries and job descriptions, there's an acknowledged urgency to train employees in skills essential for thriving in AI-driven settings.
Francine Katsoudas, Cisco's executive vice president and chief people, policy and purpose officer and a member of this initiative, emphasized the significant opportunity AI offers for the private sector to lead in upskilling and reskilling the workforce for what lies ahead. She highlighted the consortium's commitment to collaborating with governments, non-governmental organizations and academic institutions to ensure the AI revolution benefits everyone.
PYMNTS' previous reports have highlighted the dual-edged nature of generative AI technologies like ChatGPT, which promise to enhance productivity but also pose disruptions to the labor market.
A study byGoldman Sachsestimated that automation might affect 25% of jobs in the U.S. and Europe, especially in legal and administrative fields. While a large portion of jobs may face automation, the report suggested most employees will remain employed, with fewer than half of their tasks being automated. Only 7% of U.S. workers are in roles where AI could take over at least half of their duties, potentially leading to job displacements.
In venture capital (VC), AI has evolved from a lucrative investment area to a critical instrument for making astute investment choices.
AI's ability to swiftly process vast datasets on startups and market dynamics enables venture capitalists to spot the most promising investments and make more informed decisions regarding fund distribution. This technological advancement is revolutionizing the venture capital industry, and it is at the forefront of redefining startup funding methods.
Steve Brotman, founder and managing partner of Alpha Partners, highlighted the role of AI in enhancing venture capital operations by complementing human insight with machine intelligence to precisely pinpoint real investment opportunities amid the clutter.
"AI empowers us to scrutinize market tendencies, evaluate startup performance indicators and other pivotal data at a magnitude and velocity that surpasses the capacity of human analysts," Brotman said. "This not only boosts efficiency but also significantly upgrades our decision-making process by offering unparalleled insights into potential investments that were once beyond our reach."
According to business research giant Gartner, by 2025, over 75% of VC and early-stage investor evaluations will be assisted by AI and data analytics. Currently, AI is deployed to examine consumer behavior, trends and preferences. Investors are increasingly relying on AI tools to assess compatibility between the founder's vision, their team, the business model and the investor's objectives, aiming to forecast the investment's success.
Apple's research team has developed an AI technology known as ReALM (Reference Resolution As Language Modeling), designed to navigate through the complexities of ambiguous references and contextual clues. This innovation aims to transform interactions with voice assistants and could significantly alter the retail and service industries.
ReALM approaches the challenge of deciphering screen-based visual references by framing it as a language modeling task, leveraging the capabilities of advanced language models. This development is among various efforts aimed at refining AI-driven voice communication, which has the potential to enhance various commercial applications.
AI researcher Dan Faggella, although not associated with Apple, shared with PYMNTS his insights on the implications of improved AI communications. He highlighted the current frustrations with chatbots and the potential for sophisticated AI systems to swiftly and courteously address straightforward inquiries, potentially boosting customer satisfaction, loyalty and sales.
The demand for voice technology is escalating. A PYMNTS study revealed a growing consumer interest in voice technology, with 54% of participants eager to use it more extensively in the future owing to its convenience. Furthermore, 27% have used voice-activated devices within the past year, and 22% of Gen Z consumers are willing to spend over $10 monthly for enhanced voice assistant services.
However, another PYMNTS report on U.S. consumer attitudes unveiled some skepticism regarding the effectiveness of voice AI in quick-service restaurants compared to human interaction. Only 8% of consumers said they believed voice assistants can currently match human performance, with just 16% hopeful for such equality within the next two years. A majority remain either anticipatory of a longer wait or doubtful of voice AI attaining human-like reliability and intelligence.
Read the rest here:
This Week in AI: Upskilling Workers and VC's Secret Weapon - PYMNTS.com
Artificial intelligence is already reshaping how some Colorado students learn. Is your school on the cutting edge? – Colorado Public Radio
Victor Oshmyan, a sophomore at Niwot High, clicks his mouse to engage a car. It's aimed at little model pedestrians. But he stops the car in the nick of time, before it runs them over.
"It actually sees the pedestrians, but the AI model isn't strong enough to recognize that all of them are people, so it was just going to run them over," he said during a demonstration with a toy car and the figurines.
But then Oshmyan shows how the car fully stops when it recognizes him, a real person, just as he's programmed it to.
"It didn't move because it didn't want to run me over," he said.
Oshmyan is an early adopter, one of a group of students so intrigued by artificial intelligence that they're on a special after-school AI project team at the St. Vrain Valley School District's Innovation Center in Longmont. They develop and design products for clients and get paid to do it. These students are at the vanguard of discovering how artificial intelligence works in its many forms but are also helping educators learn how it may change instruction.
When artificial intelligence came on the scene, Colorado's school districts tended to fall into three buckets. Some immediately banned any use of it. The vast majority seemed interested but too bogged down in other challenges.
A couple of districts, like St. Vrain, blasted out of the gates trying to teach their students about AI.
Teenagers already tend to know more about AI than adults, even if just for things like altering their image to look like a cute animal. Students are getting the message online that this technology will change the way we live and the world of work.
"And then they walk into school and we tell them, 'Whatever you do, don't use this,'" said Rebecca Holmes, CEO and president of the Colorado Education Initiative, which has created a task force to help districts incorporate AI. "It's just cognitive dissonance to the teenage brain. It's the kind of eye roll from teenagers that we should really pay attention to because they're right."
Oshmyan used a program called AutoAutoAI to code the car to detect a person with an image he plotted. He also programmed it to swerve at yellow lights and to stop and play "Happy Birthday" at red lights. Oshmyan is also working on a pizza bot to take orders.
"It will help pizza workers not spend so much time on the phone," he said.
Nearby, his classmate Malcom Smith demonstrates a classification system he built using AI, which can solve patterns at incredibly fast speeds. It can identify hundreds of unique parts for Vex Robotics that younger students use to build. His project is to help students, but also their teachers.
"That's a lot of pressure on the teacher because the teacher has to know all of these different parts, and that can be very tricky," he said.
Smith holds up a Lego-like piece. A machine voice identifies the part and describes what the part can be used for.
This is the kind of real-world learning that AI can foster.
"I feel like AI is a powerful tool that will be incorporated in the future a lot," said Oshmyan. "And I feel like understanding it better will help us work with it better so it doesn't just take over. And I feel like it's better to understand it right now than later."
Another student said he'd love to one day develop an AI that could help recognize cancerous moles.
Marek Pearl, 15, who is more interested in engineering robotics than in a career in AI, still decided to take a course called Intro to AI, which includes the ethics of AI. It sounded interesting, and he wanted to learn how it could help him in his daily life, like writing emails. But here's how he may use it at school: if the assignment is to write a short paragraph on the War of 1812 and some major historical figures, he'd ask an AI platform:
"What was the Battle of 1812?"
Like other students, Pearl said AI tends to explain things in a simpler way to start out with.
"I try my best to use AI as an inspiration, rather than a writing tool," he said.
He'd get the historical figures' names and then do his own research on each individual. A lot of the students say they use it this way. Shaffer Piersol, a freshman at Niwot High School, uses AI to help her study. Many students use the Quizlet studying tool, which now employs artificial intelligence.
"I don't want to go through all 20 pages of a textbook to make 10 Quizlet questions. So Quizlet will be like, 'Hey, if you just upload the PDF, we'll do it for you.'"
Piersol has strong feelings about using AI to rip off artwork, something she's seen done to her favorite artist, which is "not cool to do."
But for a lot of other teenagers, the temptation to cheat is real.
"A lot of my classmates use ChatGPT to write their essays, so no matter how I think, people are always going to do whatever they want."
Pearl, on the other hand, thinks cheating is not easy for students to get away with.
"Almost all teachers can tell. Like, if they've seen your writing before, they know, 'Huh, that person doesn't write like that.'"
Teachers have told students explicitly that if they use ChatGPT to write their essays, they're getting an F.
Nicholas Umpierrez, a senior, is working with his team on a project for the city of Longmont building an underwater robot for water collection. He wants to know the ideal flow rate the machine should use. He's used ChatGPT for coding before, so he decides to ask the AI about the flow rate. The AI spits out an answer. Umpierrez gives it more parameters; he gets the same answer.
He decides he should probably go back to the scientists in the city to get more information.
Umpierrez's teacher, Nathan Wilcox, interjects, recognizing an AI teachable moment. He praises Umpierrez for realizing that ChatGPT has huge limitations when it comes to hyper-specific questions.
"Do we know that's the most recent, new data? Do we know that that's the optimal data? Do we know if that data was collected related to water sampling for this type of purpose?" the teacher asks.
Instead, it's the Longmont scientists who will know the ideal flow rate based on research studies. The exchange is yet another opportunity for learning about an extraordinarily powerful tool that is rapidly changing K-12 education.
Joe McBreen, SVVSD's assistant superintendent of innovation, said districts leaning into AI doesn't mean accepting everything about it lock, stock and barrel. But he said AI is only going to become more pervasive and powerful.
"I think we're ethically and morally compelled to prepare our kids for a competitive future, where they not only are aware of AI but they're empowered with next-level exposure and experiences so that they can confidently live in this world," he said. "That begins today."
Schools can start by teaching kids the difference between a traditional search engine and generative AI, which can produce images, music and code, or large language model AIs, which produce text and don't require computer science knowledge to use.
"The world's most popular programming language right now is English. Literally, you can talk to ChatGPT and get the code," McBreen said. "And so what sorts of opportunities does that open up?"
Other AI models can be used for data prediction and image recognition.
Districts must start with a set of educators who are aware, empowered, and skilled enough to help students, said McBreen.
St. Vrain, one of the first districts to offer professional development to educators, launched a soft introduction to AI for teachers, encouraging them to complete a bingo board that has them use AI in fun ways like finding a recipe or planning a trip. They earn credits for completing the cards. Along with coaching on safety and privacy in using AI, the district is continually analyzing whether there are gaps in its current cheating and plagiarism policy.
The district has created a task force of teachers and district leaders who are putting together an eighth-grade introduction to technology that focuses heavily on AI. But they'd eventually like to see AI exposure in all grades.
Recently, Deagan Andrews, a curriculum leader for Greeley Evans School District 6, chatted with McBreen about the best way to begin developing an AI pathway for his district.
AI is painted with a broad brush, explained McBreen, but in reality, there are many different strands to it: from autonomous driving and AI in cybersecurity to how people use large language models to accelerate what they do. Other questions to consider: What is the right level of programming knowledge for students? How can they use AI to advance their own projects?
A new nationwide survey by the Center for Democracy and Technology finds massive changes in teacher and student use of generative AI. It also shows teachers struggling to navigate questions around responsible and safe student use, and teachers distrustful of students, resulting in more students getting into trouble. Many educators are stuck at that level, never mind teaching students how to use AI as a tool for application-based work, much as happened with the calculator. The vast majority of educators are unequipped.
Andrews, of the Greeley district, believes schools never really helped students effectively leverage calculators or even Google.
"And now we take something that's 10 times more sophisticated. How are we going to help support students to really leverage it?"
Thats where the Colorado Education Initiative comes in. The nonprofit will produce a statewide plan this summer identifying AI policies and practices needed for schools, as well as training for teachers. Rebecca Holmes is aware equity gaps are already starting.
"If a kid happens to be in a district that's forward-moving on something, they get lots of education about it, and if they don't, they don't."
Adeel Khan, a former Colorado educator and the founder and CEO of Magic School AI, said it's crucial that AI become a competency in school, and not one that only affluent parents can buy for their children.
"We need to lead the charge here and not make the same mistakes of not bringing one-to-one laptops to schools (until) decades after they were being used in every professional work environment."
Holmes hopes to encourage the districts that have banned the use of AI to think of that as a first move.
"Please don't let it be your last move, and start to figure out how else you can engage with this and support young people in engaging with it."
Read the original here:
Artificial intelligence is already reshaping how some Colorado students learn. Is your school on the cutting edge? - Colorado Public Radio
AFRL receives $4M grant to build Artificial Intelligence and Machine Learning Research Center – afmc.af.mil
WRIGHT-PATTERSON AIR FORCE BASE, Ohio (AFRL) The Air Force Research Laboratory, or AFRL, Materials and Manufacturing Directorate was recently awarded a $4 million Office of Secretary of Defense, or OSD, Centralized Laboratory Investment Program, or CLIP, award to build the Artificial Intelligence and Machine Learning Research Center Capability Project, or AIMR-2C. This is the first CLIP award given at AFRL.
The AIMR-2C will be a 6,000 square-foot lab space dedicated to interactive data visualization. The building will be built as an addition adjacent to the main entrance of building 653 at Wright-Patterson Air Force Base. Building 653 is home to the Materials and Manufacturing Directorates headquarters.
"AI/ML is rapidly being incorporated into facets of our everyday lives, from things as mundane as the way we unlock our phones to complex systems like self-driving cars," said Dr. Eric Harper, a research materials engineer in the Materials and Manufacturing Directorate and technical lead of the AIMR-2C project. "As researchers at AFRL, we need to understand how to leverage this massively disruptive capability to accelerate the materials design process. AIMR-2C does more than simply provide AFRL researchers and collaborators access to cutting-edge hardware. It allows us to more rapidly learn the latest methods and techniques, assist each other in developing the next generation of models, apply those to multidisciplinary materials science problems, and share our results more effectively, thereby accelerating our ability to identify, develop and transition the next generation of stuff that makes the future."
AFRL has a physical presence in 10 states and accomplishes its mission through nine technology directorates, the 711th Human Performance Wing, the Air Force Office of Scientific Research and AFWERX. Since scientific collaboration and experimentation often requires experts from several locations to be present, progress is often slowed by the need to travel.
The AIMR-2C will offer a number of features that will help make remote collaboration more seamless, offering AFRL and its partners the ability to maximize collaboration, data sharing and model development.
"The on-site research server cluster provided by the project will be a dedicated high-power computational tool enabling researchers to process, visualize and interact with large datasets collaboratively in real time," said Kurt Lamm, facility engineer at AFRL's Materials and Manufacturing Directorate. "Research co-location with data visualization provides dedicated subject-matter expertise in AI/ML, data analysis, and visualization to assist researchers in accelerating the development of next-generation models, workflows, and visualization pipelines. Primarily, this project builds technology capability, but it also has second-order effects in construction job creation and STEM job creation. When finished, this advanced laboratory will be a talent recruitment and retention beacon. AIMR-2C is a win for the Miami Valley."
Additionally, the lab's location at Wright-Patterson Air Force Base in Dayton, Ohio, will serve to maximize over 100 existing partnerships. It will also benefit local academic institutions such as the Air Force Institute of Technology, or AFIT, and Wright State University, based on new and existing partnership agreements. AFIT's Hangar 18 Software Laboratory is a unique asset that can help facilitate rapid digital solutions and pipelines for data visualization and research.
"The ability to collaborate remotely and conduct research, experiments and data analysis in real time will dramatically speed up the progress of research and prototyping," said Dr. Lauren Ferguson, Digital Transformation Lead at AFRL's Materials and Manufacturing Directorate. "Currently, we must either send someone to the location to run an experiment or there must be experts already on site who can run it and ship the data back to us to analyze. The AIMR-2C collaboration space will not only allow experts to collaborate on an experiment while in different locations, but it will also allow them to share and analyze data in real time as well as correct and re-run experiments without the need for travel."
Prior to winning the award, the directorate had already invested $6.2 million in information technology upgrades to expand data speeds from 100MB per second to 10GB per second at the main facility campus. Technological advancement is occurring at an unprecedented rate and scale, and connectivity is often the limiting factor to the depth of advanced research collaboration. This upgrade was necessary to accelerate the speed of research and leverage infrastructure as a tool for modernizing laboratory environs into research projection platforms for the AFRL. Upon completion of the project, AFRL plans to employ AIMR-2C as a catalytic tool to expand the prevalence of artificial intelligence and machine learning as a force multiplier for novel defense research gains.
The tentative timeline: design start -
About AFRL The Air Force Research Laboratory, or AFRL, is the primary scientific research and development center for the Department of the Air Force. AFRL plays an integral role in leading the discovery, development, and integration of affordable warfighting technologies for our air, space and cyberspace force. With a workforce of more than 12,500 across nine technology areas and 40 other operations across the globe, AFRL provides a diverse portfolio of science and technology ranging from fundamental to advanced research and technology development. For more information, visitwww.afresearchlab.com.
Visit link:
AFRL receives $4M grant to build Artificial Intelligence and Machine Learning Research Cen - afmc.af.mil
5 Artificial Intelligence (AI) Hardware Stocks — Not Named Nvidia — That Are Crushing the Market and Are Still Cheap – The Motley Fool
Investors looking for alternatives to Nvidia to capitalize on the AI boom should consider taking a closer look at these names.
Semiconductor stocks have taken off big-time in the past year. The PHLX Semiconductor Sector index gained an impressive 51% during this period thanks to the booming chip demand triggered by the growing adoption of artificial intelligence (AI) applications.
Semiconductors play a central role in the AI revolution. The training of AI models such as OpenAI's ChatGPT was made possible using chips designed by the likes of Nvidia (NVDA 2.45%). This explains why Nvidia's AI-focused graphics processing units (GPUs) have been selling like hotcakes over the past 18 months, leading to a sharp jump in Nvidia's revenue and earnings.
NVDA Revenue (TTM) data by YCharts
The market rewarded Nvidia handsomely for its stellar growth. That's evident from the 225% jump in the company's stock price in the past year. It won't be surprising to see Nvidia maintain its outstanding momentum as well thanks to expectations of strong top- and bottom-line growth. However, Nvidia is not the only company in the hardware supply chain driving the AI revolution.
Here's a closer look at five other names playing a central role in the AI hardware market.
While the likes of Nvidia, Intel, and Advanced Micro Devices make or design chips that are used for addressing AI workloads such as training large language models (LLMs) and for inferencing purposes, those chips need to be mounted on a server infrastructure that's specifically designed for running AI workloads. Dell Technologies (DELL 4.37%) and Super Micro Computer (SMCI -1.04%) are two names benefiting big-time from the growing need for AI-optimized servers.
While shares of Dell have jumped 184% in the past year, Supermicro stock has rocketed higher, with eye-popping gains of 847%. Despite these massive jumps, both stocks still trade at attractive multiples. Dell has a sales multiple of just 0.9, while Supermicro's reading stands at 5.6. Their sales multiples are lower than the Nasdaq-100 Technology Sector index's average price-to-sales ratio of 7.3.
A closer look at how AI drives their growth will demonstrate why investors bought their stocks hand over fist in the past year. It also explains why they are still worth buying. Dell, for instance, is set to return to growth in the current fiscal year after struggling in the previous one because of weak personal computer (PC) sales.
Its top line fell 14% in fiscal 2024 (which ended on Feb. 2, 2024) to $88.4 billion. Dell has guided for $93 billion in revenue in the current year at the midpoint of its guidance range, and the demand for AI servers is one of the reasons behind its turnaround. Dell shipped $800 million worth of AI-optimized servers last quarter. Even better, its AI server backlog almost doubled quarter over quarter to $2.9 billion thanks to a 40% sequential increase in AI server orders.
The good part is that Dell's AI server order book could keep getting fatter. The AI server market is set to grow from an estimated $30 billion last year to $150 billion in 2027, according to Foxconn. This is precisely the reason why even Super Micro Computer stock has turned out to be a stock market favorite in the past year.
Supermicro's growth has been stunning, with its revenue in the ongoing fiscal year 2024 set to double to $14.5 billion from $7.1 billion in fiscal 2023. More importantly, the server manufacturer claims that its production capacity is solid enough to support more than $25 billion in annual revenue. This explains why analysts have significantly raised their revenue estimates for the next couple of years. However, it won't be surprising to see the company crushing those expectations thanks to its capacity expansion moves.
SMCI Revenue Estimates for Next Fiscal Year data by YCharts
Memory chips from the likes of Micron Technology (MU -0.41%) are helping Nvidia manufacture powerful AI GPUs. This explains why Micron sold out its entire 2024 capacity of high-bandwidth memory (HBM) chips that are deployed in AI chips, while also allocating an "overwhelming majority" of its HBM production line for next year.
HBM is deployed to increase the speed at which data is fed to AI processors, significantly boosting the performance of AI processors. This explains why Nvidia and AMD have been packing more HBM into their AI chips, driving impressive growth for Micron. The memory specialist's revenue was up 58% year over year last quarter to $5.8 billion. It is expecting stronger growth of 76% in revenue in the current quarter to $6.6 billion from $3.75 billion in the year-ago quarter.
Analysts expect Micron to finish the year with a 57% spike in revenue to $24.3 billion followed by another impressive year in fiscal 2025 with an estimated revenue jump of 42%. With shares of Micron currently trading at 6.4 times sales despite rising 95% in the past year, buying it looks like a smart thing to do as the HBM market could generate $49 billion in annual revenue in 2030 as compared to $857 million last year.
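Those HBM market endpoints imply an extraordinary growth rate. As a quick editorial sanity check (not from the article; it assumes the $857 million base year is 2023, since the article says only "last year"), the implied compound annual growth rate works out as follows:

```python
# Editorial sanity check: implied CAGR of the HBM market from
# ~$857M "last year" (assumed here to be 2023) to ~$49B in 2030.
base = 857e6      # USD, 2023 (assumption)
target = 49e9     # USD, 2030
years = 2030 - 2023

cagr = (target / base) ** (1 / years) - 1
print(f"Implied HBM market CAGR: {cagr:.0%}")  # roughly 78% per year
```

Even under a later base year, the implied growth rate stays well above anything typical for a memory segment, which is the article's point.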
Another chipmaker benefiting from the AI chip market is Broadcom (AVGO 1.67%), a stock that has jumped 106% in the past year. In February, J.P. Morgan analyst Harlan Sur pointed out that Broadcom is set to become the second-largest AI chip company in 2024 with an estimated revenue of $8 billion to $9 billion thanks to customers such as Alphabet and Meta Platforms for whom it makes custom AI chips.
But when Broadcom released its fiscal 2024 first-quarter results, management said that its AI chip revenue is set to exceed $10 billion in the current fiscal year. However, don't be surprised to see Broadcom exceeding that mark as it recently added a new customer for its custom AI chips. Wall Street analysts believe that the newly added customer could be Amazon, Apple, or TikTok parent ByteDance.
It is easy to see why Broadcom's AI customer pipeline is increasing. Cloud companies are looking to develop custom chips for running AI workloads to reduce costs, consume less power, and improve performance. This explains why Morgan Stanley predicts custom chips will account for 30% of the $182 billion AI semiconductor market in 2027, indicating that Broadcom's addressable market in AI chips could be worth almost $55 billion in three years.
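That addressable-market figure follows directly from the two numbers attributed to Morgan Stanley above; a quick check of the arithmetic:

```python
# Numbers as quoted above: custom chips take 30% of a $182B
# AI semiconductor market in 2027 (Morgan Stanley projection).
ai_semi_market_2027 = 182e9   # USD
custom_chip_share = 0.30

custom_chip_tam = ai_semi_market_2027 * custom_chip_share
print(f"Custom AI chip market in 2027: ${custom_chip_tam / 1e9:.1f}B")  # $54.6B
```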
So, Broadcom's AI business could keep growing at a nice clip moving forward, which is why buying the stock now could be a good idea as it is trading at 28 times forward earnings, in line with the Nasdaq-100's forward earnings multiple and lower than Nvidia's reading of 36.
All the above-mentioned names supply AI hardware, but it eventually comes down to electronics manufacturing services providers such as Jabil (JBL 3.38%) to make the end products. This stock is trading up 52% in the past year but recently took a big hit. Savvy investors, however, would do well to take a look at the bigger picture.
From designing and sourcing components to assembling the final product, Jabil's products and services are deployed to reduce costs and enhance efficiency. Not surprisingly, Jabil saw a big surge in AI-related orders. Management pointed out on its recent earnings conference call that its "AI GPU volume in the first half of 2024 is 200x that of the level of 2023."
Even better, Jabil points out that its AI-related revenue is set to increase by 20% in fiscal 2025 to $6 billion. That would be 20% of the company's top line based on the consensus estimate of $30 billion in revenue for fiscal 2025.
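Note that the two 20% figures above describe different things: one is a growth rate, the other a share of revenue. A quick check using only the article's numbers (the fiscal 2024 base is an inference, not a figure from the article):

```python
# Jabil figures as quoted above: AI-related revenue guided to grow
# 20% in fiscal 2025, reaching $6B, against a $30B consensus top line.
ai_rev_fy25 = 6e9
growth = 0.20
total_rev_fy25 = 30e9

ai_rev_fy24 = ai_rev_fy25 / (1 + growth)   # implied FY2024 base: $5B (inference)
share = ai_rev_fy25 / total_rev_fy25       # share of consensus FY2025 revenue
print(f"Implied FY24 AI revenue: ${ai_rev_fy24 / 1e9:.1f}B; FY25 share: {share:.0%}")
```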
Jabil's AI business could keep growing in the future as the company claims that it is "winning share across multiple end markets in the AI data center infrastructure space." That's why investors would do well to take a closer look at this AI stock that's trading at just 0.5 times sales and 21 times trailing earnings before it steps on the gas.
John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. JPMorgan Chase is an advertising partner of The Ascent, a Motley Fool company. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Apple, JPMorgan Chase, Meta Platforms, and Nvidia. The Motley Fool recommends Broadcom and Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.
Read the original post:
5 Artificial Intelligence (AI) Hardware Stocks -- Not Named Nvidia -- That Are Crushing the Market and Are Still Cheap - The Motley Fool
A $400 toothbrush with AI is peak artificial intelligence mania – The Washington Post
It's time again for a reality check: Companies have lost their minds over artificial intelligence and other fancy technology that probably won't improve your life.
Let me give two examples: a $400 electric toothbrush with AI (don't ask) and Amazon's cashier-less grocery stores, which the company admitted this week are a flop.
(Amazon founder Jeff Bezos owns The Washington Post.)
Businesses large and small are racing to show off to their employees, shareholders and you that they are all-in on magical new AI. I've seen a lot of tech frenzies come and go, but AI mania is truly out of control.
AI will have profoundly helpful uses. But the technology is also drowning in false promises that suck your time, energy, money and possibly your well-being. (Example: New York has an official AI chatbot that habitually gives people wrong legal advice.)
This moment in technology demands more "why."
Why does all this supposed AI technology exist? Why is it worth your time or money? Why is a task proposed for AI (say, asking a chatbot for the proper oven temperature at a fast-food restaurant) any better than simpler, cheaper, less privacy-invading ways of doing the same thing?
So here goes. Let's ask some whys.
I know many people like Amazon's "just walk out" technology in its Go convenience stores and Fresh grocery stores.
When you grab apples and milk from store shelves, an array of cameras, sensors and AI software detects what you've plucked. Your account is charged, with no need to scan the items or pay at a register.
As happens with most AI, humans are behind the scenes to check the accuracy of receipts after you leave the store and teach software that mistook an apple for an orange not to repeat that error. The technology is also finicky and expensive to install and troubleshoot.
It really doesn't matter whether shoppers like cashier-less stores or not. A technology won't last if it can't bring in enough profit to justify the costs and complications. "Just walk out" failed that test, at least for bigger grocery stores.
Amazon says the cashier-less technology has proven its worth in smaller, busy stores like those at airports and sports stadiums.
Amazon grocery stores will instead use carts that let you scan items and skip a register. That's less complicated tech that makes you do the work instead of AI.
If the technology improves, maybe it will be great. But why did Amazon ignore a lesson from its own history?
"The biggest innovations to your shopping are often not from the flashy stuff. It's from dull technologies that you never see," said Sucharita Kodali, a retail and e-commerce principal analyst with the research firm Forrester.
Amazon has made deliveries lightning quick with its sophisticated technology (and building oodles of warehouses) to spread pallets of products around the country. When you order a blender, it can arrive quickly because there's one at a warehouse near you.
Thanks to improvements in inventory tracking software, your local Target store can ship your order for a cooking pot and still have plenty for other shoppers coming into the store.
Your grocery shopping has been improved by worker scheduling technology that ensures there are enough people to stock shelves and also unload truckloads of goods out back.
Flashy technology demands your attention like a loud red sports car. You probably wouldn't shop at a grocery store because it has amazing inventory management software. But maybe you would if it has robots or cashier-less checkout.
Companies can, of course, take cool-sounding technology to a ridiculous extreme. May I present Oral-B's line of toothbrushes with AI.
Marketing materials for the $400 version mention "AI Position Detection," which sounds like a straightforward sensor to detect which teeth you're brushing and for how long. There's also "3D teeth tracking with AI" to show whether you've brushed successfully.
I asked representatives of Procter & Gamble, which owns Oral-B, what exactly is AI about this toothbrush. They declined to comment.
If you're jazzed about nightly grades for your brushing, you do you. But this toothbrush doesn't seem to have AI even under the squishiest definition of that term.
Marc Benioff, CEO of software company Salesforce and a big AI booster, recently tweeted that the toothbrush was a sign of peak AI hype. (Sorry to Benioff and the rest of us, there will be way more AI hype.)
We will get past the mania of companies shoving mediocre or pretend AI into every product and telling you that it's glorious. It just doesn't feel that way right now.
So here's your sanity check for AI hype: Ask a lot of whys, and please ignore 95 percent of what's happening.
Link:
A $400 toothbrush with AI is peak artificial intelligence mania - The Washington Post