Which Companies Will Be Winners in the AI Arms Race? – The Motley Fool

One of my favorite technology research analysts is Dan Ives of Wedbush Securities. About a month ago, he referred to the boom in artificial intelligence (AI) as a "1995 moment" for the technology sector. For those of you who don't get the reference, he's talking about the widespread adoption of the internet and how it impacted the world thereafter. Still, I've been suspicious for a while that AI is quickly mutating into Wall Street's new favorite buzzword (à la "metaverse" or "blockchain").

While AI applications are in their early days, two companies in particular appear to be well ahead of the curve. Both Palantir Technologies (PLTR) and Meta Platforms (META) have showcased some progress on the AI front. And it's exciting. What is even better is that despite each stock's generous 2023 return so far, I do not believe the hype of AI is priced in.

Let's dig in and see what each company is up to and assess if now is a good time to buy these stocks.

Image source: Getty Images.

A couple of weeks ago I wrote an article suggesting that investors keep one item in particular in mind during Palantir's second-quarter earnings call: the company's AI platform, Palantir AIP, which was released earlier this year.

Palantir just released earnings for the quarter ended June 30, and nearly everything in the report was positive. The company reported its third consecutive quarter of generally accepted accounting principles (GAAP) profitability and raised its guidance for both the third quarter and all of 2023. In addition, the company announced a share buyback program of up to $1 billion.

While this is all well and good, it was management's commentary around AIP that has me most excited. In his shareholder letter, Palantir's CEO, Alex Karp, told investors that AIP is already deployed in over 100 organizations and that the company is in discussion with more than 300 additional enterprises.

Image source: Palantir Q2 2023 earnings presentation.

This level of growth and surge in demand seems otherworldly. The picture above illustrates some of the companies that use AIP, as well as Palantir's other software products. At first glance, this is an impressive sample of the client roster. Perhaps the most impressive number from the company's earnings presentation was its 38% year-over-year growth in customer count, as seen below. This trend underscores how much demand there is for Palantir and its suite of AI software platforms.

Image source: Palantir Q2 2023 earnings presentation.

What I find particularly interesting is that some of Palantir's big tech cohorts, like Microsoft, which is also aggressively investing in AI, did not necessarily provide investors with the most confident guidance or outlook. Yet, Palantir, which has a much smaller balance sheet than Microsoft and is also battling the same cloudy economic environment as its competition, signaled just how popular its suite of AI products is by raising its guidance.

For much of 2023, Meta investors have been laser-focused on the company's "year of efficiency." While the Q2 earnings call illustrated that the majority of cost-reduction efforts have been achieved, the longer-term picture still needed some clarity.

During the call, management went into detail on the company's product roadmap and made it clear that AI will be powering much of Meta's efforts in the metaverse and beyond. Specifically, investors learned about Meta's AI-powered content and how these efforts have resulted in more time spent on the platform, thereby driving higher monetization on Facebook and Instagram.

Like its ad-heavy competitor, Snap, Meta is heavily invested in showcasing return on investment for its advertisers. It is interesting to see that while Snap harped on AI efforts during its earnings call, its near-to-intermediate-term outlook still called for slowing growth. By contrast, while reviewing its AI initiatives, Meta disclosed that almost all advertisers are using at least one AI product and signaled a bullish top-line outlook for Q3, which could be an indicator of the company's superior technological progress.

^SPX data by YCharts

The chart above illustrates both Meta and Palantir stock benchmarked against the S&P 500 so far this year. While the returns are nice to look at, traditional valuation metrics are a better gauge of how each stock compares to its peers.

As of this writing, Palantir trades at a price-to-sales (P/S) ratio of 19. By comparison, big-data analytics company Snowflake trades at nearly 24 times sales, while Microsoft trades at 11 times sales. Microsoft is a more mature, blue chip company than Palantir. Nonetheless, the comparison shows a pretty wide disparity in trading multiples across big-data and AI companies.
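For readers newer to the metric, a quick sketch of the arithmetic behind these multiples may help; the figures below are illustrative placeholders rather than live market data.

```python
# Price-to-sales (P/S): market capitalization divided by
# trailing-twelve-month (TTM) revenue.
def price_to_sales(market_cap: float, ttm_revenue: float) -> float:
    return market_cap / ttm_revenue

# Hypothetical example: a $38 billion market cap on $2 billion of
# TTM revenue works out to "19 times sales."
print(price_to_sales(38e9, 2e9))  # -> 19.0
```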

While Palantir is on the higher end of these valuation multiples, the stock still trades for less than half its all-time high from 2021. I believe the stock is headed higher, and now could be a really interesting time to initiate a position. The AI arms race will likely be a years-long event, but I think it will become clearer over the next few years who is emerging as the leader of the pack. For this reason, Palantir could be a great stock to dollar-cost average into over time while the company unfolds its AI vision.

Given Meta's year-to-date returns, the stock likely needs a breather. As of this writing, Meta trades at roughly 7 times sales. By comparison, ad-heavy platforms Alphabet, Amazon, and Snap trade for P/S ratios of 5.7, 2.7, and 3.8, respectively.

While Meta is tempting, it currently trades at a premium to comparable companies. That said, I believe the surge in Meta's price action this year is largely driven by its execution on cost reductions and its return to growing profits. Although the stock may be a tad overbought at the moment, the long-term thesis is intact. Meta stock could be a really compelling buy on pullbacks throughout the second half of the year.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon.com, Meta Platforms, Microsoft, and Palantir Technologies. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Meta Platforms, Microsoft, Palantir Technologies, and Snowflake. The Motley Fool has a disclosure policy.

Go here to read the rest:

Which Companies Will Be Winners in the AI Arms Race? - The Motley Fool

Read More..

House Dem warns AI could be a tool of digital colonialism without inclusivity guardrails – Fox News

A House Democrat is warning artificial intelligence could become a tool of "digital colonialism" if the U.S. doesn't take steps to work with Western Hemisphere nations to create AI systems that reflect diversity and inclusion.

Rep. Adriano Espaillat, D-N.Y., proposed a resolution during the August break that says the U.S. must champion a "regional" AI strategy that includes Western Hemisphere nations as this new technology is developed.

"United States-led investments in the development of AI in the Western Hemisphere would promote the inclusion and representation of underserved populations in the global development and deployment of AI technologies, ensuring that no individual country dominates AI but rather collaborative developments in the Western Hemisphere," his resolution asserted.

Rep. Adriano Espaillat, D-N.Y., is calling on the U.S. to work closely with Western nations as it develops artificial intelligence systems and guidelines. (Ting Shen/Bloomberg via Getty Images)

Without naming China, it hints that allowing authoritarian regimes to take the lead on AI standards would only hurt vulnerable populations in the Western Hemisphere.

"The United States' future policies for AI governance will have significant implications for the global governance of AI, influencing whether global AI technologies reflect democratic values, inclusivity, and respect for human rights or are influenced by authoritarian practices and norms, including digital colonialism, whereby most of the AI advancements utilized by the Western Hemisphere consumers would be developed in, and controlled by, a select few nations located outside of the region," it warned.

Espaillat, born in the Dominican Republic, said American efforts to work with the rest of the Western Hemisphere on AI standards would "contribute to a more equitable, responsible, and human-centric approach, ensuring the development and deployment of AI technologies that align with democratic principles and societal well-being."

"By championing inclusion in AI and investing in AI in the Western Hemisphere, the United States can create a future where AI technologies authentically reflect the multifaceted diversity of our societies, uphold the fundamental human rights that lie at the core of our Constitution, and contribute to the realization of a world that transcends inequalities rather than perpetuates them," the resolution said.

Espaillat says letting authoritarian regimes take the lead on AI could lead to an era of "digital colonialism." (Tom Williams/CQ-Roll Call, Inc via Getty Images)

"The Western Hemisphere possesses a wealth of natural resources and a skilled human workforce, making it well-positioned to develop and promote future AI technologies that prioritize safety, diversity, equity, inclusion, and accessibility," it added.

It warned more broadly that AI has the potential to be developed in a way that reinforces "biases and inequities." Others have made the same argument: AI systems that run on biased data, or on data interpreted in a biased way, can produce outcomes that discriminate against what Espaillat and others call "marginalized groups."

Espaillat's resolution said recent research shows some AI algorithms, such as those used in facial recognition programs, can worsen "race-based disparities."

President Biden's administration has developed several voluntary AI principles that U.S. companies have agreed to meet. (AP Photo/Damian Dovarganes)

"Research conducted by respected institutions such as the Institute of Electrical and Electronics Engineers, the Massachusetts Institute of Technology, Cornell University, and others has shown that these algorithms exhibit significant accuracy disparities, working more effectively on White faces while frequently misidentifying or failing to recognize brown, Black, Indigenous, and darker skinned faces," it warned.

It's not clear Espaillat's resolution will get a vote in the GOP-led House, where it was introduced. However, the Biden administration has launched several initiatives aimed at developing safe and trustworthy AI systems that avoid biased outcomes as they are used.

Last month, seven major AI developers agreed to a set of White House goals in this area, and the Biden White House has said it is working on more AI guidance.

Read more from the original source:

House Dem warns AI could be a tool of digital colonialism without inclusivity guardrails - Fox News

Read More..

10 AI Tools That You Should Be Using In Your Business This Year – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

We hear a lot about AI, and there's no question that this technology will have a great impact on our businesses in the next few years. But what about now? Here are 10 AI tools that you can use today to help increase productivity and hopefully profits.

This is ChatGPT, the conversational chatbot created by OpenAI that started the hype late last year, and it really does have a lot a business owner can be hyped about. Use it to write blogs, suggest better ways to craft emails, analyze your website to improve search results, do advanced math, create HR policies and handle a number of other functions. You should also play with OpenAI's Dall-E 2 app, which can create images from text commands (e.g., "a horse standing by a river") for use in company communications or on your website.
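For owners who want to go beyond the chat window, the same capability can be scripted. Below is a minimal sketch using OpenAI's 2023-era Python client; the model name, prompt, and placeholder API key are assumptions for illustration, not recommendations.

```python
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

# Ask the chat API to draft marketing copy, one of the use cases above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a small-business marketing assistant."},
        {"role": "user", "content": "Draft a 200-word blog post announcing our new summer hours."},
    ],
)
print(response.choices[0].message.content)
```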

Microsoft owns 49% of OpenAI (and ChatGPT is hosted on Microsoft servers), so a lot of ChatGPT's functionality will soon be part of the Copilot app, which can already be used with Bing searches and will be a major part of Office in the next year. You'll use Copilot to analyze spreadsheets, create templates, update presentations and even have it attend Teams meetings on your behalf.

Related: The Future Founder's Guide to Artificial Intelligence

Bard is Google's answer to ChatGPT, and Duet is the application that will use Bard's underlying large language model to power Google's business apps in much the same fashion as Microsoft Copilot. The release of these features is expected within the next six months, but Gmail is already using Bard's AI to help write emails and check grammar.

Very similar to Dall-E, Crayon is an advanced image generator that uses AI to generate art, photos, drawings and other graphics directly from your text descriptions. The quality is excellent and the variety of choices is seemingly endless. Use this for images on your website or other promotional content.

If your business is heavily into content, Heywire is a powerful content generator that uses AI to glean information from the internet and automatically turn it into stories, articles and other forms of blog content. The application uses real-time, journalistically validated data that you can publish, and it can help further establish you and your company as a thought leader in your industry. It can also maintain multiple "personalities" for whoever you want to be seen as the author of the articles and social posts it generates.

Ever bump into a really interesting video, see how long it is and say to yourself, "I don't have the time"? Eightify solves that problem. This AI app will watch the video for you and summarize it into specific points of interest. As business owners, we often have to wear all the hats, which means we have to be knowledgeable about a bunch of different subjects. There's so much great content on video that can help us run our businesses, and with this app, we can absorb much more information than ever before.

I've been using Temi for years and, as a writer, swear by it! It's a powerful AI-driven transcription service. I upload audio and video recordings I've made, and within minutes, Temi transcribes them into text that is close to flawless. Transcribing a 10-minute recording costs just a few bucks, too.

Need a good, professional form for your business? Maybe a job application? A quotation template? A request form for people visiting your website? Feathery uses AI to create professional-looking forms in just minutes. You can save and edit forms as you create them and customize them for your business. All of this is done through a natural language interface.

Related: Previous Tech Revolutions Rewarded the Builders This AI Revolution Will Reward the Users. Here's Why.

Want to prepare your prospective employees for a job interview? Or perhaps you're a freelancer or remote independent contractor who's scheduled to speak with a prospective client. Interview.ai uses AI to walk you through the conversation in advance. Its mock interviews will help you hone your speaking skills, and its algorithms generate interview questions tailored to the job and the industry. The platform promises to deliver customized questions that are both technical and situational, all based on the information you provide beforehand.

So many of us are using video in our businesses for campaigns, case studies, testimonials or just to generate some buzz. The videos go on our websites, but of course, we want to do more with them. That's where Opus Clip comes in. Using its AI-powered platform, you can upload a long video and it will break it down into shorter, more digestible clips that can then be posted on social media or included in your email campaigns.

Pretty cool, right? And I'm just scratching the surface. All of this reminds me of the early days of the iPhone and its smartphone competitors, when apps began appearing and then proliferating. I expect the same to happen during this AI revolution, so there's lots more to come. In the meantime, play with these tools, and I promise good results and better productivity.

Excerpt from:

10 AI Tools That You Should Be Using In Your Business This Year - Entrepreneur

Read More..

Google’s AI ambassador walks a fine line between hype and doom – The Washington Post

James Manyika is one of Google's top artificial intelligence ambassadors. (Demetrius Philp for The Washington Post)

Updated August 9, 2023, at 4:28 p.m. EDT | Published August 9, 2023, at 10:00 a.m. EDT

MOUNTAIN VIEW, Calif. - Amid the excited hype about artificial intelligence at Google's annual developer conference in May, it fell to James Manyika, the company's new head of tech and society, to talk about the downsides of AI.

Before thousands of people packed into an outdoor arena, Manyika discussed the scourge of fake images and how AI echoes society's racism and sexism. New problems will emerge, he warned, as the tech improves.

"But rest assured that Google is taking a responsible approach to AI," he told the crowd. The words "bold and responsible" flashed onto a massive screen, dwarfing Manyika as he spoke.

The phrase has become Google's motto for the AI age, a replacement of sorts for "don't be evil," the mantra the company removed from the preamble of its code of conduct in 2018. The phrase sums up Silicon Valley's general message on AI, as many of the tech industry's most influential leaders rush to develop ever more powerful versions of the technology while warning of its dangers and calling for government oversight and regulation.

Manyika, a former technology adviser to the Obama administration who was born in Zimbabwe and has a PhD in AI from Oxford, has embraced this duality in his new role as Google's AI ambassador. He insists the technology will bring astounding benefits to human civilization and that Google is the right steward for this bright future. But shortly after the developers conference, Manyika signed a one-sentence statement, along with hundreds of AI researchers, warning that AI poses a risk of extinction on par with pandemics and nuclear war.

"AI is an amazing, powerful, transformational technology," Manyika said in a recent interview. At the same time, he allowed, bad things could happen.

Critics say bad things already are happening. Since its release last November, OpenAI's ChatGPT has invented reams of false information, including a fake sexual harassment scandal that named a real law professor. Open source versions of Stability AI's Stable Diffusion model have created a flood of realistic images of child sexual abuse, undermining efforts to combat real-world crimes. An early version of Microsoft's Bing grew disturbingly dark and hostile with users. And a recent Washington Post investigation found that several chatbots, including Google's Bard, recommended dangerously low-calorie diets, cigarettes and even tapeworms as ways to lose weight.

"Google's AI products, including Bard, are already causing harm. And that's the problem with boldness in juxtaposition with responsible AI development," said Tamara Kneese, a senior researcher and project director with Data & Society, a nonprofit that studies the effects of AI.

"Big tech companies are calling for regulation," Kneese said. "But at the same time, they are quickly shipping products with little to no oversight."

Regulators around the world are now scrambling to decide how to regulate the technology, while respected researchers are warning of longer-term harms, including that the tech might one day surpass human intelligence. There's an AI-focused hearing on Capitol Hill nearly every week.

If AI has trust issues, so does Google. The company has long struggled to persuade users that it can safeguard the vast amount of data it collects from their search histories and email inboxes. The company's reputation is particularly wobbly when it comes to AI: In 2020, it fired well-known AI ethics researcher Timnit Gebru after she published a paper arguing the company's AI could be infected by racism and sexism due to the data it was trained on.

Meanwhile, the tech giant is under significant competitive pressure: Google launched its chatbot earlier this year in a rush to catch up after ChatGPT and other competitors had already captured the public imagination. Rivals like Microsoft and a host of well-funded start-ups see AI as a way to break Google's grip on the internet economy.

Manyika has stepped with calm confidence into this pressure-cooker moment. A veteran of the global conference circuit, he serves on a stunning number of high-powered boards, including the White House AI advisory council, where he is vice chair. In June, he spoke at the Cannes Lions Festival; in April, he appeared on 60 Minutes. He's presented in front of the United Nations and is a regular at Davos.

And in every interview, conference talk and blog post, he offers reassurance about Google's role in the AI gold rush, describing the company's approach with those same three words: "bold and responsible."

Embrace that tension

The phrase "bold and responsible" debuted in a blog post in January and has since popped up in every executive interview on AI and the company's quarterly financial reports. It grew out of discussions going back months between Manyika, Google chief executive Sundar Pichai and a small group of other executives, including Google's now-chief scientist Jeff Dean; Marian Croak, the company's vice president of responsible AI; and Demis Hassabis, the head of DeepMind, an AI start-up Google acquired in 2014.

Critics have noted the inherent contradiction.

"What does it mean, honestly?" said Rebecca Johnson, an AI ethics researcher at the University of Sydney, who worked last year as a visiting researcher at Google. "It just sounds like a slogan."

At the May developers conference, Manyika acknowledged a natural tension between the two. But, he said, "We believe it's not only possible but in fact critical to embrace that tension. The only way to be truly bold in the long term is to be responsible from the start."

Manyika, 57, grew up in segregated Zimbabwe, then known as Rhodesia, an experience that he says showed him "the possibilities of what technology advancement and progress can make to ordinary people's lives" and made him acutely sensitive to its dangers.

Zimbabwe was then ruled by an autocratic White government that brutally repressed the country's majority-Black population, excluding them from serving in government and living in White neighborhoods. "I know what a discriminatory system can do with technology," he said, mentioning AI tools like facial recognition. "Think of what they could have done with that."

When the apartheid regime crumbled in 1980, Manyika was one of the first Black kids to attend the prestigious Prince Edward School, which educated generations of Zimbabwe's White ruling class. "We actually took a police escort," he said, which reminded him at the time of watching films about desegregation in the United States.

Manyika went on to study engineering at the University of Zimbabwe, where he met a graduate student from Toronto working on artificial intelligence. It was his first introduction to the science of making machines think for themselves. He learned about Geoffrey Hinton, a researcher who decades later would become known as "the godfather of AI" and work alongside Manyika at Google. Hinton was working on neural networks, technology built on the idea that computers could be made to learn by designing programs that loosely mimicked pathways in the human brain, and Manyika was captivated.

He won a Rhodes scholarship to study at Oxford, and dug into that idea, first with a master's in math and computer science and then a PhD in AI and robotics. Most scientists working on making computers more capable believed neural networks and AI had been discredited years earlier, and Manyika said his advisers cautioned him not to mention it because "no one will take you seriously."

He wrote his thesis on using AI to manage the input of different sensors for a vehicle, which helped get him a visiting scientist position at NASA's Jet Propulsion Laboratory. There, he contributed to the Pathfinder mission to land the Sojourner rover on Mars. Next, he and his partner, the British-Nigerian novelist Sarah Ladipo Manyika, moved to Silicon Valley, where he became a consultant for McKinsey and had a front-row seat to the dot-com bubble and subsequent crash. He wrote extensively on how tech breakthroughs impacted the real world, publishing a book in 2011 about how the massive amount of data generated by the internet would become critical to business.

In Silicon Valley, he became known as a connector, someone who can make a key introduction or suggest a diverse range of candidates for a board position, said Erik Brynjolfsson, director of Stanford's Digital Economy Lab, who's known Manyika for years. "He has maybe the best contact list of anyone in this field," Brynjolfsson said.

His job also put him in the orbit of powerful people in Washington. He began having conversations about tech and the economy with senior Obama administration staffers, and was appointed to the White House's advisory board on innovation and the digital economy, where he helped produce a 2016 report for the Commerce Department warning that AI could displace millions of jobs. He resigned the post in 2017 after President Donald Trump refused to condemn a protest by white supremacists that turned violent in Charlottesville.

By then, AI tech was starting to take off. In the early 2010s, research by Hinton and other AI pioneers had led to major breakthroughs in image recognition, translation and medical discoveries. "I was itching to go back much more closely and fully to the research and the field of AI because things were starting to get really interesting," Manyika said.

Instead of just researching trends and writing reports from the outside, he wanted to be at Google. He spoke with Pichai, who had previously tried to recruit him, and took the job last year.

Google is arguably the preeminent company in AI, having entered the field well before OpenAI was a glimmer in Elon Musk's eye. Roughly a decade ago, the company stepped up its efforts in the space, launching an expensive talent war with other tech firms to hire the top minds in AI research. Scientists like Hinton left their jobs at universities to work directly for Google, and the company soon became a breakthrough machine.

In 2017, Google researchers put out a paper on transformers, a key breakthrough that let AI models digest much more data and laid the foundation for the technology that enables the current crop of chatbots and image generators to pass professional exams and re-create Van Gogh paintings. That same year, Pichai began pitching the company to investors and employees as "AI first."

But the company held off releasing the tech publicly, using it instead to improve its existing cash cow products. When you type "movie with green ogre" into Google Search and the site spits out a link to Shrek, that's AI. Advances in translation are directly tied to Google's AI work, too.

Then the ground shifted under Google's feet.

In November, ChatGPT was released to the public by OpenAI, a much smaller company initially started by Musk and other tech leaders to act as a counterweight to Big Tech's AI dominance. For the first time, people had direct access to this cutting-edge tech. The bot captured the attention of consumers and tech leaders alike, spurring Google to push out its own version, Bard, in March.

Months later, Bard is available in 40 languages and nearly every country that isn't on a U.S. sanctions list. Though it is available to millions, Google still labels the bot "an experiment," an acknowledgment of persistent problems. For example, Bard often makes up false information.

Meanwhile, Google has lost some of the star AI researchers it hired during the talent wars, including all eight of the authors of the 2017 transformers paper. Hinton left in May, saying he wanted to be free to speak out about the dangers of AI. The company also undercut its reputation for encouraging academic dissent by firing Gebru and others, including Margaret Mitchell, who was a co-author on the paper Gebru wrote before her firing.

"They have lost a lot of the benefit of the doubt that they were good," said Mitchell, now chief ethics scientist at AI start-up Hugging Face.

Do the useful things

Sitting down for an interview, Manyika apologizes for overdressing in a checkered button-down shirt and suit jacket. It's formal for San Francisco, but it's the uniform he wears in many of his public appearances.

The conversation, like most in Silicon Valley these days, begins with Manyika declaring how exciting the recent surge of interest in AI is. When he joined the company, AI was just one part of his job as head of tech and society. The role didn't exist before he was hired; it's part ambassador and part internal strategist: Manyika shares Google's message with academics, think tanks, the media and government officials, while explaining to Google executives how their tech is interacting with the wider world. He reports directly to Pichai.

As the rush into AI has shifted Silicon Valley, and Google along with it, Manyika is suddenly at the center of the company's most important work.

"The timing couldn't have been better," said Kent Walker, who as Google's president of global affairs leads the company's lobbying and legal teams. Walker and Manyika have been meeting with politicians in the United States and abroad to address the growing clamor for AI regulation. Manyika, he said, has been "a very thoughtful external spokesperson for us."

Manyika's role grew substantially in April when Hassabis took charge of core AI research at the company. The rest of Google's world-class research division went to Manyika. He now directs their efforts on climate change, health care, privacy and quantum computing, as well as AI responsibility.

Despite Google's blistering pace in the AI arms race over the past eight months, Manyika insisted that the company puts out products only when they're ready for the real world. When Google launched Bard, for example, he said it was powered with an older model that had undergone more training and tweaking, not a more powerful but unproven version.

"Being bold doesn't mean hurry up," he said. "Bold to me means: Benefit everybody. Do the useful things. Push the frontiers to make this useful."

The November release of ChatGPT introduced the public to generative AI. "And I think that's actually great," he said. "But I'm also grateful for the thoughtful, measured approach that we continue to take with these things."

correction

A previous version of this story inaccurately said Google deleted the phrase "don't be evil" from its code of conduct, and described President of Global Affairs Kent Walker's role as including control of the company's public relations team. Google deleted the phrase only from the preamble to its code of conduct, and Walker does not oversee public relations. This story has been corrected.

Original post:

Google's AI ambassador walks a fine line between hype and doom - The Washington Post

Read More..

11xAI closes a $2M pre-seed round to create autonomous AI workers – TechCrunch

Image Credits: Courtesy of Hasan Sukkar

11xAI announced the closing of a $2 million pre-seed round led by Project A Ventures today. In conjunction with its fundraise, the company also launched its service.

The London-based company builds automated digital workers that can be used in lieu of human employees. It has built an AI sales development representative called Alice, and in the coming years plans to create James, focused on automated talent acquisition, and Bob, targeting automated human resources work.

Speaking to TechCrunch, co-founder and CEO Hasan Sukkar said he believes autonomous agents are the future of the workforce and that he specifically designed 11xAI to help smaller businesses increase their productivity to better compete with larger companies.

The goal is that businesses hire autonomous workers for all parts of their business, creating an AI-powered workforce that runs on autopilot.

"Our mission is to help people rise above mundane, repetitive tasks, and that way, we can focus on the more creative and more human-driven tasks," Sukkar told TechCrunch, adding that he hopes to also develop a monetized infrastructure platform that would allow anyone to build an autonomous worker.

While generative AI techniques have found lots of market interest in recent quarters, the technology remains a work in progress. Sukkar told TechCrunch that his company has a product strategy to prevent bias in its AI models, one that includes audits and monitoring, regular bias testing, and a diverse data set.

Sukkar called his startup's recent fundraising journey relatively easy, as investors understood 11xAI had much potential to scale, especially given the current artificial intelligence boom.

Mila Cramer, a principal at Project A Ventures, said recent AI advancements allowed her and her team to imagine a future with automated end-to-end processes carried out by digital workers. "We are very excited to support 11x in bringing us that future today," she told TechCrunch. "Hasan has an incredible, unique level of dedication and conviction, which made us immediately believe that he will do something special."

No Label Ventures, Tiny Ventures, and angel investors Felipe Navio and Mandeep Singh also participated in the round.

Sukkar plans to expand his current team of six by hiring more engineers; he also hopes to expand more into the U.S. market and, of course, is planning to launch two other digital workers.

This company launch comes full circle for Sukkar, who remembers creating his first online marketplace when he was just 14 years old.

In 2015, at the age of 17, he immigrated to the U.K. as part of the Syrian refugee crisis. He studied engineering at the University of Exeter and began navigating the British business landscape. He worked in venture capital, where he learned that ethnic minorities like himself always have to come overly prepared. That journey has led him here with 11xAI.

"In two years, we believe that Digital Workers will be a regular part of how companies around the world work," Sukkar said. "And we want to enable this outcome."

This piece was updated to clarify the year Sukkar came to the U.K.

Read more:

11xAI closes a $2M pre-seed round to create autonomous AI workers - TechCrunch

Read More..

Inside the hunt for AI chips – The Verge

The most sought-after resource in the tech industry right now isn't a specific type of engineer. It's not even money. It's an AI chip made by Nvidia called the H100.

The GPUs are "considerably harder to get than drugs," Elon Musk has said. "Who's getting how many H100s and when is top gossip of the valley rn," OpenAI's Andrej Karpathy posted last week.

I've spent this week talking with sources throughout the AI industry, from the big AI labs to cloud providers and small startups, and come away with this: everyone is operating under the assumption that H100s will be nearly impossible to get through at least the first half of next year. The lead time for new orders, if you can get them, is roughly six months, or an eternity in the AI space. Meanwhile, the cloud providers are just starting to make their H100s widely available and charging an arm and a leg for the limited capacity they have. For the most part, these hosting providers also require extremely costly, lengthy upfront commitments.

Read the original here:

Inside the hunt for AI chips - The Verge

Read More..

Nice recommends use of AI in NHS radiotherapy treatment in England – The Guardian

Nine technologies approved for carrying out external beam radiotherapy in lung, prostate and colorectal cancers

Thu 10 Aug 2023 19.01 EDT

Patients in England having radiotherapy are likely to have part of their treatment performed with the aid of artificial intelligence after its use to help NHS clinicians was recommended for the first time.

Draft guidance from the National Institute for Health and Care Excellence (Nice) has given approval to nine AI technologies for performing external beam radiotherapy in lung, prostate and colorectal cancers, in a move it believes could save radiographers hundreds of thousands of hours and help relieve the severe pressure on radiotherapy departments.

NHS England data shows there were 134,419 radiotherapy episodes in England from April 2021 to March 2022, of which a significant proportion required complex planning.

At the moment, therapeutic radiographers outline healthy organs by hand on digital images of a CT or MRI scan so that the radiotherapy minimises the dose to normal tissue and does not damage healthy cells. Evidence given to Nice found that using AI to create the contours could free up between three and 80 minutes of radiographers' time for each treatment plan, and that AI-generated contours were of a similar quality to those drawn manually.
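As a rough sanity check on the scale of those savings, here is a back-of-envelope calculation using the figures above; it assumes, simplistically, one contoured plan per episode, so the true total would be lower since only a proportion require complex planning.

```python
# Rough bounds on hours freed by AI contouring, from the article's figures.
episodes = 134_419          # radiotherapy episodes, England, Apr 2021 - Mar 2022
minutes_saved = (3, 80)     # per-plan range from the Nice evidence

low, high = (episodes * m / 60 for m in minutes_saved)
print(f"{low:,.0f} to {high:,.0f} hours")  # -> 6,721 to 179,225 hours
```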

While it recommended using AI to mark the contours, Nice said that the contours would still be reviewed by a trained healthcare professional.

Dr Sarah Byron, the programme director for health technologies at Nice, said using AI could help reduce waiting lists. She added: "NHS colleagues working on the frontline in radiotherapy departments are under severe pressure with thousands of people waiting for scans."

"The role imaging plays in radiotherapy treatment planning is quite pivotal, so recommending the use of AI technologies to help support treatment planning alongside clinical oversight by a trained healthcare professional could save both time and money."

"We will continue to focus on what matters most, and the recommendations made by our independent committee can help to bring waiting lists down for those needing radiotherapy treatment."

The health secretary, Steve Barclay, welcomed the announcement. He said: "It's hugely encouraging to see the first positive recommendation for AI technologies from a Nice committee, as I've been clear the NHS must embrace innovation to keep fit for the future."

"These tools have the potential to improve efficiency and save clinicians thousands of hours of time that can be spent on patient care. Smart use of tech is a key part of our NHS long-term workforce plan, and we're establishing an expert group to work through what skills and training NHS staff may need to make best use of AI."

Nice said it was also examining the evidence for using AI in stroke and chest scans. It follows a study that found AI was safe to use in breast cancer screening and could almost halve the workload of radiologists, according to the world's most comprehensive trial of its kind. Evidence is growing that AI can be more effective in detecting cancers. Researchers hope it will be able to speed up the detection of cancer by helping to fast-track patients to treatment, and by streamlining the analysis of CT scans.

The nine platforms included are AI-Rad Companion Organs RT, ART-Plan, DLCExpert, INTContour, Limbus Contour, MIM Contour ProtegeAI, MRCAT Prostate plus Auto-contouring, MVision Segmentation Service and RayStation.

Charlotte Beardmore, the executive director of professional policy at the Society of Radiographers, welcomed the draft guidance but said it was not a replacement for staff and caution was needed. "It is critical there is evidence to underpin the safe application of AI in this clinical setting," she said. Using AI would still require input by a therapeutic radiographer or another member of the oncology multi-professional team, she added. "Investment in the growth of the radiography workforce remains critical."

Separately, the government announced it was investing £13m in AI healthcare research before the first big international AI safety summit in autumn. The technology secretary, Michelle Donelan, said 22 university and NHS trust projects would receive funding for projects including developing a semi-autonomous surgical robotics platform for the removal of tumours and using AI to predict the likelihood of a person's future health problems based on their existing conditions.

Read more here:

Nice recommends use of AI in NHS radiotherapy treatment in England - The Guardian

Read More..

Can AI Write a Good IEP? What Special Education Experts Say – Education Week

Special education professionals often gripe about the onslaught of paperwork they're required to fill out, on top of the challenges of providing robust services to students with disabilities.

What if artificial intelligence could wipe out at least some of that burden?

That's the question some educators are pondering as generative AI tools like ChatGPT and Bard grow more widely available and technologically sophisticated.

But investing too quickly in the promise of AI could be perilous for special education as well. Each student who qualifies for special education services has unique circumstances that can't easily be standardized, said Lindsay Jones, chief executive officer of CAST, a nonprofit formerly known as the Center for Applied Special Technology.

"Algorithms aren't flexible enough to recognize the diversity of needs. We have to move forward cautiously," Jones said. "But with that said, there is some really interesting and promising stuff that's happening."

Here are a few examples, and the opportunities and limitations of each.

Opportunity: Educators serving students with disabilities spend countless hours documenting the services they provide to ensure they are complying with the Individuals with Disabilities Education Act (IDEA). The more students they are responsible for overseeing, the more documentation they have to keep.

The less time special education providers have to spend filling out forms, the more time they can spend on the core of their work: providing students with the guidance and resources they need to succeed in the classroom, regardless of their disability status.

Limitation: Just because AI can possibly do paperwork doesn't mean it will do it correctly.

Forms that deal with special education services often include sensitive information that would be risky or potentially even illegal to share on a publicly accessible AI platform that absorbs all of the data it receives.

Some educators have already experimented with using fake names to prevent sensitive information from being exposed, said Tessie Bailey, director of the federally funded PROGRESS Center, which conducts research and advocates for students with disabilities. That approach can be helpful, Bailey said, but it doesn't entirely eliminate the underlying concern about privacy.

Opportunity: Some educators have already begun asking generative AI tools to help them with writing Individualized Education Programs, or IEPs. These complex documents undergird the learning experience for America's roughly 7 million students with disabilities. Educators could save time and perhaps even learn something from a tool that can access a repository of existing IEP language.

Limitation: So far, AI tools have proven able to generate documents that look like IEPs. But that basic standard isn't enough: by law, the documents also need to substantively match the student's needs and address them in detailed, tangible ways. Only a human can ensure the IEP does that, said Bailey, who's also a principal consultant for the American Institutes for Research.

"If teachers don't have the capacity to create a high-quality educational IEP, it doesn't matter if you give them AI," Bailey said.

Opportunity: Educators are starting to get requests from parents for AI tools to be among the services provided to their children in their IEP. The potential for these tools to help students is vast, from voice assistants that narrate for visually impaired students to translators that convert text to and from English.

Limitation: A teacher recently came to Bailey's organization asking for guidance on whether to grant a parent's request for the child to get help from artificial intelligence tools.

"We don't really have answers," Bailey said.

Bailey's own child has dysgraphia, a condition that causes a person's writing to be distorted or incorrect. AI tools have been helping him write papers.

But it's still necessary to teach her son how to use the tool, and how to develop the ideas it ends up helping him translate into written words, she said.

Districts also need more guidance on which emerging tools have been rigorously tested for efficacy, Jones said.

"If you have a framework and a way for approaching this consistently, that includes asking questions and being curious, I think we can move into an environment that is much more flexible," Jones said. "It is going to take all of us."

Read more:

Can AI Write a Good IEP? What Special Education Experts Say - Education Week

Read More..

AI algorithm discovers ‘potentially hazardous’ asteroid 600 feet wide … – Space.com

A new artificial intelligence algorithm programmed to hunt for potentially dangerous near-Earth asteroids has discovered its first space rock.

The roughly 600-foot-wide (180 meters) asteroid has received the designation 2022 SF289 and is expected to approach Earth to within 140,000 miles (225,000 kilometers). That distance is shorter than the one between our planet and the moon, which are, on average, 238,855 miles (384,400 km) apart. This is close enough to define the rock as a Potentially Hazardous Asteroid (PHA), but that doesn't mean it will impact Earth in the foreseeable future.

The HelioLinc3D program, which found the asteroid, was developed to help the Vera C. Rubin Observatory, currently under construction in northern Chile, conduct its upcoming 10-year survey of the night sky by searching for space rocks in Earth's near vicinity. As such, the algorithm could be vital in giving scientists a heads-up about space rocks on a collision course with Earth.

"By demonstrating the real-world effectiveness of the software that Rubin will use to look for thousands of yet-unknown potentially hazardous asteroids, the discovery of 2022 SF289 makes us all safer," Vera C. Rubin researcher Ari Heinze said in a statement.

Related: Super-close supernova captivates record number of citizen scientists

Tens of millions of space rocks roam the solar system, ranging from asteroids a few feet across to dwarf planets around the size of the moon. These space rocks are the remains of the material that formed the planets around 4.5 billion years ago.

While most of these objects are located far from Earth, with the majority of asteroids residing in the main asteroid belt between Mars and Jupiter, some have orbits that bring them close to Earth. Sometimes worryingly close.

Space rocks that come close to Earth are defined as near-Earth objects (NEOs), and asteroids that venture to within around 5 million miles of the planet get Potentially Hazardous Asteroid (PHA) status. This doesn't mean that they will impact the planet, though. Just as is the case with 2022 SF289, no currently known PHA poses an impact risk for at least the next 100 years. Astronomers search for potentially hazardous asteroids and monitor their orbits just to make sure they are not heading for a collision with the planet.
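For the curious, the formal PHA test is a simple threshold check: an Earth minimum orbit intersection distance (MOID) of 0.05 astronomical units or less (about 4.6 million miles, the "around 5 million miles" above) and an absolute magnitude H of 22.0 or brighter. The sketch below encodes those criteria; the sample values are illustrative, not 2022 SF289's published orbital elements.

```python
# Standard PHA criteria: Earth MOID <= 0.05 au and absolute magnitude
# H <= 22.0 (brighter means bigger, roughly 460 feet or more across
# at typical albedos).
def is_pha(earth_moid_au: float, abs_magnitude_h: float) -> bool:
    return earth_moid_au <= 0.05 and abs_magnitude_h <= 22.0

# Illustrative values only: a ~140,000-mile close approach is ~0.0015 au,
# far inside the MOID threshold, and a ~600-foot asteroid is well
# brighter than H = 22.
print(is_pha(0.0015, 21.0))  # -> True
```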

This new PHA was found when the asteroid-hunting algorithm was paired with data from the ATLAS survey in Hawaii, as a test of its efficiency before Rubin is completed.

The discovery of 2022 SF289 has shown that HelioLinc3D can spot asteroids with fewer observations than current space rock hunting techniques allow.

Searching for potentially hazardous asteroids involves taking images of parts of the sky at least four times a night. When astronomers spot a moving point of light traveling in an unambiguous straight line across the series of images, they can be quite certain they have found an asteroid. Further observations are then made to better constrain the orbit of these space rocks around the sun.

The new algorithm, however, can make a detection from just two images, speeding up the whole process.
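To see the idea in miniature, here is a toy sketch of cross-night linking; it is a simplification, not the actual HelioLinc3D algorithm, which works in heliocentric coordinates. Two detections in one night define a short "tracklet" with a position and angular velocity, and tracklets from different nights are linked when one, extrapolated forward in time, lands on the other.

```python
def tracklet(d1, d2):
    """Two detections (t_days, ra_deg, dec_deg) -> (t, ra, dec, ra_rate, dec_rate)."""
    (t1, r1, c1), (t2, r2, c2) = d1, d2
    dt = t2 - t1
    return (t1, r1, c1, (r2 - r1) / dt, (c2 - c1) / dt)

def links(tk_a, tk_b, tol_deg=0.01):
    """True if tracklet A, extrapolated to tracklet B's epoch, matches B's position."""
    t_a, ra, dec, ra_rate, dec_rate = tk_a
    t_b, rb, db, _, _ = tk_b
    dt = t_b - t_a
    return abs(ra + ra_rate * dt - rb) < tol_deg and abs(dec + dec_rate * dt - db) < tol_deg

# Synthetic detections of one slow-moving object on two different nights:
night1 = [(0.00, 10.000, 5.000), (0.04, 10.002, 5.001)]
night2 = [(1.00, 10.050, 5.025), (1.04, 10.052, 5.026)]

print(links(tracklet(*night1), tracklet(*night2)))  # -> True: one candidate object
```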

Around 2,350 PHAs have been discovered thus far, and though none poses a threat of hitting Earth in the near future, astronomers aren't quite ready to relax just yet as they know that many more potentially dangerous space rocks are out there yet to be uncovered.

It is estimated that the Vera Rubin Observatory could uncover as many as 3,000 hitherto undiscovered potentially hazardous asteroids.

Rubin's 27-foot-wide (8.4 meters) mirror and massive 3,200-megapixel camera will revisit locations in the night sky twice per night, rather than the four visits per night made by current telescopes. Hence the creation of HelioLinc3D, a code that could find asteroids in Rubin's dataset even with fewer available observations.

But the algorithm's creators wanted to give the software a trial run before the construction of Rubin is completed. This meant testing whether it could find an asteroid in data that had already been collected, data with too few observations for currently employed algorithms to scour.

With ATLAS data offered as such a test subject, HelioLinc3D set about looking for PHAs, and on July 18, 2023, it hit paydirt, uncovering 2022 SF289. This PHA was first spotted by ATLAS on September 19, 2022, while it was 3 million miles from Earth. ATLAS had actually spotted the object three times over the course of four nights but never four times in the same night, so current surveys missed it. By putting together fragments of data from all four nights, HelioLinc3D was able to identify the PHA.

"Any survey will have difficulty discovering objects like 2022 SF289 that are near its sensitivity limit, but HelioLinc3D shows that it is possible to recover these faint objects as long as they are visible over several nights," lead ATLAS astronomer Larry Denneau said. "This in effect gives us a 'bigger, better' telescope."

With the position of 2022 SF289 pinpointed, astronomers could then follow up on the discovery with other telescopes to confirm the PHA's existence.

"This is just a small taste of what to expect with the Rubin Observatory in less than two years, when HelioLinc3D will be discovering an object like this every night," Rubin scientist and HelioLinc3D team leader Mario Jurić said. "But more broadly, it's a preview of the coming era of data-intensive astronomy. From HelioLinc3D to AI-assisted codes, the next decade of discovery will be a story of advancement in algorithms as much as in new, large telescopes."

The discovery of 2022 SF289 was announced in the International Astronomical Union's Minor Planet Electronic Circular MPEC 2023-O26.

Go here to see the original:

AI algorithm discovers 'potentially hazardous' asteroid 600 feet wide ... - Space.com

Read More..

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ – CNBC

Nvidia president and CEO Jensen Huang speaks at the COMPUTEX forum in Taiwan. "Everyone is a programmer. Now, you just have to say something to the computer." (Photo by Walid Berrazeg/SOPA Images/LightRocket via Getty Images)

On Tuesday, Nvidia announced a new chip designed to run artificial intelligence models, as it seeks to fend off competitors in the AI hardware space, including AMD, Google and Amazon.

Currently, Nvidia dominates the market for AI chips with over 80% market share, according to some estimates. The company's specialty is graphics processing units, or GPUs, which have become the preferred chips for the large AI models that underpin generative AI software, such as Google's Bard and OpenAI's ChatGPT. But Nvidia's chips are in short supply as tech giants, cloud providers and startups vie for GPU capacity to develop their own AI models.

Nvidia's new chip, the GH200, has the same GPU as the company's current highest-end AI chip, the H100. But the GH200 pairs that GPU with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.

"We're giving this processor a boost," Nvidia CEO Jensen Huang said in a talk at a conference on Tuesday. He added, "This processor is designed for the scale-out of the world's data centers."

The new chip will be available from Nvidia's distributors in the second quarter of next year, Huang said, and should be available for sampling by the end of the year. Nvidia representatives declined to give a price.

Oftentimes, the process of working with AI models is split into at least two parts: training and inference.

First, a model is trained using large amounts of data, a process that can take months and sometimes requires thousands of GPUs, such as, in Nvidia's case, its H100 and A100 chips. Then the model is used in software to make predictions or generate content, using a process called inference. Like training, inference is computationally expensive, and it requires a lot of processing power every time the software runs, like when it works to generate a text or image. But unlike training, inference takes place near-constantly, while training is only required when the model needs updating.
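A minimal sketch in PyTorch, with a toy stand-in for a large model, makes the asymmetry concrete: training loops over data and updates weights, while inference is a single frozen forward pass that nonetheless runs on every request.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)  # toy stand-in for a large AI model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training: compute-heavy, done up front (and again only for updates).
for _ in range(100):
    x, y = torch.randn(32, 8), torch.randn(32, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # gradients drive the weight updates
    optimizer.step()

# Inference: no gradients, but it runs every time the software is used.
with torch.no_grad():
    print(model(torch.randn(1, 8)))
```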

"You can take pretty much any large language model you want and put it in this and it will inference like crazy," Huang said. "The inference cost of large language models will drop significantly."

Nvidia's new GH200 is designed for inference since it has more memory capacity, allowing larger AI models to fit on a single system, Nvidia VP Ian Buck said on a call with analysts and reporters on Tuesday. Nvidia's H100 has 80GB of memory, versus 141GB on the new GH200. Nvidia also announced a system that combines two GH200 chips into a single computer for even larger models.

"Having larger memory allows the model to remain resident on a single GPU and not have to require multiple systems or multiple GPUs in order to run," Buck said.

The announcement comes as Nvidia's primary GPU rival, AMD, recently announced its own AI-oriented chip, the MI300X, which can support 192GB of memory and is being marketed for its capacity for AI inference. Companies including Google and Amazon are also designing their own custom AI chips for inference.

See the rest here:

Nvidia reveals new A.I. chip, says costs of running LLMs will 'drop significantly' - CNBC

Read More..