Category Archives: AI

This Viral AI Chatbot Will Lie and Say It’s Human – WIRED

In late April a video ad for a new AI company went viral on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: "Still hiring humans?" Also visible is the name of the firm behind the ad, Bland AI.

The reaction to Bland AI's ad, which has been viewed 3.7 million times on Twitter, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real live conversation. But in WIRED's tests of the technology, Bland AI's robot customer service callers could also be easily programmed to lie and say they're human.

In one scenario, Bland AI's public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI's bot even denied being an AI without instructions to do so.

Bland AI was founded in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in stealth mode, and its cofounder and chief executive, Isaiah Granet, doesn't name the company in his LinkedIn profile.

The startup's bot problem is indicative of a larger concern in the fast-growing field of generative AI: artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI's bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users, the people who actually interact with the product, to potential manipulation.

"My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it's human when it's not," says Jen Caltrider, the director of the Mozilla Foundation's Privacy Not Included research hub. "That's just a no-brainer, because people are more likely to relax around a real human."

Bland AI's head of growth, Michael Burke, emphasized to WIRED that the company's services are geared toward enterprise clients, who will be using the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited, to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.

"This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing," Burke says. "You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can't do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening."

Original post:

This Viral AI Chatbot Will Lie and Say It's Human - WIRED

French AI Startups Felt Unstoppable. Then Came the Election – WIRED

"Then on the other extreme, [the left-wing New Popular Front] have been so vocal about all the taxation measures they want to bring back that it looks like we're just going back to pre-Macron period," Varza says. She points to France's 2012 "les pigeons" (or "suckers") movement, a campaign by angry internet entrepreneurs that opposed Socialist president François Hollande's plan to dramatically raise taxes for founders.

Maya Noël, CEO of France Digitale, an industry group for startups, is worried not only about France's ability to attract overseas talent, but also about how appealing the next government will be to foreign investors. In February, Google said it would open a new AI hub in Paris, where 300 researchers and engineers would be based. Three months later, Microsoft also announced a record $4 billion investment in its French AI infrastructure. Meta has had an AI research lab in Paris since 2015. "Today France is attractive to foreign investors," she says. "And we need them." Neither Google nor Meta replied to WIRED's request for comment. Microsoft declined to comment.

The vote will not unseat Macron himself (the presidential election is not scheduled until 2027), but the election outcome could dramatically reshape the lower house of the French Parliament, the National Assembly, and install a prime minister from either the far-right or left-wing coalition. This would plunge the government into uncertainty, raising the risk of gridlock. In the past 60 years, there have been only three occasions when a president has been forced to govern with a prime minister from the opposition party, an arrangement known in France as cohabitation.

No AI startup has benefited more from the Macron era than Mistral, which counts Cédric O, former digital minister within Macron's government, among its cofounders. Mistral has not commented publicly on the choice France faces at the polls. The closest the company has come to sharing its views is Cédric O's decision to repost an X post by entrepreneur Gilles Babinet last week that said: "I hate the far-right but the left's economic program is surreal." When WIRED asked Mistral about the retweet, the company said O was not a spokesperson, and declined to comment.

Babinet, a member of the government's artificial intelligence committee, says he has already heard colleagues considering leaving France. "A few of the coders I know from Senegal, from Morocco, are already planning their next move," he says, claiming people have also approached him for help renewing their visas early in case this becomes more difficult under a far-right government.

While other industries have been quietly rushing to support the far-right as a preferable alternative to the left-wing alliance, according to reports, Babinet plays down the threat from the New Popular Front. "It's clear they come with very old-fashioned economical rules, and therefore they don't understand at all the new economy," he says. But after speaking to New Popular Front members, he says the hard-left are a minority in the alliance. "Most of these people are Social Democrats, and therefore they know from experience that when François Hollande came into power, he tried to increase the taxes on the technology, and it failed miserably."

Already there is a sense of damage control, as the industry tries to reassure outsiders everything will be fine. Babinet points to other moments of political chaos that industries survived. "At the end of the day, Brexit was not so much of a nightmare for the tech scene in the UK," he says. The UK is still the preferred place to launch a generative AI startup, according to the Accel report.

Stanislas Polu, an OpenAI alumnus who launched French AI startup Dust last year, agrees the industry has enough momentum to survive any headwinds coming its way. "Some of the outcomes might be a bit gloomy," he says, adding he expects personal finances to be hit. "It's always a little bit more complicated to navigate a higher volatility environment. I guess we're hoping that the more moderate people will govern that country. I think that's all we can hope for."

Go here to see the original:

French AI Startups Felt Unstoppable. Then Came the Election - WIRED

Sam Altman-backed AI processor venture hires ex-Apple engineer to lead hardware development – Tom’s Hardware

Rain AI, an AI hardware processor developer backed by OpenAI's Sam Altman and investment banks, has hired Jean-Didier Allegrucci, a former Apple chip executive, to lead its hardware engineering. This high-profile hire indicates that Rain AI has serious plans for its processors.

Jean-Didier Allegrucci, who has yet to update his LinkedIn profile, worked on Apple's systems-on-chip (SoCs) for over 17 years, starting in June 2007, and oversaw development of more than 30 processors used for iPhones, Macs, iPads, and the Apple Watch. According to a Rain AI blog post, Allegrucci was instrumental in building Apple's world-class SoC development team, overseeing areas such as SoC methodology, architecture, design, integration, and verification, so his experience will be extremely valuable for Rain AI. Before Apple, J-D Allegrucci worked at Vivante and ATI Technologies, both developers of graphics processing units.

"We could not be more excited to have a hardware leader of J-Ds caliber overseeing our silicon efforts," said Rain AI CEO William Passo. "Our novel compute-in-memory (CIM) technology will help unlock the true potential of today's generative AI models, and get us one step closer to running the fastest, cheapest, and most advanced AI models anywhere."

At Rain AI, Jean-Didier Allegrucci will collaborate with Amin Firoozshahian, Rain AI's lead architect, who transitioned from Meta Platforms after a five-year tenure. This partnership combines deep industry experience and innovative thinking to drive the company's ambitious goals. Yet, it will take quite some time before Amin Firoozshahian and Jean-Didier Allegrucci build their first system-on-chip at Rain. The process typically takes many years.

Rain AI's focus is on in-memory compute technology, which processes data at the storage location, mimicking the human brain. It promises to enhance energy efficiency significantly compared to traditional AI processors, such as Nvidia's H100 and B100/B200, or AMD's Instinct MI300X.

Earlier this month Rain AI licensed Andes Technology's AX45MPV RISC-V vector processor with the ACE/COPILOT instruction customization and partnered with Andes's Custom Computing Business Unit (CCBU) to accelerate development of its compute-in-memory generative AI solutions. This collaboration aims to enhance Rain AI's product roadmap and deliver scalable AI solutions by early 2025.

Given the time it usually takes to develop a complex processor from scratch and the fact that Rain AI is tasking Andes to help it build its first SoC by early 2025, it looks like the processors whose development will be led by Jean-Didier Allegrucci are at least a couple of years away, and his input on the 2025 product will be limited, if any.

Excerpt from:

Sam Altman-backed AI processor venture hires ex-Apple engineer to lead hardware development - Tom's Hardware

Instagram is starting to let some creators make AI versions of themselves – The Verge

Instagram's new AI Studio will let creators make AI chatbot versions of themselves, and Meta is starting to roll it out as an early test in the US, CEO Mark Zuckerberg announced on his broadcast channel on Thursday.

As part of the test, "you might start seeing AIs from your favorite creators and interest-based AIs in the coming weeks on Instagram," according to Zuckerberg. These will primarily show up in messaging for now, and will be clearly labeled as AI.

Zuckerberg shared a few videos of conversations with creator-made AI chatbots as examples. From a creator's Instagram, you can tap a Message button to kick off a conversation. A notice at the top says that the messages are generated by AI and some may be inaccurate or inappropriate. (Let's hope Meta's systems can prevent these creator-made AIs from saying something terrible.) Meta also includes "AI" in front of the creator's name and, at least for now, a "beta" tag after the person's name.

"We've been working closely with creators to build these so they're helpful in answering the questions they get from their followers and can chat with people in a way that's useful and fun," Zuckerberg says. "It's early days and the first beta version of these AIs so we'll keep working on improving them, and make them available to more people soon."

In a new interview, Zuckerberg talked more about the company's AI strategy. He says, "We think people want to interact with lots of different people and businesses and there need to be a lot of different AIs that get created to reflect people's different interests." Meta wants to let creators and eventually small businesses create an AI for themselves to interact with their communities and customers. "We think that's going to create a much more compelling experience and just be more dynamic and useful than just having a single thing that people use." He also says Meta will let people make AI characters that aren't necessarily representations of themselves.

Broadly, this is a vision the company has had for a while; Zuckerberg made similar remarks in an interview with my colleague Alex Heath last year when the company announced its Meta AI assistant and celebrity-themed chatbots. Now that the company is actually testing AI Studio, we'll see if people really do want to chat with AI versions of their favorite creators.

Originally posted here:

Instagram is starting to let some creators make AI versions of themselves - The Verge

Consulting Firms Are the Early Winners of the AI Boom – The New York Times

After ChatGPT came out in 2022, the marketing team at Reckitt Benckiser, which makes Lysol and Mucinex, was convinced that new artificial intelligence technology could help its business. But the team was uncertain about how, so it turned to Boston Consulting Group for help.

Reckitt's request was one of hundreds that Boston Consulting Group received last year. It now earns a fifth of its revenue, up from zero just two years ago, through work related to artificial intelligence.

"There's a genuine thirst to figure out what are the implications for their businesses," said Vladimir Lukic, Boston Consulting Group's managing director for technology.

The next big boom in tech is a long-awaited gift for wonky consultants. From Boston Consulting Group and McKinsey & Company to IBM and Accenture, sales are growing and hiring is on the rise because companies are in desperate need of technology Sherpas who can help them figure out what generative A.I. means and how it can help their businesses.

While the tech industry is casting about for ways to make money off generative A.I., the consultants have begun cashing in.

IBM, which has 160,000 consultants, has secured more than $1 billion in sales commitments related to generative A.I. for consulting work and its watsonx system, which can be used to build and maintain A.I. models. Accenture, which provides consulting and technology services, booked $300 million in sales last year. About 40 percent of McKinsey's business this year will be generative A.I. related, and KPMG International, which has a global advisory division, went from making no money a year ago from generative-A.I.-related work to targeting more than $650 million in business opportunities in the United States tied to the technology over the past six months.

Read more:

Consulting Firms Are the Early Winners of the AI Boom - The New York Times

TEACHER VOICE: My students are afraid of AI – The Hechinger Report

Since the release of ChatGPT in November 2022, educators have pondered its implications for education. Some have leaned toward apocalyptic projections about the end of learning, while others remain cautiously optimistic.

My students took longer than I expected to discover generative AI. When I asked them about ChatGPT in February 2023, many had never heard of it.

But some caught up, and now our college's academic integrity office is busier than ever dealing with AI-related cheating. The need for guidelines is discussed in every college meeting, but I've noticed a worrying reaction among students that educators are not considering: fear.

Students are bombarded with negative ideas about AI. Punitive policies heighten that fear while failing to recognize the potential educational benefits of these technologies and that students will need to use them in their careers. Our role as educators is to cultivate critical thinking and equip students for a job market that will use AI, not to intimidate them.

Yet course descriptions include bans on the use of AI. Professors tell students they cannot use it. And students regularly read stories about their peers going on academic probation for using Grammarly. If students feel constantly under suspicion, it can create a hostile learning environment.

Many of my students haven't even played around with ChatGPT because they are scared of being accused of plagiarism. This avoidance creates a paradox in which students are expected to be adept with these modern tools post-graduation, yet are discouraged from engaging with them during their education.

I suspect the profile of my students makes them more prone to fear AI. Most are Hispanic and female, taking courses in translation and interpreting. They see that the overwhelmingly male and white tech bros in Silicon Valley shaping AI look nothing like them, and they internalize the idea that AI is not for them and not something they need to know about. I wasn't surprised that the only male student I had in class this past semester was the only student excited about ChatGPT from the very beginning.

Failing to develop AI literacy among Hispanic students can diminish their confidence and interest in engaging with these technologies. Their fearful reactions will widen the already concerning inequities between Hispanic and non-Hispanic students; the degree completion gap between Latino and white students increased between 2018 and 2021.

The stakes are high. Similar to the internet boom, AI will revolutionize daily activities and, certainly, knowledge jobs. To prepare our students for these changes, we need to help them understand what AI is and encourage them to explore the functionalities of large language models like ChatGPT.

I decided to address the issue head-on. I asked my students to write speeches on a current affairs topic. But first, I asked for their thoughts on AI. I was shocked by the extent of their misunderstanding: Many believed that AI was an omniscient knowledge-producing machine connected to the internet.

After I gave a brief presentation on AI, they expressed surprise that large language models are based on prediction rather than direct knowledge. Their curiosity was piqued, and they wanted to learn how to use AI effectively.

After they drafted their speeches without AI, I asked them to use ChatGPT to proofread their drafts and then report back to me. Again, they were surprised, this time about how much ChatGPT could improve their writing. I was happy (even proud) to see they were also critical of the output, with comments such as "It didn't sound like me" or "It made up parts of the story."

Was the activity perfect? Of course not. Prompting was challenging. I noticed a clear correlation between literacy levels and the quality of their prompts.

Students who struggled with college-level writing couldn't go beyond prompts such as "Make it sound smoother." Nonetheless, this basic activity was enough to spark curiosity and critical thinking about AI.

Individual activities like these are great, but without institutional support and guidance, efforts toward fostering AI literacy will fall short.

The provost of my college established an AI committee to develop college guidelines. It included professors from a wide range of disciplines (myself included), other staff members and, importantly, students.

Through multiple meetings, we brainstormed the main issues that needed to be included and researched specific topics like AI literacy, data privacy and safety, AI detectors and bias.

We created a document divided into key points that everyone could understand. The draft document was then circulated among faculty and other committees for feedback.

Initially, we were concerned that circulating the guidelines among too many stakeholders might complicate the process, but this step proved crucial. Feedback from professors in areas such as history and philosophy strengthened the guidelines, adding valuable perspectives. This collaborative approach also helped increase institutional buy-in, as everyone's contribution was valued.

Underfunded public institutions like mine face significant challenges integrating AI into education. While AI offers incredible opportunities for educators, realizing these opportunities requires substantial institutional investment.

Asking adjuncts in my department, who are grossly underpaid, to find time to learn how to use AI and incorporate it into their classes seems unethical. Yet, incorporating AI into our knowledge production activities can significantly boost student outcomes.

If this happens only at wealthy institutions, we will widen academic performance gaps.

Furthermore, if only students at wealthy institutions and companies get to use AI, the bias inherent in these large language models will continue to grow.

If we want our classes to ensure equitable educational opportunities for all students, minority-serving institutions cannot fall behind in AI adoption.

Cristina Lozano Argelles is an assistant professor of interpreting and bilingualism at John Jay College, part of the City University of New York, where she researches the cognitive and social dimensions of language learning.

This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

Original post:

TEACHER VOICE: My students are afraid of AI - The Hechinger Report

OpenAI Wants AI to Help Humans Train AI – WIRED

One of the key ingredients that made ChatGPT a ripsnorting success was an army of human trainers who gave the artificial intelligence model behind the bot guidance on what constitutes good and bad outputs. OpenAI now says that adding even more AI into the mix, to assist human trainers, could help make AI helpers smarter and more reliable.

In developing ChatGPT, OpenAI pioneered the use of reinforcement learning from human feedback, or RLHF. This technique uses input from human testers to fine-tune an AI model so that its output is judged to be more coherent, less objectionable, and more accurate. The ratings the trainers give feed into an algorithm that drives the model's behavior. The technique has proven crucial both to making chatbots more reliable and useful and to preventing them from misbehaving.
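
To make that feedback loop concrete, here is a minimal sketch, not OpenAI's actual implementation, of the pairwise preference objective commonly used to train RLHF reward models: the model learns to score the response a human preferred above the one the human rejected. All names and numbers below are illustrative.

```python
import numpy as np

def pairwise_preference_loss(reward_chosen: np.ndarray,
                             reward_rejected: np.ndarray) -> float:
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).

    Minimizing this pushes the reward model to rank human-preferred
    responses above rejected ones; the trained reward model then
    steers the reinforcement-learning step that shapes the chatbot.
    """
    margin = reward_chosen - reward_rejected
    # logaddexp(0, -m) computes log(1 + exp(-m)) without overflow
    return float(np.mean(np.logaddexp(0.0, -margin)))

# Hypothetical reward-model scores for three (chosen, rejected) response pairs
chosen = np.array([2.1, 0.3, 1.5])
rejected = np.array([0.4, 0.9, 1.2])
print(pairwise_preference_loss(chosen, rejected))  # lower = better ranking
```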

"RLHF does work very well, but it has some key limitations," says Nat McAleese, a researcher at OpenAI involved with the new work. For one thing, human feedback can be inconsistent. For another, it can be difficult for even skilled humans to rate extremely complex outputs, such as sophisticated software code. The process can also optimize a model to produce output that seems convincing rather than actually being accurate.

OpenAI developed a new model by fine-tuning its most powerful offering, GPT-4, to assist human trainers tasked with assessing code. The company found that the new model, dubbed CriticGPT, could catch bugs that humans missed, and that human judges found its critiques of code to be better 63 percent of the time. OpenAI will look at extending the approach to areas beyond code in the future.

"We're starting work to integrate this technique into our RLHF chat stack," McAleese says. He notes that the approach is imperfect, since CriticGPT can also make mistakes by hallucinating, but he adds that the technique could help make OpenAI's models, as well as tools like ChatGPT, more accurate by reducing errors in human training. He adds that it might also prove crucial in helping AI models become much smarter, because it may allow humans to help train an AI that exceeds their own abilities. "And as models continue to get better and better, we suspect that people will need more help," McAleese says.

The new technique is one of many now being developed to improve large language models and squeeze more abilities out of them. It is also part of an effort to ensure that AI behaves in acceptable ways even as it becomes more capable.

Earlier this month, Anthropic, a rival to OpenAI founded by ex-OpenAI employees, announced a more capable version of its own chatbot, called Claude, thanks to improvements in the model's training regimen and the data it is fed. Anthropic and OpenAI have both also recently touted new ways of inspecting AI models to understand how they arrive at their output in order to better prevent unwanted behavior such as deception.

The new technique might help OpenAI train increasingly powerful AI models while ensuring their output is more trustworthy and aligned with human values, especially if the company successfully deploys it in more areas than code. OpenAI has said that it is training its next major AI model, and the company is evidently keen to show that it is serious about ensuring that it behaves. This follows the dissolution of a prominent team dedicated to assessing the long-term risks posed by AI. The team was co-led by Ilya Sutskever, a cofounder of the company and former board member who briefly pushed CEO Sam Altman out of the company before recanting and helping him regain control. Several members of that team have since criticized the company for moving riskily as it rushes to develop and commercialize powerful AI algorithms.

Dylan Hadfield-Menell, a professor at MIT who researches ways to align AI, says the idea of having AI models help train more powerful ones has been kicking around for a while. "This is a pretty natural development," he says.

Hadfield-Menell notes that the researchers who originally developed techniques used for RLHF discussed related ideas several years ago. He says it remains to be seen how generally applicable and powerful it is. "It might lead to big jumps in individual capabilities, and it might be a stepping stone towards sort of more effective feedback in the long run," he says.

Go here to see the original:

OpenAI Wants AI to Help Humans Train AI - WIRED

How OpenAI Leaving China Will Reshape the Country’s AI Scene – TIME

OpenAI's abrupt move to ban access to its services in China is setting the scene for an industry shakeup, as local AI leaders from Baidu Inc. to Alibaba Group Holding Ltd. move to grab more of the field.

The ChatGPT creator this week sent memos to Chinese users warning it will cut off access to its widely used AI development software and tools from July, triggering a scramble to fill the void. Since Tuesday, at least a half-dozen companies and startups, including Tencent Holdings Ltd. and Zhipu AI, began offering incentives to developers making the switch.

OpenAI's shift will accentuate the divide between China and the U.S., which is trying to curb Beijing's AI and chip efforts. While the startup's exit offers an opportunity for sector leaders to grow their user base, it also deprives entrepreneurs and cash-strapped startups of some of the best tools available to fine-tune or get their AI applications off the ground.

For China, that could help usher out many smaller startups created during the "battle of a hundred models," in the wake of ChatGPT's late 2022 debut. And a bigger concern may be whether open-source models like Meta Platforms Inc.'s Llama also cut off access, said Bernard Leong, chief executive officer of Singapore-based Dorje AI.

"There's probably going to be a bloodbath of the large language models and I suspect that there's probably going to be very few players left," said Leong, who's also founder of the tech podcast Analyse Asia. "There will be very few winners, and those will be the biggest in China."

Chinese artificial intelligence-related stocks, including Alibaba and Iflytek Co., rose on Wednesday.

The major firms were quick to sense the opportunity.

For users migrating from OpenAI, Baidu promised free AI model fine-tuning and expert guidance on its flagship Ernie model, along with 50 million free tokens that developers can use to query the bot. Alibaba and Tencent posted ads encouraging the shift. Tech pioneer Kai-Fu Lee's 01.AI touted heavy discounts.

Baichuan, which is backed by both Alibaba and Tencent, offered 10 million free tokens. SenseTime Group Inc. dangled 50 million. Zhipu put forth 150 million tokens and a series of training sessions to ease the transition. Even Microsoft Corp., OpenAI's biggest backer, published a step-by-step guide on WeChat on how to migrate to its local service, operated by local partner 21Vianet.

U.S. firms such as OpenAI, Meta and Alphabet Inc. have led the world in generative AI, which spits out text, images and video from simple commands. Underpinning those models are application programming interfaces that developers use to build and refine their own platforms, integrating services either with the likes of ChatGPT or with their own proprietary models.
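
For a sense of what building on such an API looks like, here is a minimal sketch using the OpenAI Python SDK; the model name, prompt, and support scenario are illustrative, and a platform developer would wrap calls like this inside their own application logic.

```python
from openai import OpenAI

# Assumes an OPENAI_API_KEY environment variable; the model name is illustrative.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": "Summarize the refund policy in two sentences."},
    ],
)
print(response.choices[0].message.content)
```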

That was a boon to Chinese developers starting from scratch, who accessed OpenAI's tools through virtual private networks or other ways around the country's Great Firewall. Many local developers, particularly those without deep pockets, favored training AI systems and applications via OpenAI's tools, because they were regarded as industry benchmarks.

OpenAI is now threatening to sever the connection.

"Leading Chinese large language models can benefit from the restricted access to OpenAI, and it will help to filter out smaller, less effective players from the market," said You Chuanman, head of the Chinese University of Hong Kong-Shenzhen's IIA Centre for Regulation and Global Governance. "At the same time, it will make it harder for Chinese developers to use the most advanced global AI algorithms."

OpenAI's move coincides with rising pressure from Washington to curb Chinese access to the most advanced artificial intelligence and semiconductor technology. The U.S. Treasury Department advanced plans over the weekend to further restrict investments by U.S. individuals and companies into China, with a focus on curtailing next-generation technologies.

In the long run, industry experts say a lack of access to global tools may further impede Chinese AI players in general as they play catch-up to the U.S. Alibaba Chairman Joe Tsai has said it would take at least two years for homegrown AI models to match U.S. ones.

It could also accelerate a migration overseas by Chinese tech startups seeking faster-growing markets with less political uncertainty.

"This situation is directly related to the ongoing competition between China and the U.S. in frontier technologies," said Neil Zhu Xiaohu, the founder and chief scientist of University AI, which trains Chinese companies.

"We had U.S. laws targeting Chinese semiconductors previously and more recently, there are semiconductor and AI work restrictions, so the restriction of China's API services is not something that happened out of the blue."

See the original post here:

How OpenAI Leaving China Will Reshape the Country's AI Scene - TIME

Hebbia raises nearly $100M Series B for AI-powered document search led by Andreessen Horowitz – TechCrunch

Hebbia, a startup using generative AI to search large documents and return answers, has raised a nearly $100 million Series B led by Andreessen Horowitz, according to three people with knowledge of the matter.

The round valued the company between $700 million and $800 million, although TechCrunch couldn't verify whether that valuation is pre- or post-money. (One possible scenario is $700 million pre/$800 million post: post-money valuation is the pre-money figure plus the new capital, and the round is roughly $100 million.) Hebbia disclosed in an SEC filing in May that it had by then raised $93 million out of a hoped-for $100 million, but we understand from two of the people that the round hit a near $100 million mark and has closed.

Hebbia and Andreessen Horowitz didn't respond to a request for comment.

Hebbia was founded in 2020 by George Sivulka, who launched the company while working on his PhD in electrical engineering at Stanford. Sivulka was inspired by his friends working in the financial industry who told him that part of their long work weeks was spent searching for information in SEC filings and other dense documents. Sivulka thought that AI could help them save hours at the office and give them more time for rest and sleep.

Hebbia's AI can look over billions of documents at once, including PDFs, PowerPoints, spreadsheets, and transcripts, and return specific answers, the company says.
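
Hebbia has not published its architecture, but document search at this scale is typically built on embedding-based retrieval: passages are indexed as vectors, the closest ones to a query are fetched, and only those are handed to a language model to compose an answer. A generic sketch of the retrieval step, with a toy embedding and made-up passages, might look like this:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a real system would call an embedding model.
    Here characters are hashed into a fixed-size unit vector so the
    example runs end to end."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

# Hypothetical passages extracted from filings
passages = [
    "Revenue for fiscal 2023 increased 12 percent to $4.1 billion.",
    "The board approved a new share repurchase program.",
    "Risk factors include supply chain disruption and currency exposure.",
]
index = np.stack([embed(p) for p in passages])

query = "How much did revenue grow?"
scores = index @ embed(query)  # cosine similarity, since vectors are unit-norm
best = int(np.argmax(scores))
print(passages[best])  # the passage a language model would answer from
```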

The startup sells primarily to financial service firms, including hedge funds and investment banks. But its product could also be used by law firms and other professional domains.

The latest funding brings Hebbia's total capital to over $120 million. The company raised its $30 million Series A in September 2022, led by Index Ventures with participation from Radical Ventures.

The company's product is similar to Glean, whose software can fetch information in plain English from various business applications. In February, Glean raised a $200 million Series D at a valuation of $2.2 billion, led by Kleiner Perkins and Lightspeed.

Read more here:

Hebbia raises nearly $100M Series B for AI-powered document search led by Andreessen Horowitz - TechCrunch

Nvidia’s Jensen Huang plays down competition worries as key supplier disappoints with subdued expectations for AI … – Fortune

Nvidia will remain the gold standard for AI training chips, CEO Jensen Huang told investors, even as rivals push to cut into his market share and one of Nvidias major suppliers gave a subdued forecast for AI chip sales.

Everyone from OpenAI to Elon Musk's Tesla relies on Nvidia semiconductors to run their large language or computer vision models. The rollout of Nvidia's Blackwell system later this year will only cement that lead, Huang said at the company's annual shareholder meeting on Wednesday.

Unveiled in March, Blackwell is the next generation of AI training processors to follow its flagship Hopper line of H100 chips, one of the most prized possessions in the tech industry, fetching prices in the tens of thousands of dollars each.

"The Blackwell architecture platform will likely be the most successful product in our history and even in the entire computer history," Huang said.

Nvidia briefly eclipsed Microsoft and Apple this month to become the world's most valuable company in a remarkable rally that has fueled much of this year's gains in the S&P 500 index. At more than $3 trillion, Huang's company was at one point worth more than entire economies and stock markets, only to suffer a record loss in market value as investors locked in profits.

Yet as long as Nvidia chips continue to be the benchmark for AI training, there's little reason to believe the longer-term outlook is cloudy, and here the fundamentals continue to look robust.

One of Nvidia's key advantages is a sticky AI ecosystem known as CUDA, short for Compute Unified Device Architecture. Much like how everyday consumers are loath to switch from their Apple iOS device to a Samsung phone using Google Android, an entire cohort of developers have been working with CUDA for years and feel so comfortable that there is little reason to consider using another software platform. Much like the hardware, CUDA effectively has become a standard of its own.

"The Nvidia platform is broadly available through every major cloud provider and computer maker, creating a large and attractive base for developers and customers, which makes our platform more valuable to our customers," Huang added on Wednesday.

The AI trade did take a recent hit after memory-chip supplier Micron Technology, provider of high-bandwidth memory (HBM) chips to companies like Nvidia, forecast fiscal fourth-quarter revenue would only match market expectations of around $7.6 billion.

Shares in Micron plunged 7%, sharply underperforming the slight gain in the broader tech-heavy Nasdaq Composite.

In the past, Micron and its Korean rivals Samsung and SK Hynix have seen a cyclical boom-and-bust common to the memory-chip market, long considered a commodity business when compared with logic chips such as graphics processors.

But excitement has surged given the demand for its chips necessary for AI training. Micron's stock more than doubled over the past 12 months, meaning investors have already priced in much of management's predicted growth.

"The guidance was basically in line with expectations, and in the AI hardware world if you guide in line that's considered a slight disappointment," says Gene Munster, a tech investor with Deepwater Asset Management. "Momentum investors just didn't see that incremental reason to be more positive about the story."

Analysts closely track demand for high-bandwidth memory as a leading indicator for the AI industry because it is so crucial for solving the biggest economic constraint facing AI training today: the issue of scaling.

Costs crucially do not rise in line with a model's complexity (the number of parameters it has, which can number into the billions) but rather grow exponentially. This results in diminishing returns in efficiency over time.

Even if revenue grows at a consistent rate, losses risk ballooning into the billions or even tens of billions a year as a model gets more advanced. This threatens to overwhelm any company that doesn't have a deep-pocketed investor like Microsoft capable of ensuring an OpenAI can still "pay the bills," as CEO Sam Altman phrased it recently.

A key reason for diminishing returns is the growing gap between the two factors that dictate AI training performance. The first is a logic chip's raw compute power, measured in FLOPS (floating-point operations per second), and the second is the memory bandwidth needed to quickly feed it data, often expressed in millions of transfers per second, or MT/s.

Since they work in tandem, scaling one without the other simply leads to waste and cost inefficiency. That's why FLOPS utilization, or how much of the compute can actually be brought to bear, is a key metric when judging the cost efficiency of AI models.
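
To see how the two factors interact, consider a small roofline-style calculation with illustrative, not vendor-published, numbers: effective throughput is capped by whichever of compute or memory bandwidth runs out first.

```python
def flops_utilization(peak_flops: float,
                      mem_bandwidth: float,
                      arithmetic_intensity: float) -> float:
    """Roofline estimate of the fraction of peak compute actually used.

    arithmetic_intensity = floating-point operations per byte moved.
    Achievable FLOP/s is the lesser of raw compute and what memory can feed.
    """
    achievable = min(peak_flops, mem_bandwidth * arithmetic_intensity)
    return achievable / peak_flops

# Illustrative accelerator: 1e15 FLOP/s peak, 3e12 bytes/s of memory bandwidth.
for intensity in (50, 200, 1000):  # FLOPs performed per byte moved
    util = flops_utilization(1e15, 3e12, intensity)
    print(f"{intensity:>5} FLOPs/byte -> {util:.0%} of peak compute used")
```

At low arithmetic intensity the chip is memory-bound and most of its compute sits idle, which is exactly the waste faster high-bandwidth memory is meant to reduce.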

As Micron points out, data transfer rates have been unable to keep pace with rising compute power. The resulting bottleneck, often referred to as the "memory wall," is a leading cause of today's inherent inefficiency when scaling AI-training models.

That explains why the U.S. government focused heavily on memory bandwidth when deciding which specific Nvidia chips needed to be banned from export to China in order to weaken Beijing's AI development program.

On Wednesday, Micron said its HBM business was sold out all the way through the end of the next calendar year, which trails its fiscal year by one quarter, echoing similar comments from Korean competitor SK Hynix.

"We expect to generate several hundred million dollars of revenue from HBM in FY24 and multiple [billions of dollars] in revenue from HBM in FY25," Micron said on Wednesday.

Go here to see the original:

Nvidia's Jensen Huang plays down competition worries as key supplier disappoints with subdued expectations for AI ... - Fortune