
How strategic visionary Atul Gupta is charting new frontiers in quantum computing and AI and setting global benchmarks – Business Insider Africa

Atul Gupta has significantly contributed to the fields of quantum computing and artificial intelligence (AI), influencing both organizational growth and industry standards. His work demonstrates the practical applications of these advanced technologies, showing how they can solve complex problems and optimize operations across various sectors.

In quantum computing, Gupta's initiatives have led to advancements that enable organizations to tackle problems previously unsolvable by classical computers. This has resulted in tangible benefits such as accelerated drug discovery processes in the pharmaceutical industry and enhanced design optimizations in CRM systems. These contributions have not only reduced processing times for complex data but also improved overall computational capabilities, demonstrating Gupta's role in pushing the boundaries of technological applications. "Our vision is to make the impossible possible by leveraging quantum computing to solve real-world challenges," Gupta states, emphasizing the transformative potential of this technology.

Gupta's expertise in AI has been crucial in transforming business operations from healthcare to cybersecurity. By integrating AI into key processes, he has improved efficiency and facilitated real-time decision-making. Applications like natural language processing for virtual assistants and data analysis for detecting financial fraud have been implemented under his guidance, streamlining operations and enabling businesses to adapt to the dynamic market landscape.

A significant aspect of Gupta's work is his impact on the Salesforce ecosystem. His proficiency with Salesforce Data Cloud AI has enabled businesses to analyze extensive customer data from multiple sources, providing actionable insights that enhance personalization and predict future behaviors. This has led to improved customer satisfaction and loyalty. Furthermore, his use of Salesforce's Einstein platform has helped businesses automate responses and anticipate customer needs, thereby refining marketing efforts and boosting sales performance. "Our goal is to empower businesses to understand and serve their customers better through intelligent data utilization," Gupta notes, highlighting the vision behind his work with Salesforce technologies.

In addressing the challenges of technological integration and security, Gupta has led efforts to develop robust hardware and sophisticated software solutions. As an expert in the field, he has been particularly focused on mitigating the security risks posed by quantum computing, promoting the adoption of quantum-safe cryptographic solutions and stringent governance standards. This approach has set new benchmarks in digital security, ensuring organizations remain protected against evolving cyber threats.

He has played a crucial role in the integration of the Dynatrace platform, which tremendously helped in enhancing system observability and security for customer operations. By leveraging Dynatrace's analytics and automation capabilities, businesses now have comprehensive visibility across their digital ecosystems, enabling real-time threat detection and neutralization. This optimization of system performance has ensured high levels of security and reliability, showcasing the practical benefits of advanced technological solutions.

Gupta has consistently applied his skills beyond any single industry or organization. His commitment to fostering investment and collaboration is evident in his vision for partnerships between governments and industries. These collaborations are crucial for overcoming existing limitations and unlocking the full potential of quantum computing and AI. Furthermore, his dedication to ensuring equitable access and ethical deployment of these technologies plays a critical role in preventing disparities and promoting responsible innovation.

Atul Gupta's work in integrating quantum computing and artificial intelligence is transformative, setting new standards for computational power and cognitive capabilities across industries. By managing the challenges and harnessing the opportunities these technologies present, Mr. Gupta shows his leadership and commitment to a technologically advanced and ethically guided future. His contributions enhance competitive edges and operational efficiencies, ensuring the sustainable and equitable expansion of these advanced technologies.


Qubit Pharmaceuticals And Sorbonne University Reduce The Number of Qubits Needed to Simulate Molecules – The Quantum Insider

Insider Brief

PRESS RELEASE Qubit Pharmaceuticals, a deeptech company specializing in the discovery of new drug candidates through molecular simulation and modeling accelerated by hybrid HPC and quantum computing, announces that it has drastically reduced the number of qubits needed to compute the properties of small molecules with its Hyperion-1 emulator, developed in partnership with Sorbonne University. This world first raises hopes of a near-term practical application of hybrid HPC-quantum computing to drug discovery.

As a result of these advances, Qubit Pharmaceuticals and Sorbonne Université are announcing that they have been awarded €8 million in funding under the France 2030 national plan for the further development of Hyperion-1.

A world first that saves years in research

By developing new hybrid HPC and quantum algorithms to leverage the computing power of quantum computers in the field of chemistry and drug discovery, Sorbonne Université and Qubit Pharmaceuticals have succeeded, with just 32 logical qubits, in predicting the physico-chemical properties of nitrogen (N2), hydrogen fluoride (HF), lithium hydride and water molecules that would normally require more than 250 perfect qubits. The Hyperion-1 emulator uses Genci supercomputers, Nvidia's SuperPod EOS, and one of Scaleway's many GPU clusters.

With this first proof of concept, the teams have demonstrated that the routine use of quantum computers coupled with high-performance computing platforms for chemistry and drug discovery is much closer than previously thought. Nearly 5 years could be gained, bringing us significantly closer to the era when quantum computers (noisy or perfect) could be used in production within hybrid supercomputers combining HPC, AI and quantum. The use of these new computing powers will improve the precision, speed and carbon footprint of calculations.

Soon to be deployed on today's noisy machines

To achieve this breakthrough, teams from Qubit Pharmaceuticals and Sorbonne University have developed new algorithms that break down a quantum calculation into its various components, some of which can be calculated precisely on conventional hardware. This strategy enables calculations to be distributed to the best-suited hardware (quantum or classical), while reducing the complexity of the algorithms needed to calculate the molecules' properties.

In this way, all calculations not enhanced by quantum computers are performed on classical GPUs. Because the physics used reduces the number of qubits required for the calculations, the team, by optimizing the approach to the extreme, has even managed to limit GPU requirements to a single card in some cases. As this hybrid classical/quantum approach is generalist, it can be applied to any type of quantum chemistry calculation: it is not restricted to molecules of pharmaceutical interest but extends to catalysts (chemistry, energy) and materials.
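The release does not detail Hyperion-1's algorithms, but the general shape of such hybrid schemes can be sketched: partition the contributions to a molecular energy, handle the classically tractable parts on conventional hardware, and reserve only the strongly correlated remainder for a quantum device. The term names and energy values below are purely illustrative, not the actual decomposition.

```python
# Toy sketch of a hybrid classical/quantum energy split.
# All numbers are illustrative placeholders (roughly water-like, in hartrees).

hamiltonian_terms = {
    "core_electrons": -74.96,  # treated precisely on classical hardware
    "mean_field":      -1.13,  # GPU-friendly classical computation
    "active_space":    -0.04,  # strongly correlated part left for the quantum device
}

classical = {"core_electrons", "mean_field"}

e_classical = sum(v for k, v in hamiltonian_terms.items() if k in classical)
e_quantum = sum(v for k, v in hamiltonian_terms.items() if k not in classical)
total = e_classical + e_quantum

print(f"classical part: {e_classical:.2f} Ha, quantum part: {e_quantum:.2f} Ha")
print(f"total energy:   {total:.2f} Ha")
```

The point of such a split is that the quantum resource is spent only where classical methods struggle, which is what lets the qubit count shrink so dramatically.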

Next steps include deploying these algorithms on existing noisy machines to quantify the impact of noise, comparing performance with recent calculations by IBM and Google, and predicting the properties of molecules of pharmaceutical interest. To achieve this, the teams will deploy new software acceleration methods to reach regimes that would require more than 400 qubits with purely quantum approaches. In the short term, this hybrid approach will reduce the need for physical qubits on quantum machines.

Robert Marino, CEO of Qubit Pharmaceuticals, declares: "At the end of 2023, we announced quantum chemistry calculations using 40 qubits. A few months later, we've managed to solve equations that would require 250 logical qubits. This extremely rapid development confirms the near-term potential of hybrid HPC and quantum algorithms in the service of drug discovery."

Jean-Philip Piquemal, Professor at Sorbonne University and Director of the Theoretical Chemistry Laboratory (Sorbonne University/CNRS), co-founder and Chief Scientific Officer of Qubit Pharmaceuticals, states: "This work clearly demonstrates the need to progress simultaneously on hardware and software development. It is by making breakthroughs on both fronts that we will be able to enter the era of quantum utility for drug discovery in the very short term."

Élisabeth Angel-Perez, Vice-President Research and Innovation at Sorbonne Université: "These innovative approaches developed by Qubit Pharmaceuticals are an illustration of Sorbonne Université's commitment to serving society. The precision and power of quantum computers offer major performance gains. With Qubit Pharmaceuticals, we measure the enormous potential of theoretical computing for quantum chemistry."

Sébastien Luttringer, Head of R&D at Scaleway: "We are proud to have participated in Qubit Pharmaceuticals' major algorithmic breakthrough with the support of our GPU computing power, the largest in the European cloud. Quantum computing is not just a hardware challenge; it's also a software one that we need to develop in order to solve real-world problems. Scaleway's pragmatic strategy, with the introduction of its QaaS (Quantum as a Service) offering, is to simplify access to the best resources to help these algorithms of tomorrow emerge."


New Method Could Pave The Way to Fast, Cross-Country Quantum Networks – The Quantum Insider

Insider Brief

PRESS RELEASE Quantum computers offer powerful ways to improve cybersecurity, communications, and data processing, among other fields. To realize these full benefits, however, multiple quantum computers need to be connected to build quantum networks or a quantum internet. Scientists have struggled to come up with practical methods of building such networks, which must transmit quantum information over long distances.

Now, researchers at the University of Chicago Pritzker School of Molecular Engineering (PME) have proposed a new approach: building long quantum channels using vacuum-sealed tubes with an array of spaced-out lenses. These vacuum beam guides, about 20 centimeters in diameter, would have ranges of thousands of kilometers and capacities of more than 10 trillion qubits per second, better than any existing quantum communication approach. Photons of light encoding quantum data would move through the vacuum tubes and remain focused thanks to the lenses.

"We believe this kind of network is feasible and has a lot of potential," said Liang Jiang, professor of molecular engineering and senior author of the new work. "It could not only be used for secure communication, but also for building distributed quantum computing networks, distributed quantum sensing technologies, new kinds of telescopes, and synchronized clocks."

Jiang collaborated with scientists at Stanford University and the California Institute of Technology on the new work, which is published in Physical Review Letters.

Sending qubits

While classical computers encode data in conventional bits, represented as a 0 or 1, quantum computers rely on qubits, which can exhibit quantum phenomena. These phenomena include superposition, a kind of ambiguous combination of states, as well as entanglement, which allows two quantum particles to be correlated with each other even across vast distances.

These properties give quantum computers the ability to analyze new types of data and store and pass along information in new, secure ways. Connecting multiple quantum computers can make them even more powerful, as their data processing abilities can be pooled. However, the networks typically used to connect computers are not ideal because they cannot maintain the quantum properties of qubits.

"You can't send a quantum state over a classical network," explained Jiang. "You might send a piece of data classically, a quantum computer can process it, but the result is then sent back classically again."

Some researchers have tested ways of using fiber-optic cables and satellites to transmit optical photons, which can act as qubits. Photons can travel a short distance through existing fiber-optic cables but generally lose their information quickly as the photons are absorbed. Photons bounced to satellites and back to the ground in a new location are absorbed less because of the vacuum of space, but their transmission is limited by atmospheric absorption and the availability of the satellites.

"What we wanted to do was to combine the advantages of each of those previous approaches," said PME graduate student Yuexun Huang, the first author of the new work. "In a vacuum, you can send a lot of information without attenuation. But being able to do that on the ground would be ideal."

Learning from LIGO

Scientists working at the Laser Interferometer Gravitational-Wave Observatory (LIGO) at the California Institute of Technology have built huge ground-based vacuum tubes to contain moving photons of light that can detect gravitational waves. Experiments at LIGO have shown that inside a nearly molecule-free vacuum, photons can travel for thousands of kilometers.

Inspired by this technology, Jiang, Huang, and their colleagues began to sketch out how smaller vacuum tubes could be used to transport photons between quantum computers. In their new theoretical work, they showed that these tubes, if designed and arranged properly, could carry photons across the country. Moreover, they would only need medium vacuum (10^-4 atmosphere pressure), which is much easier to maintain than the ultra-high vacuum (10^-11 atmosphere pressure) required for LIGO.

"The main challenge is that as a photon moves through a vacuum, it spreads out a bit," explained Jiang. "To overcome that, we propose putting lenses every few kilometers that can focus the beam over long distances without diffraction loss."
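The few-kilometer lens spacing follows from basic Gaussian-beam optics: a beam of waist w0 stays roughly collimated over its Rayleigh range, z_R = πw0²/λ, so lenses must refocus it on that scale. A quick check with illustrative numbers (telecom-band photons and a centimeter-scale waist, both assumptions on my part, not figures from the paper):

```python
import math

# Assumed parameters; neither value is taken from the paper.
wavelength = 1.55e-6  # m, telecom-band photons
w0 = 0.05             # m, beam waist (the tube diameter is ~20 cm)

# Rayleigh range: the distance over which diffraction roughly doubles
# the beam's cross-sectional area, setting the refocusing scale
z_R = math.pi * w0 ** 2 / wavelength

def beam_radius(z):
    """Gaussian beam radius after propagating a distance z from the waist."""
    return w0 * math.sqrt(1 + (z / z_R) ** 2)

print(f"Rayleigh range: {z_R / 1000:.1f} km")
print(f"beam radius after 3 km: {beam_radius(3000) * 100:.1f} cm")
```

With these inputs the Rayleigh range works out to roughly 5 km, and after 3 km the beam is still only a few centimeters wide, comfortably inside a 20 cm tube, consistent with the "lenses every few kilometers" design.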

In collaboration with researchers at Caltech, the group is planning tabletop experiments to test the practicality of the idea, and then plans to use larger vacuum tubes such as those at LIGO to work on how to align the lenses and stabilize the photon beams over long distances.

"To implement this technology on a larger scale certainly poses some civil engineering challenges that we need to figure out as well," said Jiang. "But the ultimate benefit is that we have large quantum networks that can communicate tens of terabytes of data per second."

Citation: Vacuum Beam Guide for Large Scale Quantum Networks, Huang et al., Physical Review Letters, July 9, 2024. DOI: 10.1103/PhysRevLett.133.020801

Funding: This work was supported by the Army Research Laboratory, Air Force Research Laboratory, National Science Foundation, NTT Research, Packard Foundation, the Marshall and Arlene Bennett Family Research Program, and the U.S. Department of Energy.


AI’s Energy Demands Are Out of Control. Welcome to the Internet’s Hyper-Consumption Era – WIRED

Right now, generative artificial intelligence is impossible to ignore online. An AI-generated summary may randomly appear at the top of the results whenever you do a Google search. Or you might be prompted to try Meta's AI tool while browsing Facebook. And that ever-present sparkle emoji continues to haunt my dreams.

This rush to add AI to as many online interactions as possible can be traced back to OpenAI's boundary-pushing release of ChatGPT late in 2022. Silicon Valley soon became obsessed with generative AI, and nearly two years later, AI tools powered by large language models permeate the online user experience.

One unfortunate side effect of this proliferation is that the computing processes required to run generative AI systems are much more resource intensive. This has led to the arrival of the internet's hyper-consumption era, a period defined by the spread of a new kind of computing that demands excessive amounts of electricity and water to build as well as operate.

"In the back end, these algorithms that need to be running for any generative AI model are fundamentally very, very different from the traditional kind of Google Search or email," says Sajjad Moazeni, a computer engineering researcher at the University of Washington. "For basic services, those were very light in terms of the amount of data that needed to go back and forth between the processors." In comparison, Moazeni estimates generative AI applications are around 100 to 1,000 times more computationally intensive.

The technology's energy needs for training and deployment are no longer generative AI's dirty little secret, as expert after expert last year predicted surges in energy demand at data centers where companies work on AI applications. Almost as if on cue, Google recently stopped considering itself to be carbon neutral, and Microsoft may trample its sustainability goals underfoot in the ongoing race to build the biggest, bestest AI tools.

"The carbon footprint and the energy consumption will be linear to the amount of computation you do, because basically these data centers are being powered proportional to the amount of computation they do," says Junchen Jiang, a networked systems researcher at the University of Chicago. The bigger the AI model, the more computation is often required, and these frontier models are getting absolutely gigantic.

Even though Google's total energy consumption doubled from 2019 to 2023, Corina Standiford, a spokesperson for the company, said it would not be fair to state that Google's energy consumption spiked during the AI race. "Reducing emissions from our suppliers is extremely challenging, which makes up 75 percent of our footprint," she says in an email. The suppliers that Google blames include the manufacturers of servers, networking equipment, and other technical infrastructure for the data centers, an energy-intensive process that is required to create physical parts for frontier AI models.


This U.S. company is helping arm Ukraine against Russia with AI drones – NPR

Palmer Luckey launched his first tech company as a teenager. That was Oculus, the virtual reality headset for gaming. Soon after, he sold it to Facebook for $2 billion.

Now 31, Luckey has a new company called Anduril that's making artificial intelligence weapons. The Pentagon is buying them, keeping some for itself and sending others to Ukraine.

The weapons could be instrumental in helping Ukraine stand up to Russia.

Ukraine needs more weapons and better weapons to fight against Russia. Could AI weapons made by a billionaire tech entrepreneur's company hold the answer?

For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org.

Email us at considerthis@npr.org.

Palmer Luckey, 31, founder of Anduril Industries, stands in front of the Dive-LD, an autonomous underwater drone, at company headquarters in Costa Mesa, Calif. Anduril recently won a U.S. Navy contract to build 200 of them annually. (Photo: Philip Cheung for NPR)


This episode was produced by Kathryn Fink and Jonaki Mehta. It was edited by Courtney Dorning and Andrew Sussman. Our executive producer is Sami Yenigun.


OpenAI defines five 'levels' for AI to reach human intelligence, and it's almost at level 2 – Quartz

OpenAI CEO Sam Altman at the AI Insight Forum in the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, D.C. Photo: Chip Somodevilla ( Getty Images )

OpenAI is undoubtedly one of the leaders in the race to reach human-level artificial intelligence, and it's reportedly four steps away from getting there.


The company shared a five-level system it developed to track its artificial general intelligence, or AGI, progress with employees this week, an OpenAI spokesperson told Bloomberg. The levels go from the currently available conversational AI to AI that can perform the same amount of work as an organization. OpenAI will reportedly share the levels with investors and people outside the company.

While OpenAI executives believe the company is on the first level, the spokesperson said it is close to level two, which is defined as "Reasoners": AI that can perform basic problem-solving and is on the level of a human with a doctorate degree but no access to tools. The third level of OpenAI's system is reportedly called "Agents," and is AI that can perform different actions for several days on behalf of its user. The fourth level is reportedly called "Innovators," and describes AI that can help develop new inventions.

OpenAI leaders also showed employees a research project with GPT-4 that demonstrated it has human-like reasoning skills, Bloomberg reported, citing an unnamed person familiar with the matter. The company declined to comment further.

The system was reportedly developed by OpenAI executives and leaders, who can eventually change the levels based on feedback from employees, investors, and the company's board.

In May, OpenAI disbanded its Superalignment team, which was responsible for working on the problem of AI's existential dangers. The company said the team's work would be absorbed by other research efforts across OpenAI.


The sperm whale ‘phonetic alphabet’ revealed by AI – BBC.com


By Katherine Latham and Anna Bressanin

Researchers studying sperm whale communication say they've uncovered sophisticated structures similar to those found in human language.

"At 1,000m (3,300ft) deep, many of the group will be facing the same way, flanking each other but across an area of several kilometres," says Young. "During this time they're talking, clicking the whole time." After about an hour, she says, the group rises to the surface in synchrony. "They'll then have their rest phase. They might be at the surface for 15 to 20 minutes. Then they'll dive again," she says.

At the end of a day of foraging, says Young, the sperm whales come together at the surface and rub against each other, chatting while they socialise. "As researchers, we don't see a lot of their behaviour because they don't spend that much time at the surface," she says. "There's masses we don't know about them, because we are just seeing a tiny little snapshot of their lives during that 15 minutes at the surface."

It was around 47 million years ago that land-roaming cetaceans began to gravitate back towards the ocean; that's 47 million years of evolution in an environment alien to our own. How can we hope to easily understand creatures that have adapted to live and communicate under such different evolutionary pressures to ourselves?

"It's easier to translate the parts where our world and their world overlap like eating, nursing or sleeping," says David Gruber, lead and founder of the Cetacean Translation Initiative (Ceti) and professor of biology at the City University of New York. "As mammals, we share these basics with others. But I think it's going to get really interesting when we try to understand the areas of their world where there's no intersection with our own," he says.

Sperm whales live in multi-level, matrilineal societies: groups of daughters, mothers and grandmothers, while the males roam the oceans, visiting the groups to breed. They are known for their complex social behaviour and group decision-making, which requires sophisticated communication. For example, they are able to adapt their behaviour as a group when protecting themselves from predators like orcas or humans.

Sperm whales communicate with each other using rhythmic sequences of clicks, called codas. It was previously thought that sperm whales had just 21 coda types. However, after studying almost 9,000 recordings, the Ceti researchers identified 156 distinct codas. They also noticed the basic building blocks of these codas, which they describe as a "sperm whale phonetic alphabet", much like phonemes, the units of sound in human language which combine to form words.

Pratyusha Sharma, a PhD student at MIT and lead author of the study, describes the "fine-grain changes" in vocalisations the AI identified. Each coda consists of between three and 40 rapid-fire clicks. The sperm whales were found to vary the overall speed, or the "tempo", of the codas, as well as to speed up and slow down during the delivery of a coda, in other words, making it "rubato". Sometimes they added an extra click at the end of a coda, akin, says Sharma, to "ornamentation" in music. These subtle variations, she says, suggest sperm whale vocalisations could carry a much richer amount of information than previously thought.

"Some of these features are contextual," says Sharma. "In human language, for example, I can say 'what' or 'whaaaat!?'. It's the same word, but to understand the meaning you have to listen to the whole sound," she says.

The researchers also found the sperm whale "phonemes" could be used in a combinatorial fashion, allowing the whales to construct a vast repertoire of distinct vocalisations. The existence of a combinatorial coding system, write the report authors, is a prerequisite for "duality of patterning", a linguistic phenomenon thought to be unique to human language in which meaningless elements combine to form meaningful words.

However, Sharma emphasises, this is not something they have any evidence of as yet. "What we show in sperm whales is that the codas themselves are formed by combining from this basic set of features. Then the codas get sequenced together to form coda sequences." Much like humans combine phonemes to create words, and then words to create sentences.
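The features described above, inter-click intervals, overall tempo, and rubato, can all be computed directly from click timestamps. A minimal sketch with hypothetical timestamps (the Ceti recordings themselves are not reproduced here):

```python
# Hypothetical click timestamps (seconds) for a single coda;
# illustrative values only, not actual Ceti data.
clicks = [0.00, 0.12, 0.25, 0.39, 0.54]

# Inter-click intervals (ICIs): the coda's raw rhythmic structure
icis = [b - a for a, b in zip(clicks, clicks[1:])]

# "Tempo": overall delivery speed, here measured as clicks per second
duration = clicks[-1] - clicks[0]
tempo = len(clicks) / duration

# "Rubato": systematic lengthening (or shortening) of ICIs across the coda
rubato = icis[-1] - icis[0]

print(f"ICIs: {[round(i, 3) for i in icis]}")
print(f"tempo: {tempo:.2f} clicks/s, rubato: {rubato:+.3f} s")
```

In this toy coda the intervals lengthen steadily, so the rubato value is positive, the kind of fine-grain change Sharma describes.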

So, what does all this tell us about sperm whales' intelligence? Or their ability to reason, or store and share information?

"Well, it doesn't tell us anything yet," says Gruber. "Before we can get to those amazing questions, we need to build a fundamental understanding of how [sperm whales communicate] and what's meaningful to them. We see them living very complicated lives, the coordination and sophistication in their behaviours. We're at base camp. This is a new place for humans to be just give us a few years. Artificial intelligence is allowing us to see deeper into whale communication than we've ever seen before."

But not everyone is convinced, with experts warning of an anthropocentric focus on language which risks forcing us to view things from one perspective.

Young, though, describes the research as an "incremental step" towards understanding these giants of the deep. "We're starting to put the pieces of the puzzle together," she says. And perhaps if we could listen and really understand something like how important sperm whales' grandmothers are to them, something that resonates with humans, she says, we could drive change in human behaviour in order to protect them.

Categorised as "vulnerable" by the International Union for Conservation of Nature (IUCN), sperm whales are still recovering from commercial hunting by humans in the 19th and 20th Centuries. And, although such whaling has been banned for decades, sperm whales face new threats such as climate change, ocean noise pollution and ship strikes.

However, Young adds, we're still a long way off from understanding what sperm whales might be saying to each other. "We really have no idea. But the better we can understand these amazing animals, the more we'll know about how we can protect them."



The AI summer – Benedict Evans

A lot of these charts are really about what happens when the utopian dreams of AI maximalism meet the messy reality of consumer behaviour and enterprise IT budgets - it takes longer than you think, and it's complicated (this is also one reason why I think doomers are naive). The typical enterprise IT sales cycle is longer than the time since ChatGPT 3.5 was launched, and Morgan Stanley's latest CIO survey says that 30% of big-company CIOs don't expect to deploy anything before 2026. They might be being too cautious, but the cloud adoption chart above (especially the expectation data) suggests the opposite. Remember, also, that the Bain "production" data only means that this is being used for something, somewhere, not that it's taken over your workflows.

Stepping back, though, the very speed with which ChatGPT went from a science project to 100m users might have been a trap (a little like NLP was for Alexa). LLMs look like they work, and they look generalised, and they look like a product - the science of them delivers a chatbot, and a chatbot looks like a product. You type something in and you get magic back! But the magic might not be useful, in that form, and it might be wrong. It looks like a product, but it isn't.

Microsoft's failed and forgotten attempt to bolt this onto Bing and take on Google at the beginning of last year is a good microcosm of the problem. LLMs look like better databases, and they look like search, but, as we've seen since, they're wrong enough, and the wrong is hard enough to manage, that you can't just give the user a raw prompt and a raw output - you need to build a lot of dedicated product around that, and even then it's not clear how useful this is. Firing LLM web search out of the gate was falling into that trap. Satya Nadella said he wanted to make Google dance, but ironically the best way to compete with Bing Copilot might have been to sit it out - to wait, watch, learn, and work this through before launching anything (if Wall Street had allowed that, of course).

The rush to bolt this into search came from competitive pressure, and stock market pressure, but more fundamentally from the sense that this is the next platform shift and you have to grab it with both hands. That's much broader than Google. The urgency is accelerated by that "standing on the shoulders of giants" moment - you don't have time to wait for people to buy devices - and by the way these things look like finished products. And meanwhile, the firehose of cash that these companies produced in the last decade has collided with the enormous capital-intensity of cutting-edge LLMs like matter meeting anti-matter.

In other words: these things are the future and will change everything, right now, and they need all this money, and we have all this money.

As a lot of people have now pointed out, all of that adds up to a stupefyingly large amount of capex (and a lot of other investment too) being pulled forward for a technology that's mostly still only in the experimental budgets.

Link:

The AI summer - Benedict Evans


Intuits CEO continues to bet the company on AI and data – Fortune

Good morning. Big tech companies are readjusting personnel in the age of artificial intelligence. This includes Google, which informed its employees in April that it is restructuring its finance team to redistribute resources toward AI. The latest example is software giant Intuit.

I reported yesterday that the Fortune 500 company, known for products like QuickBooks, Credit Karma, and TurboTax, is laying off approximately 1,800 of its global employees, which amounts to 10% of its workforce and includes some executives. CEO Sasan Goodarzi wrote an email to employees announcing "the very difficult decisions my leadership team and I have made."

Goodarzi wrote that Intuit's transformation journey, including parting with the 1,800 employees, is part of its strategy to increase investments in priority focus areas. Those areas include AI and generative AI, like its GenAI-powered financial assistant, Intuit Assist, while Intuit at the same time reimagines its products from traditional workflows to AI-native experiences. The strategy also focuses on money movement, mid-market expansion for small businesses, and international growth.

"We do not do layoffs to cut costs, and that remains true in this case," Goodarzi wrote. Intuit plans to hire approximately 1,800 new people with strategic functional skill sets, primarily in engineering, product, and customer-facing roles such as sales, customer success, and marketing, and expects its overall headcount to grow in its fiscal year 2025, which begins Aug. 1.

Of the employees who will depart Intuit, 1,050 are not meeting expectations based on a formal performance management process, according to the company. And it's reducing the number of executives (directors, SVPs, and EVPs) by approximately 10%, while expanding certain executive roles and responsibilities.

All departing U.S. employees will receive a package that includes a minimum of 16 weeks of pay, and two additional weeks for every year of service. They will have 60 days before they leave the company, with a last day of Sept. 9. Employees outside the U.S. will receive similar support, according to the company.

Intuit earned $14.4 billion in revenue in its fiscal year 2023, moving up 24 spots on the Fortune 500. For the period ending April 30, Intuit reported revenue of $6.7 billion, up 12%.

AI is beginning to fundamentally change business, according to McKinsey. Interest in generative AI has intensified the spotlight on a broader set of AI capabilities at organizations. The firm's recently published global survey finds AI adoption has risen this year to 72%; for the past six years, AI adoption by respondents' organizations had hovered at about 50%. Half of respondents said their companies have adopted AI in two or more business functions, up from less than a third of respondents who said the same in 2023.

In September, my Fortune colleague Geoff Colvin reported on Intuit's massive strategy reset putting AI at the center of the business. Colvin wrote that Intuit has a long AI head start against its competitors, including H&R Block, Cash App, TaxSlayer, Xero, FreshBooks, and others. The company is hoping its early investment will produce a network effect, in which good AI-generated recommendations attract more customers, bringing in more data, improving the company's products, and thereby attracting still more customers.

Goodarzi told Colvin his objective since becoming CEO in 2019: "The decision I made was, as a team, we're going to bet the company on data and AI."

Sheryl Estrada sheryl.estrada@fortune.com

Monish Patolawala was named EVP and CFO at ADM (NYSE: ADM) effective Aug. 1, succeeding Ismael Roig, who has been serving as ADM's interim CFO since January. ADM CFO Vikram Luthar resigned amid an investigation of accounting issues. Patolawala brings to ADM more than 25 years of experience. He most recently served as president and CFO of 3M Company. Before 3M, Patolawala spent more than two decades at GE in various finance roles, including as CFO of GE Healthcare and as head of operational transformation for all of GE.

Gordon Brooks was named interim CFO at Eli Lilly (NYSE: LLY), effective July 15, according to an SEC filing. Brooks is currently Lilly's group vice president, controller and corporate strategy. Anat Ashkenazi resigned as CFO and EVP at Lilly in June and will join Alphabet Inc. as CFO. Brooks has worked at Lilly for almost 30 years, serving in several divisional CFO roles.

Grant Thornton has released its Q2 2024 CFO survey, which finds that 58% of CFOs surveyed are optimistic about the U.S. economy. Another key finding is that CFOs continue to prioritize AI and technology.

The portion of CFOs who are either using generative AI or exploring potential uses rose to an all-time high of 94% in the Q2 survey, compared to previous quarters, according to Grant Thornton. Of those using generative AI, 74% said it's being applied to data analysis and business intelligence in Q2, compared to 66% who said the same in Q1. And in Q2, 63% said they are deploying generative AI to assist with cybersecurity and risk management, compared to 47% in the previous quarter.

"The business environment is ripe for growth, but CFOs must manage costs to capitalize on it," Paul Melville, national managing principal of CFO Advisory for Grant Thornton, said in a statement.

The findings are based on a survey of more than 225 senior financial leaders.

"Pulse on Workforce Strategy: Biggest Concerns and Key Factors Driving Investment Decisions" is a new report by global consulting firm RGP. Some key findings: 81% of financial decision-makers surveyed are planning to increase investment in workforce development this year, and 80% said their organization is currently investing in one or more digital transformation initiatives.

The data is based on a survey of 213 CFOs and finance leaders at the director level or above at U.S. companies earning from $50 million to more than $500 million in revenue.

"The future of AI is not predestined; it is ours to shape."

Steve Hasker, the CEO of Thomson Reuters, writes in a Fortune opinion piece that knowledge workers don't seem to think AI will replace them, but they expect it to save them 4 hours a week in the next year.

Read the original:

Intuits CEO continues to bet the company on AI and data - Fortune


JPMorgan Chase Invests in Infrastructure, AI to Boost Market Share – PYMNTS.com

J.P. Morgan Chase is reportedly enhancing its competitive capabilities to remain the biggest bank in the United States.

The bank is modernizing its infrastructure and data, and using artificial intelligence and payments, Marianne Lake, CEO of consumer and community banking at J.P. Morgan Chase, told Reuters in an interview posted Thursday (July 11).

"These investments will ensure that we continue to be the leader even five to 10 years from now," Lake said, per the report.

Lake also said J.P. Morgan aims to boost its market share, increasing its share of U.S. retail deposits from 11.3% to 15% and its share of the nations spending on its credit cards from 17% to 20%, according to the report.

"While we are not putting any timeline on it, our strategies are geared towards achieving it," Lake said, per the report.

J.P. Morgan added $92 billion in deposits with its acquisition of failed bank First Republic last year, the report said. Federal law prohibits banks that hold 10% of U.S. deposits from growing through acquisitions, unless they're buying a failed bank.

Lake said J.P. Morgan would do so again if it was important to the ecosystem, adding that she did not hope for more bank failures, according to the report.

With J.P. Morgan set to report its earnings Friday (July 12), industry observers are watching for any news of a potential successor to CEO Jamie Dimon, who has served in that role since 2006, the report said.

The bank's board has said that Lake is one of four potential successors to Dimon, per the report.

It was reported in February that J.P. Morgan plans to open more than 500 new bank branches over the next three years, expanding its presence in areas where it lacks representation.

The bank already has the largest branch network, with 4,897 branches. It added 650 new ones over the previous five years.

Lake said at the time that J.P. Morgan had less than a 5% branch share in 17 of the top 50 markets it aims to expand into.

The bank's earnings report Friday will arguably set the tone for the macro-outlook governing consumer spending and business resilience, PYMNTS reported Monday (July 8).

Read this article:

JPMorgan Chase Invests in Infrastructure, AI to Boost Market Share - PYMNTS.com
