Nvidia's market value closed on Friday above $2tn for the first time, with enthusiasm about the prospects of artificial intelligence fuelling an eighth straight week of gains for the chipmaker's shares.
Apple, Microsoft and Google-parent Alphabet are the other US-listed companies to have reached intraday market values of $2tn, but only the first two have ended a trading day with valuations above that threshold.
Nvidia shares rose 4 per cent on Friday, giving it a valuation of about $2.05tn. Its share price has now climbed 66 per cent since the start of 2024, a gain of about $830bn in market value. That followed a more than 230 per cent increase in 2023, as the company repeatedly blasted through analyst and investor forecasts.
In its most recent financial update last month, Nvidia reported a 265 per cent year-on-year increase in revenues, and chief executive Jensen Huang declared that AI had "hit the tipping point", with surging demand across companies, industries and nations.
The tech group added $277bn in market capitalisation on the day after the results, a record for a US-listed company.
"Nvidia has an almost monopoly position," said Tim Murray, multi-asset strategist at T Rowe Price, "because the chips they make are the most essential tools to [AI]."
Nvidia's latest earnings report, coupled with broader enthusiasm about the potential of AI technology, has helped to fuel a wider rally across global stock markets, with Wall Street's S&P 500 hitting multiple new records and the tech-heavy Nasdaq Composite surpassing levels seen in 2021 to hit a peak on Friday.
The chipmaker has single-handedly driven more than a quarter of the year-to-date gains in the S&P 500, directly lifting the index by 96 points even before considering the broader effect it has had on investor sentiment.
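For readers who want the arithmetic, a rough sketch of how a single stock's market-cap move becomes index points is below. The divisor is an assumed, illustrative figure (the real S&P 500 divisor is proprietary and float-adjusted), so the snippet only shows that an $830bn gain lands in the same ballpark as the roughly 96 points cited above.

```python
# Back-of-the-envelope: how a change in one stock's market capitalisation
# translates into points on a cap-weighted index such as the S&P 500.
# The divisor below is an assumed, illustrative value; the real divisor is
# proprietary, changes over time, and the index uses float-adjusted caps.

ASSUMED_DIVISOR = 8.4e9  # illustrative only

def index_points(cap_change_usd: float, divisor: float = ASSUMED_DIVISOR) -> float:
    """Index points contributed by a change in one constituent's market cap."""
    return cap_change_usd / divisor

nvidia_gain = 830e9  # roughly $830bn added since the start of 2024 (figure above)
print(f"Approximate direct contribution: {index_points(nvidia_gain):.0f} points")
# Lands in the same ballpark as the ~96 points cited above.
```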
"Nvidia's earnings were always going to be this barometer of what's the demand for AI chips," said Murray.
This year's dramatic ascent of Nvidia's shares and those of other tech stocks riding the wave of AI enthusiasm has sparked debate over whether the AI boom may be approaching bubble territory.
"We're in a period where with AI there's a lot of excitement and we've probably got some time before we really have to see it proven," said Murray. "There's going to be a period eventually where the companies that are spending on AI need to realise some return on investment."
"You've certainly got some time before there's this moment of truth for the AI craze," he added.
Zehrid Osmani, a portfolio manager at Martin Currie with a large investment in Nvidia, said many stocks had been rallying based only on the hope that AI enthusiasm will lead to future earnings, but Nvidia's strength in graphics processing units made it one of the stocks that is "genuinely monetising".
"Yes, in due course there could be more competition, but if you look at the scale of their [research and development] spending ... we believe they should be able to keep their technological edge," he said.
For Kristina Hooper, global chief markets strategist at Invesco, Nvidia has "captured imagination" while providing "some real underpinning to those imaginations and that excitement".
The late 1990s was "a very similar time period for the stock market", Hooper added, "in that there was a lot of excitement over technology. However, there wasn't that fundamental underpinning: there weren't real earnings, there weren't solid cash flows."
"It was really very much excitement ... Sizzle without steak," she said.
"This time around, there's sizzle but there's also steak."
IT HAS BEEN nearly a year since OpenAI released GPT-4, its most sophisticated artificial-intelligence model and the brain-of-sorts behind ChatGPT, its groundbreaking robot conversationalist. In that time the market capitalisation of America's technology industry, broadly defined, has risen by half, creating $6trn in shareholder value. For some tech firms, growing revenue is starting to match sky-high share prices. On February 21st Nvidia, which designs chips used to train and run models like GPT-4, reported bumper fourth-quarter results, sending its market value towards $2trn. AI mania has also lifted the share prices of other tech giants, including Alphabet (Google's corporate parent), Amazon and Microsoft, which are spending big on developing the technology.
At the same time, big tech's sales of AI software remain small. In the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsoft's cloud-computing division, and related services. Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft. For the AI stockmarket boom to endure, these firms will at some point need to make serious money from selling their services to clients. Businesses across the world, from banks and consultancies to film studios, have to start using ChatGPT-like tools on a large scale. When it comes to real-world adoption of such generative AI, companies have trodden gingerly. Yet even these baby steps hint at the changing nature of white-collar work.
Previous technological breakthroughs have revolutionised what people do in offices. The spread of the typewriter put some workers out of a job: "With the aid of this little machine an operator can accomplish more correspondence in a day than half a dozen clerks can with the pen, and do better work," said an observer in 1888. The rise of the computer about a century later eliminated some low-level administrative tasks even as it made highly skilled employees more productive. According to one paper, the computer explains over half the shift in demand for labour towards college-educated workers from the 1970s to the 1990s. More recently the rise of working from home, prompted by the covid-19 pandemic and enabled by video-conferencing, has changed the daily rhythms of white-collar types.
Could generative AI prompt similarly profound changes? A lesson of previous technological breakthroughs is that, economywide, they take ages to pay off. The average worker at the average firm needs time to get used to new ways of working. The productivity gains from the personal computer did not come until at least a decade after it became widely available. So far there is no evidence of an AI-induced productivity surge in the economy at large. According to a recent survey from the Boston Consulting Group (BCG), a majority of executives said it will take at least two years to "move beyond the hype" around AI. Recent research by Oliver Wyman, another consultancy, concludes that adoption of AI has not necessarily translated into higher levels of productivity, at least not yet.
That is unsurprising. Most firms do not currently use ChatGPT, Google's Gemini, Microsoft's Copilot or other such tools in a systematic way, even if individual employees play around with them. A fortnightly survey by America's Census Bureau asks tens of thousands of businesses whether they use some form of AI. This includes the newfangled generative sort and the older type that companies were using before 2023 for everything from improving online search results to forecasting inventory needs. In February only about 5% of American firms of all sizes said they used AI. A further 7% of firms plan to adopt it within six months (see chart). And the numbers conceal large differences between sectors: 17% of firms in the information industry, which includes technology and media, say they use it to make products, compared with 3% of manufacturers and 5% of health-care companies.
When the Census Bureau began asking about AI in September 2023, small firms were likelier to use the technology than big ones, perhaps because less form-ticking made adoption easier for minnows. Today AI is most prevalent in big companies (with more than 250 employees), which can afford to enlist dedicated AI teams and to pay for necessary investments. A poll of large firms by Morgan Stanley, a bank, found that between the start and end of 2023 the share with pilot AI projects rose from 9% to 23%.
Some corporate giants are frantically experimenting to see what works and what doesn't. They are hiring AI experts by the thousand, suggest data from Indeed, a job-search platform (see chart). Last year Jamie Dimon, boss of JPMorgan Chase, said that the bank already had more than 300 AI use cases "in production today". Capgemini, a consultancy, says it will utilise Google Cloud's generative AI to develop "a rich library of more than 500 industry use cases". Bayer, a big German chemicals company, claims to have more than 700 use cases for generative AI.
This "use-case sprawl", as one consultant calls it, can be divided into three big categories: window-dressing, tools for workers with low to middling skills, and those for a firm's most valuable employees. Of these, window-dressing is by far the most common. Many firms are rebranding run-of-the-mill digitisation efforts as gen AI programmes to sound more sophisticated, says Kristina McElheran of the University of Toronto. Presto, a purveyor of restaurant tech, introduced a gen-AI assistant to take orders at drive-throughs. But fully 70% of such orders require a human to help. Spotify, a music-streaming firm, has rolled out an AI disc-jockey which selects songs and provides inane banter. Recently Instacart, a grocery-delivery company, removed a tool that generated photos of vendors' food, after the AI showed customers unappetising pictures. Big tech firms, too, are incorporating their own AI breakthroughs into their consumer-facing offerings. Amazon is launching Rufus, an AI-powered shopping assistant that no shopper really asked for. Google has added AI to Maps, making the product "more immersive", whatever that means.
Tools for lower-skilled workers could be more immediately useful. Some simple applications for things like customer service involve off-the-shelf AI. Most customers' questions are simple and concern a small number of topics, making it easy for companies to train chatbots to deal with them. A few of these initiatives may already be paying off. Amdocs produces software to help telecoms companies manage their billing and customer services. The use of generative AI, the company says, has reduced the handling time of customers' calls by almost 50%. Sprinklr, which offers similar products, says that recently one of its luxury-goods clients has seen a 25% improvement in customer-service scores.
Routine administrative tasks likewise look ripe for AI disruption. The top examples of Bayer's 700 use cases include mundane jobs such as "easily getting data from Excel files" and "creating a first draft in Word". Some companies are using generative AI as cleverer search. At Nasdaq, a financial-services firm, it helps financial-crime sleuths gather evidence to assess suspicious bank transactions. According to the company, this cuts a process which can take 30-60 minutes down to three minutes.
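The "cleverer search" pattern described above generally amounts to retrieval plus generation: index internal documents, pull the passages most relevant to an investigator's question, and pass them to a language model as context. Nasdaq has not published its implementation, so the sketch below is illustrative only, with a TF-IDF retriever from scikit-learn, invented case notes, and a placeholder generate_answer function standing in for whatever LLM is actually called.

```python
# Sketch of "cleverer search": retrieve the internal documents most relevant to
# a question, then hand them to a language model as context. The retrieval step
# uses TF-IDF from scikit-learn; generate_answer() is a placeholder for
# whichever LLM API a firm actually calls (Nasdaq's system is not public).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # invented case notes, for illustration only
    "Wire of $950,000 flagged: first-time beneficiary in a high-risk jurisdiction.",
    "Onboarding notes: business registered in 2012, steady payroll activity since.",
    "Suspicious-activity report filed in 2022 for structuring-like cash deposits.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def generate_answer(question: str, context: list[str]) -> str:
    # Placeholder: a real system would send this assembled prompt to an LLM.
    return "Evidence:\n" + "\n".join(context) + "\n\nQuestion: " + question

print(generate_answer("Is the flagged wire consistent with this customer's history?",
                      retrieve("unusual wire transfer to high-risk jurisdiction")))
```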
Giving AI tools to a firm's most valuable workers, whose needs are complex, is less widespread so far. But it, too, is increasingly visible. Lawyers have been among the earliest adopters. Allen & Overy, a big law firm, teamed up with Harvey, an AI startup, to develop a system that its lawyers use to help with everything from due diligence to contract analysis. Investment banks are using AI to automate part of their research process. At Bank of New York Mellon an AI system processes data for the bank's analysts overnight and gives them a rough draft to work with in the morning. "So rather than getting up at four in the morning to write research, they get up at six," the bank says. Small mercies. Sanofi, a French drugmaker, uses an AI app to provide executives with real-time information about many aspects of the company's operations.
Some companies are using the technology to build software. Microsoft's GitHub Copilot, an AI code-writing tool, has 1.3m subscribers. Amazon and Google have rival products. Apple is reportedly working on one. Fortive, a technology conglomerate, says that its operating companies are seeing a greater-than-20% acceleration in software-development time through the use of gen AI. Chirantan Desai, chief operating officer of ServiceNow, a business-software company, has said that GitHub Copilot produces single-digit productivity gains for his firm's developers. With the help of AI tools, Konnectify, an Indian startup, went from releasing four apps per month to seven. Surveys from Microsoft suggest that few people who start using Copilot want to give it up.
Pinterest, a social-media company, says it has improved the relevance of users' search results by ten percentage points thanks to generative AI. On a recent earnings call its boss, Bill Ready, said that new models were 100 times bigger than the ones his firm used before. L'Oréal, one of the world's largest cosmetics firms, has caught the eye of investors as it improves BetIQ, an internal tool to measure and improve the company's advertising and promotion. L'Oréal claims that generative AI is already generating "productivity increases of up to 10-15% for some of our brands that have deployed it".
This does not mean that those brands will need 10-15% fewer workers. As with earlier technological revolutions, fears of an AI jobs apocalypse look misplaced. So far the technology appears to be creating more jobs than it eliminates. A survey published in November by Evercore ISI, a bank, found that just 12% of corporations believed that generative AI had replaced human labour or would replace it within 12 months. Although some tech firms claim to be freezing hiring or cutting staff because of AI, there is little evidence of rising lay-offs across the rich world.
Generative AI is also generating new types of white-collar work. Companies including Nestlé, a coffee-to-cat-food conglomerate, and KPMG, a consultancy, are hiring prompt engineers expert at eliciting useful responses from AI chatbots. One insurance firm employs "explainability engineers" to help understand the outputs of AI systems. A consumer-goods firm that recently introduced generative AI in its sales team now has a "sales-bot manager" to keep an eye on the machines.
Though such developments will not translate into overall productivity statistics for a while, they are already affecting what white-collar workers do. Some effects are clearly good. AI lets firms digitise and systematise internal data, from performance reviews to meeting records, that had previously remained scattered. Respondents to surveys conducted by Randy Bean, a consultant, reported big improvements in establishing an internal data and analytics culture, which plenty of businesses find stubbornly difficult to nurture.
AI adoption may also have certain unpredictable consequences. Although AI code-writing tools are helping software engineers do their jobs, a report from GitClear, a software firm, found that in the past year or so the quality of such work has declined. Programmers may be using AI to produce a first draft only to discover that it is full of bugs or lacking concision. As a result, they could be spending less time writing code, but more time reviewing and editing it. If other companies experience something similar, the quantity of output in the modern workplace may go up, as AI churns out more emails and memos, even as that output becomes less useful for getting stuff done.
Polling by IBM, a tech firm, suggests that many companies are cagey about adopting AI because they lack internal expertise on the subject. Others worry that their data is too siloed and complex to be brought together. About a quarter of American bosses ban the use of generative AI at work entirely. One possible reason for their hesitance is worry about their companies' data. In their annual reports Blackstone, a private-equity giant, and Eli Lilly, a pharmaceutical one, have warned investors about AI-related risks such as possible leakage of intellectual property to AI model-makers. Last year Marie-Hélène Briens Ware, an executive at Orange, a telecoms company, explained that the firm had put data guardrails in place before commencing a trial with Microsoft's Copilot.
Ultimately, for more businesses to see it as an open-and-shut case, generative AI still needs to improve. In November Microsoft launched a Copilot for its productivity software, such as Word and Excel. Some early users find it surprisingly clunky and prone to crashing, not to mention cumbersome, even for people already adept at Office. Many bosses remain leery of using generative AI for more sensitive operations until the models stop making things up. Recently Air Canada found itself in hot water after its AI chatbot gave a passenger incorrect information about the airline's refund policy. That was embarrassing for the carrier, but it is easy to imagine something much worse. Still, even the typewriter had to start somewhere.
There's some in-fighting going on in the artificial intelligence (AI) world, and one prominent billionaire claims the future of the human race is at stake. Elon Musk is taking legal action against Microsoft-backed OpenAI and its CEO, Sam Altman, alleging the company has strayed from its original mission to develop artificial intelligence for the collective benefit of humanity.
Musk's attorneys filed a lawsuit on Thursday (Feb. 29) in San Francisco, asserting that in 2015, Altman and Greg Brockman, co-founders of OpenAI, approached Musk to assist in establishing a nonprofit focused on advancing artificial general intelligence for the betterment of humanity.
Although Musk helped initiate OpenAI in 2015, he departed from its board in 2018. Previously, in 2014, he had voiced concerns about the risks associated with AI, suggesting it could pose more significant dangers than nuclear weapons.
The lawsuit highlights that OpenAI, Inc. still claims on its website to prioritize ensuring that artificial general intelligence "benefits all of humanity". However, the suit contends that in reality, OpenAI, Inc. has evolved into a closed-source entity effectively operating as a subsidiary of Microsoft, the world's largest technology company.
When it comes to cybersecurity, AI brings both risks and rewards. Google CEO Sundar Pichai and other industry leaders say artificial intelligence is key to enhancing online security. AI can accelerate and streamline the management of cyber threats. It leverages vast datasets to identify patterns, automating early incident analysis and enabling security teams to quickly gain a comprehensive view of threats, thus hastening their response.
Lenovo CTO Timothy E. Bates told PYMNTS that AI-driven tools, such as machine learning for anomaly detection and AI platforms for threat intelligence, are pivotal. Deep learning technologies dissect malware to decipher its composition and potentially deconstruct attacks. These AI systems operate behind the scenes, learning from attacks to bolster defense and neutralize future threats.
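Lenovo has not published its tooling, but "machine learning for anomaly detection" in security usually means fitting a model to examples of normal activity and flagging events that deviate from it. Below is a minimal sketch using scikit-learn's IsolationForest on invented login-event features.

```python
# Toy anomaly-detection sketch for security telemetry using an Isolation Forest.
# Features per login event (all values invented for illustration):
# [hour_of_day, megabytes_transferred, failed_attempts]
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" behaviour: daytime logins, modest transfers, few failed attempts.
normal = np.column_stack([
    rng.normal(13, 3, 500),   # hour of day
    rng.normal(20, 5, 500),   # MB transferred
    rng.poisson(0.2, 500),    # failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new events: one ordinary, one suspicious (3 a.m., huge transfer, many failures).
events = np.array([[14, 22, 0],
                   [3, 900, 7]])
print(model.predict(events))  # 1 = looks normal, -1 = flagged as anomalous
```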
With the global shift toward a connected economy, cybercrime is escalating, causing significant financial losses, including an estimated $10.3 billion in the U.S. alone in 2022, according to the FBI.
Get set for lots more books that are authored or co-authored by AI. Inkitt, a startup leveraging artificial intelligence (AI) to craft books, has secured $37 million. Inkitt's app enables users to self-publish their narratives. By employing AI and data analytics, it selects stories for further development and markets them on its Galatea app.
This technological shift offers both opportunities and challenges.
Zachary Weiner, CEO of Emerging Insider Communications, which focuses on publishing, shared his insights on the impact of AI on writing with PYMNTS. "Writers gain significantly from the vast new toolkit AI provides, enhancing their creative process with AI-generated prompts and streamlining tasks like proofreading. AI helps them overcome traditional brainstorming limits, allowing for the fusion of ideas into more intricate narratives. It simplifies refining their work, letting them concentrate on their primary tasks."
But he warns of the pitfalls AI introduces to the publishing world. "AI is making its way into all aspects of writing and content creation, posing a threat to editorial roles," he said. "The trend towards replacing human writers with AI for cost reduction and efficiency gains is not just a possibility but a current reality."
The robots are coming, and they are getting smarter. New advancements in artificial intelligence (AI) are making it possible for companies to create robots with better features and improved abilities to interact with humans.
Figure AI has raised $675 million to develop AI-powered humanoid robots. Investors include Jeff Bezos' Explore Investments and tech giants like Microsoft, Amazon, Nvidia, OpenAI, and Intel. Experts say this investment shows a growing interest in robotics because of AI.
According to Sarah Sebo, an assistant professor of computer science at the University of Chicago, AI can help robots understand their surroundings better, recognize objects and people more accurately, communicate more naturally with humans and improve their abilities over time through feedback.
Last March, Figure AI introduced the Figure 01 robot, designed for various tasks, from industrial work to household chores. Equipped with AI, this robot mimics human movements and interactions.
The company hopes these robots will take on risky or repetitive tasks, allowing humans to focus on more creative work.
The AI industry is undergoing a significant transformation with growing interest in more efficient and cost-effective models, emblematic of a broader trend in technological advancement. In the vanguard is Mistral AI, an innovator and trailblazer. Their commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to develop trustworthy, scalable, and responsible AI solutions.
Today, we are announcing a multi-year partnership between Microsoft and Mistral AI, a recognized leader in generative artificial intelligence. Both companies are fueled by a steadfast dedication to innovation and practical applications, bridging the gap between pioneering research and real-world solutions.
This partnership with Microsoft gives Mistral AI access to Azure's cutting-edge AI infrastructure to accelerate the development and deployment of its next-generation large language models (LLMs), and represents an opportunity for Mistral AI to unlock new commercial opportunities, expand to global markets, and foster ongoing research collaboration.
"We are thrilled to embark on this partnership with Microsoft. With Azure's cutting-edge AI infrastructure, we are reaching a new milestone in our expansion, propelling our innovative research and practical applications to new customers everywhere. Together, we are committed to driving impactful progress in the AI industry and delivering unparalleled value to our customers and partners globally."
Microsofts partnership with Mistral AI is focused on three core areas:
In November 2023, at Microsoft Ignite, Microsoft unveiled the integration of Mistral 7B into the Azure AI model catalog, accessible through Azure AI Studio and Azure Machine Learning. We are excited to announce Mistral AI's flagship commercial model, Mistral Large, available first on Azure AI and the Mistral AI platform, marking a noteworthy expansion of our offerings. Mistral Large is a general-purpose language model that can deliver on any text-based use case thanks to state-of-the-art reasoning and knowledge capabilities. It is proficient in code and mathematics, able to process dozens of documents in a single call, and handles French, German, Spanish, and Italian (in addition to English).
This latest addition of Mistral AI's premium models into Models as a Service (MaaS) within Azure AI Studio and Azure Machine Learning provides Microsoft customers with a diverse selection of the best state-of-the-art and open-source models for crafting and deploying custom AI applications, paving the way for novel AI-driven innovations.
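For developers curious what consuming such a hosted model can look like, the hedged sketch below calls a chat endpoint over plain HTTPS. The endpoint URL, key and payload shape are placeholders following a generic chat-completions pattern, not the documented Azure AI contract; the model card and technical blog mentioned later in this post remain the authoritative reference.

```python
# Hedged sketch of calling a hosted chat model over HTTPS. The endpoint URL,
# API key, and payload shape below are illustrative placeholders; consult the
# Mistral Large model card and the Azure AI documentation for the actual contract.
import requests

ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions"  # placeholder
API_KEY = "<your-azure-ai-key>"  # placeholder

payload = {
    "messages": [
        {"role": "system", "content": "You are a concise multilingual assistant."},
        {"role": "user", "content": "Summarise this contract clause in French: ..."},
    ],
    "temperature": 0.2,
    "max_tokens": 256,
}

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```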
"We have tested Mistral Large through the Azure AI Studio in a use case aimed at internal efficiency. The performance was comparable with state-of-the-art models, with even better latency. We are looking forward to exploring this technology further in our business."
"After exploring Mistral Large during its early access period, we've been impressed by its performance on medical terminology. As we continue to innovate in healthcare, we're open to collaborations that can help us and our partners grow together. Mistral AI represents an exciting opportunity for mutual advancement in artificial intelligence, both in France and internationally."
"The Mistral AI models have been crucial in enhancing productivity and collaboration at CMA CGM. Their advanced capabilities have significantly improved the performance of our internal personal assistant, MAIA. Employees are now able to quickly access and engage with information like never before. We are confident that Mistral AI on Azure is the right choice to support our employees and drive innovation across our organization."
Microsoft is committed to supporting global AI innovation and growth, offering world-class datacenter AI infrastructure, and developing technology securely to empower individuals with the skills they need to leverage AI effectively. This partnership with Mistral AI is founded on a shared commitment to build trustworthy and safe AI systems and products. It further reinforces Microsofts ongoing efforts to enhance our AI offerings and deliver unparalleled value to our customers. Additionally, the integration into AI Studio ensures that customers can utilize Azure AI Content Safety and responsible AI tools, further enhancing the security and reliability of AI solutions.
Visit the Mistral Large model card and sign in with your Azure subscription to get started with Mistral Large on Azure AI today. You can also review the technical blog to learn how to use Mistral Large on Azure AI. Visit Mistral AIs blog to get deeper insights about the model.
In November, a year after ChatGPT's release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.
The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta's generative A.I. model, called LLaMA.
There was just one twist: Some of the technology in 01.AI's system came from LLaMA. Mr. Lee's start-up then built on Meta's technology, training its system with new data to make it more powerful.
The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.
"Chinese companies are under tremendous pressure to keep abreast of U.S. innovations," said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. "The release of ChatGPT was yet another Sputnik moment that China felt it had to respond to."
Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch aren't very good, which leads many Chinese firms to use fine-tuned versions of Western models. She estimated China was two to three years behind the United States in generative A.I. developments.
2023 was a year of exciting growth and innovation here at Microsoft. This year's focus is to empower our customers and partners through AI transformation, and we're excited to share what will be an impactful lineup of events for 2024. Attending any of these events provides you with the opportunity to learn, grow and make defining connections with experts from around the world.
Expect to see enhancements in some of this years events. Azure AI-powered natural language assistants will provide personalized session recommendations, summarize content and answer your event-related questions. To meet the needs of a global audience, we are also offering options to participate in person, online and on-demand so you can choose what format works best for you. We structure these events to support the goals of our audiences and ensure that anyone attending has a great experience.
Visit our events site to find out which ones are right for you.
It's not too late to register for this ongoing series of one-day, in-person experiences around the world. These events bring together those on the cutting edge of innovation, including decision-makers, industry experts, thought leaders and developers, to focus on how AI will revolutionize work. So far, we've welcomed thousands of senior leaders and developers in six locations around the globe with keynotes highlighting the latest innovations in AI. Tour stops remain in Berlin, Paris, São Paulo and Seoul, where you can attend interactive workshops and learn how you can unlock the power of AI. Go to the Microsoft AI Tour site to sign up.
In-demand experts, distinguished engineers and developers are gathering in Seattle for our annual Microsoft Build. This celebration of technology is a chance to hear the latest announcements and get hands-on with new technology. Learn how to create new features and opportunities with AI and copilots, dive deep into the latest tech, and develop the skills that are needed for tomorrow today.
Seattle, Washington & online | May 21-23, 2024
We are evolving the event previously known as Microsoft Inspire. In July, we will kick off our fiscal year with partners in tandem with our Microsoft sellers by providing a digital engagement to share strategic priorities, investments and key program changes. We look forward to sharing more details soon.
Online | July 2024
With this change, we will also welcome partners to join us at Microsoft Ignite for an in-person experience in November to see the latest Microsoft innovations, network and celebrate the Partner of the Year Award winners.
Our biggest event of the year is getting even bigger for customers and partners, and we're returning to Chicago!
Join IT professionals, implementers, developers, architects and more in checking out the latest tech Microsoft has to offer. With demos and firsthand access to new AI solutions and copilots, this is your chance to explore the latest tools, receive deep technical training and get questions answered by Microsoft experts. We're bringing the best of our customer and partner event experiences to the Windy City and online so you can participate in the festivities and discover how AI can enhance your organization.
In addition to seeing the latest technology firsthand, senior leaders and decision-makers are invited to learn more about how to lead in the era of AI and find robust networking opportunities.
If you're looking to expand your AI knowledge, create connections and push the boundaries of what we can accomplish together, there's no better event than Microsoft Ignite.
Chicago, Illinois & online | Nov. 18-22, 2024
Find an event in your region
Visit our full global events catalog for a complete list of events, including some that could even be in your area. There, you can filter events by product, role or industry to find something specific to your needs or interests.
We hope to see you there!
It's very exciting to bring you opportunities that showcase the growth and innovation that's being done at Microsoft to help you do more with AI. Whether you're a customer, partner, IT professional, decision-maker or developer, if you're looking to achieve more, there's an event for you.
Artificial intelligence is making its mark on the art world, encroaching even on fustier areas such as the Old Masters trade. AI will, for instance, be a talking point during the Tefaf art and antiques fair in Maastricht next week: Carina Popovici, chief executive of the Swiss-based AI company Art Recognition, will reveal at the accompanying Art Business Conference how it recently attributed a painting to a Renaissance German artist.
Art Recognition, which was founded five years ago, has an AI system which, it says, offers a precise and objective authenticity evaluation of an artwork. On its website, the company says it has completed more than 500 authenticity evaluations, verifying contested works such as an 1889 self-portrait by Vincent van Gogh at the National Museum in Oslo.
Attributions matter in the art world: confirming the authorship of a work can increase the price if the artist is a star name, and can also boost scholarship in the field. The Adoration of the Kings, offered at auction in 2021 with an estimate of 10,500-16,000 as "Circle of Rembrandt", was later attributed to the Dutch master himself and sold for 10.9mn with fees at Sotheby's in December. And it is not only collectors and dealers who want queries settled: Popovici says Art Recognition is used by wealth management services and legal professionals as well. Christie's says it is watching developments in the area of AI with interest.
AI is excellent at so-called pattern recognition, says Jo Lawson-Tancred, author of the forthcoming publication AI and the Art Market, so it will have an easier time than humans learning distinguishing features if shown enough examples by a particular artist. It can usually flag any paintings that do not fit an artist's pattern, but AI does not excel at grasping context, so human reasoning is still integral, she adds.
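Art Recognition's system is proprietary, but the pattern recognition Lawson-Tancred describes is, at its core, supervised image classification: show a model labelled examples that are by the artist and examples that are not, and let it learn distinguishing features. Below is a minimal transfer-learning sketch in PyTorch, with folder names, class labels and hyperparameters invented purely for illustration.

```python
# Minimal transfer-learning sketch for "is this by artist X?" classification.
# Assumes a folder layout: paintings/by_artist/*.jpg and paintings/not_by_artist/*.jpg
# (the negative class matters: without it a model tends to call everything authentic).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("paintings", transform=tfm)   # two classes from folder names
loader = DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)             # re-head for 2 classes

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                                    # a token few epochs
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

# Prediction on a new image is then a softmax over the two classes of model(x).
```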
"I think that a lot depends on the data that is fed into the AI system," says Carlo Milano of Callisto Fine Arts, London. "For example, if a questionable catalogue raisonné is used to input data about an artist, then the conclusions can be questionable." The work of an art dealer involves extensive psychology, he explains, outlining that AI will provide more information and reduce the margin of error, but will never completely replace hands-on experience.
Conservators are concerned about whether AI can take into account factors such as a filthy layer of varnish, wear or damage. Art professionals are indeed largely sceptical about whether AI will ever supplement or replace the human eye in judging a work of art.
Art Recognition was caught up in a row last year over a painting known as the de Brécy Tondo, believed to be by the Renaissance master Raphael. In January 2023, an analysis by two UK universities (Bradford and Nottingham), using AI-assisted facial-recognition software, concluded that the faces in the work were identical to those in another Raphael painting, the Sistine Madonna (c1513), thus claiming that the de Brécy Tondo was by the master. However, Art Recognition also analysed the piece, by contrast stating that the de Brécy Tondo is not by Raphael, with an 85 per cent probability rating.
The Raphael dispute has since broadened, highlighting the strengths and weaknesses of AI authentication: different programs can produce different results. In December 2023, a team led by scientists from the University of Bradford presented further findings about the art of Raphael in a peer-reviewed paper published in the Heritage Science journal. Their program compared the details of authentic Raphael paintings from the database with the test image, examining in depth the colour palette, tonal values, hues and brushstroke patterns.
They concluded that the face of Joseph in the artist's work Madonna della Rosa, housed at the Prado in Madrid, may not be by the Renaissance artist. The rest of the work is by his hand, say the university specialists, who include Prof Hassan Ugail, director of Bradford's Centre for Visual Computing. Ugail says his most recent algorithm recognises authentic works by Raphael with 98 per cent accuracy.
But Popovici challenges Ugail's findings after investigating the training data made publicly available by the university research group. There are no negative examples of Madonna paintings (works not by Raphael but resembling his style) used by the group in its recent Raphael data set, she tells the Financial Times. The validity and scope of the data sets used in such AI programs (the material the software is working from) are crucial.
Like an expert learning from examples, says Popovici, an AI's ability to recognise patterns and make assessments depends heavily on how representative its training data is. Without exposure to both authentic and imitative examples of a theme, an AI is inclined to classify as authentic those works that resemble the images in its positive training set, she adds.
Ugail says that there are other ways to train an AI program than the one Popovici suggests, but strikes a conciliatory note, addressing wider concerns about the impact of AI. "This is not a case of AI taking people's jobs," he says, adding that the process of authenticating a work of art involves looking at many aspects, from its provenance to the pigments used. "Just like spectroscopy and dating techniques, AI can be one important tool in the main toolbox," he says.
The art historian Bendor Grosvenor says that AI can be useful for connoisseurs. "But the main drawback at the moment is the quality of the inputs given to the AI attribution programs currently being used. It is simply not possible to determine whether a painting is by Rubens by relying only on poor-quality images of not much more than half his oeuvre ... No human connoisseur would be trusted to do so; neither can a computer."
His conclusion? "Must do better" is the report card on AI in this field so far.
Look, I'm a Humane AI Pin doubter as much as the next person. And I still think the wearable, AI-powered assistant suffers from a case of this-thing-could-have-been-an-app. But I finally got to spend a little face-to-face time with the pin this morning, and you know what? It's a darn cool gadget. It's just buried under a layer of marketing so thick that it's hard to appreciate what it actually could be if Humane wasn't so self-serious.
If you spend time on Tech Threads or the like, you probably already know what the pin does: you clip it to your shirt, talk to it, and it uses generative AI to answer. It's a standalone device with its own SIM card, and there's no screen, just vibes. That, and a little laser that projects menus and text onto your palm so you can interact with mortal trifles like Wi-Fi settings and media playback controls.
The idea, reiterated as I watched a couple of Humane employees run through various demos, was that it's meant to help keep you connected while unplugging a little bit: less staring at screens and more living in the moment. AI helps fetch relevant bits from your calendar and email, and answers your questions when you're curious about the world around you.
It's all very lovely, but let's be real: this thing isn't a philosophy, it's a gadget. Gadgets are fun, helpful, and frustrating, and all of the above seems to apply to the Humane pin.
The AI Pin was genuinely impressive at times. There's a vision feature that will use the camera to scan the scene in front of you when prompted, analyze what's there, and describe it out loud. I stood in front of a Humane spokesperson as he tried out this feature, and frankly, the pin nailed it. It described Mobile World Congress as an indoor event or exhibition with people walking around. Easy enough.
But it also pointed out the name Qualcomm on the signage behind me, and obviously reading the badge around my neck, identified me as a person wearing a lanyard from "the The Verge". One too many "the"s, but pretty impressive when you consider I wasn't standing all that close to the pin and the lighting was dim.
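Humane has not said how the vision feature is built, but this kind of scene description can be approximated with an off-the-shelf image-captioning model. The sketch below uses the public BLIP captioning model via Hugging Face's transformers library; it is an illustration of the general capability rather than Humane's pipeline, and the image path and example output are made up.

```python
# Illustrative scene-description sketch using an off-the-shelf captioning model.
# This is not Humane's pipeline; it just shows the general "describe what the
# camera sees" capability with a public model.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("convention_hall.jpg").convert("RGB")  # any photo of the scene
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
# e.g. something like "a group of people walking around an indoor exhibition"
```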
The gesture navigation was also impressive: more fluid and responsive than I thought it would be. I wasn't allowed to put the pin on myself, and it's hard to get into the right spot to project the laser onto your own hand since it's really a single-user device. I tried. But a couple of Humane employees demoing the product, who obviously had lots of practice with it, navigated the projected menus quickly and easily just by tilting their hands and tapping two fingers together.
But the pin isn't immune to the thing that gadgets often do: frustrate the hell out of you. Most of the AI is off-device, so there's a solid few seconds of waiting for responses to your requests and questions, not helped by the convention center's spotty connectivity. It also shut down on one occasion after briefly flashing a notice that it had overheated and needed to cool off. The employee demoing the pin for me said that this doesn't happen very often, and that the continued use of the laser for demonstration purposes probably did it. I believe that, but still, this is a device meant to sit next to your chest and go with you into lots of different environments, presumably including warm ones. Not great!
The laser projection is clearer than I imagined it would be, but it's still essentially light projected onto the palm of your hand. Hands aren't uniformly flat, and they're hard to keep perfectly still. Text kind of dances around in front of you, and while it's not difficult to read, it is harder than reading, say, text on a smartphone.
It's also impossible to get a sense of what it's like living with the thing in a convention center hall. Could a cotton shirt support its weight? How easy is the laser to see outside in direct sun? Would people understand why the trust light is illuminated? Does the Pin occasionally make things up, the way some AI tends to? I have a lot more questions than answers, but I guess at least I have more than zero answers now that I've seen it with my own eyes.
My early impression of the Pin is that there's something there, but it's not the thing. And the trouble is, all of Humane's marketing has built it up to be the thing. It was first introduced at a TED talk, for pete's sake: that's like ground zero for people who take themselves too seriously. Humane's Sai Kambampati told me that the AI Pin isn't intended as a smartphone replacement. But it has its own data connection, its own monthly subscription fee, and its own smartphone-esque price of $699. And it's... not supposed to replace your phone?
Whatever's ahead of us in mobile computing, I have a feeling it's not exactly the AI Pin as I saw it demonstrated today. There's a lot more testing I want to do when the pin officially arrives in April. In the meantime, I didn't see the future exactly, but I did see a darn cool gadget. Just don't take it too seriously.
In a 2023 Nature survey of scientists, 30% of respondents had used generative AI tools to help write manuscripts.
ChatGPT continues to steal the spotlight, more than a year after its public debut.
The artificial intelligence (AI) chatbot was released as a free-to-use tool in November 2022 by tech company OpenAI in San Francisco, California. Two months later, ChatGPT had already been listed as an author on a handful of research papers.
Academic publishers scrambled to announce policies on the use of ChatGPT and other large language models (LLMs) in the writing process. By last October, 87 of 100 top scientific journals had provided guidance to authors on generative AI, which can create text, images and other content, researchers reported on 31 January in The BMJ1.
But that's not the only way in which ChatGPT and other LLMs have begun to change scientific writing. In academia's competitive environment, any tool that allows researchers to produce more publications is going to be a very attractive proposition, says digital-innovation researcher Savvas Papagiannidis at Newcastle University in Newcastle upon Tyne, UK.
Generative AI is continuing to improve, so publishers, grant-funding agencies and scientists must consider what constitutes ethical use of LLMs, and what over-reliance on these tools says about a research landscape that encourages hyper-productivity.
Before its public release, ChatGPT was not nearly as user-friendly as it is today, says computer scientist Debora Weber-Wulff at the HTW Berlin University of Applied Sciences. The interfaces for the older GPT models were something that only a computer scientist could love.
In the past, researchers typically needed specialized expertise to use advanced LLMs. "Now, GPT has democratized that to some degree," says Papagiannidis.
This democratization has catalysed the use of LLMs in research writing. In a 2023 Nature survey of more than 1,600 scientists, almost 30% said that they had used generative AI tools to help write manuscripts, and about 15% said they had used them to help write grant applications.
And LLMs have many other uses. They can help scientists to write code, brainstorm research ideas and conduct literature reviews. LLMs from other developers are improving as well, such as Google's Gemini and Claude 2 by Anthropic, an AI company in San Francisco. Researchers with the right skills can even develop their own personalized LLMs that are fine-tuned to their writing style and scientific field, says Thomas Lancaster, a computer scientist at Imperial College London.
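The kind of personalisation Lancaster mentions is typically lightweight adaptation of an open model on a researcher's own writing rather than training from scratch. Below is a compressed sketch using the Hugging Face transformers, datasets and peft libraries, where the base model, data file and hyperparameters are placeholders rather than a recommended recipe.

```python
# Compressed sketch of fine-tuning a small open LLM on one's own writing with LoRA.
# Model name, data file, and hyperparameters are placeholders, not a recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"                                   # stand-in for any small open model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = get_peft_model(AutoModelForCausalLM.from_pretrained(base),
                       LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16))

# my_papers.txt: plain text of the researcher's past manuscripts (placeholder path).
dataset = load_dataset("text", data_files="my_papers.txt")["train"]
dataset = dataset.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                      remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-style-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```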
About 55% of the respondents to the Nature survey felt that a major benefit of generative AI is its ability to edit and translate writing for researchers whose first language is not English. Similarly, in a poll by the European Research Council (ERC), which funds research in Europe, 75% of more than 1,000 ERC-grant recipients felt that generative AI will reduce language barriers in research by 2030, according to a report released in December2.
Of the ERC survey respondents, 85% thought that generative AI could take on repetitive or labour-intensive tasks, such as literature reviews. And 38% felt that generative AI will promote productivity in science, such as by helping researchers to write papers at a faster pace.
Although ChatGPT's output can be convincingly human-like, Weber-Wulff warns that LLMs can still make language mistakes that readers might notice. That's one of the reasons she advocates for researchers to acknowledge LLM use in their papers. Chatbots are also notorious for generating fabricated information, called hallucinations.
And there is a drawback to the productivity boost that LLMs might bring. Speeding up the paper-writing process could increase throughput at journals, potentially stretching editors and peer reviewers even thinner than they already are. "With this ever-increasing number of papers (because the numbers are going up every year) there just aren't enough people available to continue to do free peer review for publishers," Lancaster says. He points out that alongside researchers who openly use LLMs and acknowledge it, some quietly use the tools to churn out low-value research.
It's already difficult to sift through the sea of published papers to find meaningful research, Papagiannidis says. If ChatGPT and other LLMs increase output, this will prove even more challenging.
"We have to go back and look at what the reward system is in academia," Weber-Wulff says. The current "publish or perish" model rewards researchers for constantly pushing out papers. But many people argue that this needs to shift towards a system that prioritizes quality over quantity. For example, Weber-Wulff says, the German Research Foundation allows grant applicants to include only ten publications in a proposal. "You want to focus your work on getting really good, high-level papers," she says.
According to the study in The BMJ, 24 of the 100 largest publishers, collectively responsible for more than 28,000 journals, had by last October provided guidance on generative AI1. Journals with generative-AI policies tend to allow some use of ChatGPT and other LLMs, as long as they're properly acknowledged.
Springer Nature, for example, states that LLM use should be documented in the methods or another section of the manuscript, a guideline introduced in January 2023. Generative AI tools do not, however, satisfy criteria for authorship, because that carries with it accountability for the work, and AI tools cannot take such responsibility. (Nature's news team is editorially independent of its publisher, Springer Nature.)
Enforcing these rules is easier said than done, because undisclosed AI-generated text can be difficult for publishers and peer reviewers to spot. Some sleuths have caught it through subtle phrases and mistranslations. Unlike cases of plagiarism, in which there is clear source material, "you can't prove that anything was written by AI," Weber-Wulff says. Despite researchers racing to create LLM-detection tools, "we haven't seen one that we thought produced a compelling enough result to screen journal submissions," says Holden Thorp, editor-in-chief of the Science family of journals.
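Most detection attempts lean on statistical tells rather than proof. One common (and unreliable) heuristic scores how predictable a passage looks to a language model, on the theory that machine-written prose tends toward lower perplexity. The sketch below computes that score with GPT-2; it illustrates why such signals are weak evidence, not a working detector.

```python
# Crude perplexity heuristic sometimes used in AI-text detection. Low perplexity
# (very predictable text) is weak, inconclusive evidence of machine authorship,
# which is one reason such detectors are hard to trust for journal screening.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy over tokens
    return float(torch.exp(loss))

print(perplexity("The results demonstrate a statistically significant improvement."))
print(perplexity("Mitochondria, oddly enough, reminded her of her grandmother's kitchen."))
# A lower score only means "more predictable"; it proves nothing by itself.
```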
Although as of November the American Association for the Advancement of Science, which publishes Science, allows for some disclosed use of generative AI in the preparation of manuscripts, it still bans the use of LLMs during peer review, Thorp says. This is because he and others at Science want reviewers to devote their full attention to the manuscript being assessed, he adds. Similarly, Springer Nature's policy prohibits peer reviewers from uploading manuscripts into generative-AI tools.
Some grant-funding agencies, including the US National Institutes of Health and the Australian Research Council, forbid reviewers from using generative AI to help examine grant applications because of concerns about confidentiality (grant proposals are treated as confidential documents, and the data entered into public LLMs could be accessed by other people). But the ERC Scientific Council, which governs the ERC, released a statement in December recognizing that researchers use AI technologies, along with other forms of external help, to prepare grant proposals. It said that, in these cases, authors must still take full responsibility for their work.
"Many organizations come out now with very defensive statements requiring authors to acknowledge all use of generative AI," says ERC Scientific Council member Tom Henzinger, a computer scientist at the Institute of Science and Technology Austria in Klosterneuburg.
To him, ChatGPT seems no different from running text by a colleague for feedback. "Use every resource at your disposal," Henzinger says.
Regardless of the ever-changing rules around generative AI, researchers will continue to use it, Lancaster says. "There is no way of policing the use of technology like ChatGPT."
As artificial intelligence gains traction in office operations, some companies are giving employees a day to step back.
Working four days while getting paid for five is a dream for many employees. Yet the dramatic shifts in the pandemic-era workplace have turned this once unfathomable idea into a reality for some workers. And as more global data emerges, an increasing number of companies are courting the approach after positive trial-run results across countries including the UK, Iceland, Portugal and more.
Now, as pilots continue (in Germany, for instance, a trial of 45 companies has just begun), another factor has entered the mix. Artificial intelligence (AI) is gathering pace in the workplace, and some experts believe it could accelerate the adoption of the four-day workweek.
Data from London-based news-and-events resource Tech.co, collected in late 2023, lends credence to this idea. For their 2024 Impact of Technology on the Workplace report, the company surveyed more than 1,000 US business leaders. The researchers found 29% of organisations with four-day workweeks use AI extensively in their firms' operations, implementing generative AI tools such as ChatGPT as well as other programmes to streamline operations. In comparison, only 8% of five-day working week organisations use AI to this extent. And 93% of businesses using AI are open to a four-day work week, whereas among those that don't, fewer than half are open to working shorter weeks.
At London-based digital design agency Driftime, adopting AI technology has been crucial to enable the business to operate a flexible four-day work week. "By handing over simple tasks to AI tools, we gain invaluable time previously lost to slow aspects of the process," says co-founder Abb-d Taiyo. "With tools like Modyfi, the graphics are all live and modifiable, making it so much easier and quicker for our designers to create concepts and ideas."
Taiyo believes it makes sense for both his employees and his bottom line to work the condensed week. "Instead of a dip in the quantity of work created over just four days, we've seen a remarkably high quality of work matched by a high staff satisfaction return. The health and happiness of our team is in direct correlation to the high standard of work produced," he says.
Shayne Simpson, group managing director of UK-based TechNET IT Recruitment, also believes AI has been fundamental to the success of the company's four-day work week policy. The firm has found AI tools save each of their recruitment consultants 21 hours per week, primarily by automating previously manual tasks like data input, confirmation emails, resume screening and candidate outreach. This has reduced the time to fill permanent roles at the company by an average of 10 days. "This timesaving allows our team to achieve their weekly goals earlier in the week and the flexibility liberates our consultants from being tethered to their desks, enabling them to enjoy a well-deserved Friday off," says Simpson.
Not only has the company's abridged workweek boosted productivity and morale, Simpson says it's also been key to attracting talent to work within the company itself. "Seasoned recruitment professionals are enticed by our streamlined processes while entry-level talent is eager to embrace new tools." It's lifted the entire business, he adds.
While AI tools are certainly paving the way for a four-day work week within some industries, the technology can't usher in the change alone. Organisational culture within a business is also fundamental, says Na Fu, a professor in human resource management at Trinity Business School, Ireland. "An openness to innovative work structures, an experimental mindset and, importantly, a culture grounded in high levels of trust are all important for the four-day work week to be successfully adopted," she says.
As the digital transformation with AI progresses, employees themselves also must be willing to level up, she adds: "Rather than becoming mere caretakers or servants of machines, human workers need to develop new skills that can leverage, complement and lead AI, achieving the enhanced outcomes."
Some industries will benefit from AI more than others, however, notably those able to use generative AI tools for tasks such as software development, content creation, marketing and legal services, says Fu. Plus, artificial intelligence development still has a way to go if it is to substantially reduce human working hours across the board.
What may drive the shift to a four-day workweek in an AI-powered business landscape may not ultimately be up to the robots, however. Executive buy-in is required, and whether leaders will embrace the unconventional concept will vary depending on a firm's overarching purpose and values, says Fu. Instead of letting AI supplement the work of humans, for instance, some businesses could use it to automate certain tasks while piling other work on employees to fill newly open hours.
Still, despite some reservations, an increasing number of business leaders, including those from some of the world's highest-earning companies, see a technology-driven shortened workweek as an inevitable future. In October 2023, JPMorgan Chase & Co CEO Jamie Dimon told Bloomberg TV: "Your children are going to live to 100, and they'll probably be working three-and-a-half days a week." Employees will have to wait and see.