Category Archives: Ai

NVIDIA Unveils 6G Research Cloud Platform to Advance Wireless Communications With AI – NVIDIA Blog

Ansys, Keysight, Nokia, Samsung Among First to Use NVIDIA Aerial Omniverse Digital Twin, Aerial CUDA-Accelerated RAN and Sionna Neural Radio Framework to Help Realize the Future of Telecommunications

GTCNVIDIA today announced a 6G research platform that empowers researchers with a novel approach to develop the next phase of wireless technology.

The NVIDIA 6G Research Cloud platform is open, flexible and interconnected, offering researchers a comprehensive suite to advance AI for radio access network (RAN) technology. The platform allows organizations to accelerate the development of 6G technologies that will connect trillions of devices with the cloud infrastructures, laying the foundation for a hyper-intelligent world supported by autonomous vehicles, smart spaces and a wide range of extended reality and immersive education experiences and collaborative robots.

Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank Corp. and Viavi are among its first adopters and ecosystem partners.

The massive increase in connected devices and host of new applications in 6G will require a vast leap in wireless spectral efficiency in radio communications, said Ronnie Vasishta, senior vice president of telecom at NVIDIA. Key to achieving this will be the use of AI, a software-defined, full-RAN reference stack and next-generation digital twin technology.

The NVIDIA 6G Research Cloud platform consists of three foundational elements:

Industry-leading researchers can use all elements of the 6G development research cloud platform to advance their work.

The future convergence of 6G and AI holds the promise of a transformative technological landscape, said Charlie Zhang, senior vice president of Samsung Research America. This will bring seamless connectivity and intelligent systems that will redefine our interactions with the digital world, ushering in an era of unparalleled innovation and connectivity.

Testing and simulation will play an essential role in developing the next generation of wireless technology. Leading providers in this space are working with NVIDIA to contribute to the new requirements of AI with 6G.

Ansys is committed to advancing the mission of the 6G Research Cloud by seamlessly integrating the cutting-edge Ansys Perceive EM solver into theOmniverse ecosystem, said Shawn Carpenter, program director of 5G/6G and space at Ansys. Perceive EM revolutionizes the creation of digital twins for 6G systems. Undoubtedly, the convergence of NVIDIA and Ansys technologies will pave the way toward AI-enabled 6G communication systems.

Access to wireless-specific design tools is limited yet needed to build robust AI, said Kailash Narayanan, president and general manager of Keysight Communications Solutions Group. Keysight is pleased to bring its wireless network expertise to enable the next generation of innovation in 6G communications networks.

The NVIDIA 6G Research Cloud platform combines these powerful foundational tools to let telcos unlock the full potential of 6G and pave the way for the future of wireless technology. To access the platform, researchers can sign up for theNVIDIA 6G Developer Program.

Here is the original post:

NVIDIA Unveils 6G Research Cloud Platform to Advance Wireless Communications With AI - NVIDIA Blog

Get Ahead in the AI Race: 3 Stocks to Multiply Your Money – InvestorPlace

Nearly every company under the sun is touting its AI product plans and integrations on earnings calls and in interviews. Investor interest in artificial intelligence technology remains red-hot, and this is a trend thats going to continue. The question is, which AI stocks will ultimately win the race, or at least be in the race long-term?

There are plenty of generative AI stocks out there, or companies seeing direct impacts of artificial intelligence that have seen their valuations balloon. Im interested in companies that may be more under the radar from the AI lens but could benefit disproportionately relative to their peers.

Here are three such stocks I think investors should focus on right now.

Source: Sundry Photography / Shutterstock.com

With a wide array of security solutions and options, Santa Clara-based cybersecurity leader Palo Alto Networks (NASDAQ:PANW) makes a great AI stock to buy and hold long-term. The company focuses on a range of products, offering everything from cloud security to firewalls. Currently, it has a whopping $93 billion market cap and has been increasingly integrating AI technology into its core offerings.

Significant achievements have been seen in the companys recent quarterly results. Palo Alto saw robust growth in its annual recurring revenue (ARR) for its Secure Access Service Edge (SASE) sector and increased multi-module adoption within Prisma Cloud. In network security, PANW sustained a fifth consecutive quarter of 50% ARR growth in SASE, with over 30% of new SASE customers being new to the company.

PANWs average price target as it stepped in 2024 anticipates a 16% upside at a price target of around $335 per share. Palo Alto CEO Nikesh Arora, one of the key drivers of the companys success, noted excellent financial performance via the companys strategic and practical plans. Palo Altos ability to achieve above-grade revenue growth indicates potential for long-term value accretion to investors. This isnt a stock without short-term challenges, but its subscription model and AI integrations could drive outsized growth for years to come. This stock is on my buy list now.

Source: Sundry Photography / Shutterstock.com

A recent collaboration with AI semiconductor king Nvidia (NASDAQ:NVDA) has propelled ServiceNow (NYSE:NOW) higher in recent days. The company aims to achieve even greater efficiency in 2024, with this partnership aimed at focusing on optimizing large language model deployments. Utilizing Nvidias NIM inference microservices, ServiceNow aims for efficient and scalable generative AI (GenAI) enterprise applications. The integration of NIM into ServiceNows Now LLMs, including Now Assist, is set to broaden GenAI usage across diverse customer cases.

ServiceNow also claims it can leverage AI and technology to power Saudi Arabias Vision 2030 strategic growth plans. With its strong financials, record, and recent AI innovations, ServiceNow is on track to offer more efficiency and streamlined processes in its products.

Notable achievements include implementing over 180 automated methods for the Ministry of Justice and creating an integrated employee portal for the Ministry of Human Resources and Social Development.

The companys extensive AI integration across its platform sets it apart, with offerings spanning IT, HR and customer service. That strategic approach positions it as a digital transformation leader. Despite a 73% surge in the past year, analysts see a 10% near-term upside with a $851.67 target, hinting at potential long-term growth.

Source: JHVEPhoto / Shutterstock.com

In recent benchmark tests by Advanced Micro Devices (NASDAQ:AMD), the Ryzen 7 7840U APU outperformed the Intel Core Ultra 7 155H in AI tasks. Despite similar configurations, AMDs chip showed 14% and 17% faster performance in Llama 2 and Mistral AI, respectively.

Mizuho analysts raised AMDs stock price target to $235 from $200, maintaining a Buy rating, foreseeing growth in the AI chip market and multiple expansions. AMDs introduction of a new AI chip tailored for the Chinese market, complying with U.S. trade restrictions, signals potential earnings and stock price boosts if approved for sale. AMDs stock has surged approximately 30% in 2024, and plenty more upside could be on the horizon if these tailwinds persist.

Of course, like the other stocks on this list, AMDs relatively high multiple could provide some headwind to its appreciation potential over the medium term. Currently, I view this stock as one worth buying for the long term on dips. I think AMD has the potential to take some market share from Nvidia over time, as the market will grow at a rate that will push the production abilities of Nvidia and its peers. If AMD can continue to innovate and push out higher-performance chips over time, theres market share to be had. And its going to be a lucrative market share, for a very long time.

On the date of publication, Chris MacDonald did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chris MacDonalds love for investing led him to pursue an MBA in Finance and take on a number of management roles in corporate finance and venture capital over the past 15 years. His experience as a financial analyst in the past, coupled with his fervor for finding undervalued growth opportunities, contribute to his conservative, long-term investing perspective.

See the rest here:

Get Ahead in the AI Race: 3 Stocks to Multiply Your Money - InvestorPlace

Pricing and Packaging Your B2B or Prosumer Generative AI Feature – Andreessen Horowitz

Effectively monetizing any new technology is a race to capture market share while still giving yourself room to grow your business. But the stakes are much higher with generative AI: though it promises to deliver unprecedented value to businesses, it can also be very expensive to serve to each incremental customer. At the growth stages in particular, founders need to be mindful of their unit economics and margins. We often hear these founders ask: how can I capture the value created by genAI? Should I eat the cost of my genAI features, or pass it along to customers? Will my customers actually be willing to pay for genAI and if so, how much?

Were in the very early days of genAI and until adoption curves and costs stabilize, there wont be any tried-and-true pricing or packaging frameworks. That said, weve outlined how were thinking about pricing and packaging in a part of the market thats debating how to monetize their new genAI featureB2B SaaS and prosumer companiesand how were seeing other companies approach the same question so you can better understand where your strategy fits in today.

As with any pricing and packaging exercise, the best place to start is to understand:

Then, square those findings with how core you think genAI will be to your overall product offering.

Some of this will be more art than science, since founders are still figuring out what value genAI delivers to customers and how much it costs. That said, your early usage and customer personas can give you insight into both of these vectors.

For beta and early usage, you want to understand which customers are using your product, how often, how much it costs you to serve them, and how much theyre willing to pay for a genAI feature. For instance, its important to dig into the following:

For customer personas, you want to figure out which personas are willing to pay and which arent. Do all your customers realize value from the genAI feature, or just some?

A good way to learn more about your customer personas and segments is through interviews, surveys, and sales team data.

One note here: be wary of AI touristsor customers who sign up for your product because they either have a company-wide mandate to experiment with genAI (B2B) or because theyre excited to try a new genAI feature themselves (prosumer). These users can be difficult to retain, even in cases where theyre willing to pay to try your product. (That said, we are seeing more enterprise companies reallocate genAI spend from innovation lines to standard software lines, which indicates that genAI is evolving into a key part of many enterprises.)

This is where your vision as a founder comes in. Maybe only a subset of your customers are excited about generative AI right now, but you believe that generative AI will eventually reshape the customer experience of your product and present a much richer value prop. Or maybe youre still determining how generative AI will benefit your customers, and right now its a nice-to-have for certain users. This part of the exercise is qualitative and vision-oriented, and its up to you to decide how central genAI will be to your product roadmap and value prop going forward.

Once you have both a good sense of the value and cost of your genAI feature, and a working hypothesis for how generative AI figures into your current product offering and future roadmap, you can start thinking about packaging and pricing concretely.

We generally see B2B and prosumer genAI features fall into 3 buckets: as a core offering, as part of an upgrade tier, or as an add-on. Many of the best practices for packaging non-genAI prosumer and B2B SaaS still apply in the genAI era at this point, so it can also be helpful to reference how companies have packaged their new non-genAI features in the past.

If all your customers are excited and willing to pay for your genAI feature, early usage data shows that it significantly increases adoption and conversion, and genAI is mission-critical to your value prop, it can be smart to include it as part of your core offering. In this case, you might not need to directly monetize your genAI feature because it has significant downstream effects on your TAM and conversion. The example we cited earlier applies here: maybe you could serve 10 customers with your existing product and now you can serve 100 thanks to your genAI feature.

Since were in the land grab phase of generative AI adoption, including your generative AI feature in your core offering can also make your product more appealing than offerings from incumbents and other startups. Because theres demand for this feature across all segments, however, we imagine that some companies will eventually increase the total price of their core package to better account for the cost of serving the feature.

Packaging your genAI feature as an upgrade works when its a nice-to-have feature that can act as an upsell lever for the majority of your customers. The feature doesnt radically change the way customers use your product, but it can help most of them unlock more value. Some genAI companies might offer more data sets in an upgrade feature, for instance. Or take Mailchimp as an example. Most of their users might not need a genAI feature in the core offeringthey likely just want to be able to build and serve an email listbut genAI-generated email copy, segmentation, and analytics could enhance most users experience of the product. Weve also seen some companies use their genAI feature as an upsell lever to increase conversion to a higher pricing tier or cover part of the cost of serving genAI.

Packaging a genAI feature as an add-on is wise if your genAI feature delivers significant value to a small set of customers willing to pay a premium and you want to directly manage your margins when serving the feature. Packaging as an add-on can:

We think of the add-on as the power user packagecompanies can charge a premium for their products because a set of power users will disproportionately benefit from that feature. That said, we have seen some other companies who package their add-on genAI features sell the add-on to the entire company when doing enterprise deals to prevent users from sharing logins, even if only a few individual users want it. This might cover costs in the short-term, but be aware that buyers might not always want to buy software packages with this mandate.

One note: weve seen some companies include basic genAI features in a core or lower-paid tier and gate better-performing genAI features or more genAI usage in a higher tier. In these cases, the logic of segmenting for value remains the same. If a genAI feature can expand the number of users you can serve, for instance, consider offering it as a core feature. If your other, more high-powered genAI feature enables your power users, you could gate that feature and charge more for it.

Because most B2B SaaS and prosumer companies sell software-to-human products, it makes sense to monetize through subscriptions instead of usageno customer wants to estimate how much genAI theyll use. But subscription pricing in the genAI era can exaggerate the gap between how much your customers use your product and how much revenue you actually bring in. In fact, selling seats for your genAI feature can actually put you in the position of hoping your customers dont use your products. Power users pay the same flat fee as customers who barely use your product, which means your most important customers can eat into your margins.

So how can you better align your incentives with your customers? Because were still in the early phase of genAI adoption, it remains an open question.

When we take a look at the current landscape, however, we see core and upgrade packaging priced by seat by default. GenAI is either part of a base product or an added feature in an existing subscription tier. That said, some companies are experimenting with hybrid subscriptionconsumption models when pricing their add-on features in order to better cover costs and monetize power users. These hybrid approaches include credit drawdown approaches (like Box) or flat-rate seats with credits for incremental levels of consumption (like Adobe Creative Cloud). (When adding a usage-based element to an existing subscription motion, consider how to provide predictability to customers and handle overconsumption.)

We examined 31 companies with new genAI offerings to see how theyre pricing and packaging their new genAI feature. Heres what we see in the data.

As genAI starts to offer customers significant productivity gains, some companies are thinking ahead to implementing outcome-based pricing, in which vendors charge companies for the outcomes of their software instead of for the software itself.

Outcome-based pricing is harder to get right today, since founders are still figuring out how to quantify the value genAI provides their customers. But if genAI features make companies significantly more productive down the road, it wont make economic sense to price your offerings on a contracting user base. So we see outcome-based pricing having a potentially significant impact on companies selling a software-to-human product, like a workflow or human resources tool.

The advantage of this pricing model is getting tightly aligned with your customers incentives, but making sure you and your customers agree on what defines an outcome or resolution can be difficult and youd need to trust that genAI could reliably resolve your customers questions. That said, were already seeing some companies experiment with this, like Intercoms Fin Chat product, and were excited to see how this evolves.

The cost of inference is stabilizing, open source is booming, and different model providers are constantly driving down the price of their models in a bid to attract more users. Given this, companies should be ready to adapt their pricing models as model providers continue to lower the cost of APIs.

For now, its probably wise to price at a level thats at least somewhat economical for your business in the short term while expecting the cost to service to decline over the long term (as it already has!) and drive future margin expansion. That said, if youre banking on your genAI feature to become a core part of your business and youre not seeing results soon, dont hesitate to revisit your pricing and packaging structure.

Generative AI stands to deliver an unprecedented amount of value to software end users, and the goal for growth-stage founders now is to figure out how to best capture that value while maintaining stable unit economics and solid margins. With no one-size-fits-all solution, successful founders will have to take past and real-time learnings to build a clear, nimble pricing and packaging structure that communicates the value of their product roadmap.

Though best practices are still emerging, we hope these frameworks help you better navigate the pricing and packaging process for your new genAI feature.

More here:

Pricing and Packaging Your B2B or Prosumer Generative AI Feature - Andreessen Horowitz

Mustafa Suleyman, DeepMind and Inflection Co-founder, joins Microsoft to lead Copilot – The Official Microsoft Blog – Microsoft

Satya Nadella, Chief Executive Officer, shared the below communication today with Microsoft employees.

I want to share an exciting and important organizational update today. We are in Year 2 of the AI platform shift and must ensure we have the capability and capacity to boldly innovate.

There is no franchise value in our industry and the work and product innovation we drive at this moment will define the next decade and beyond. Let us use this opportunity to build world-class AI products, like Copilot, that are loved by end-users! This is about science, engineering, product, and design coming together and embracing a learning mindset to push our innovation culture and product building process forward in fundamental ways.

In that context, Im very excited to announce that Mustafa Suleyman and Karn Simonyan are joining Microsoft to form a new organization called Microsoft AI, focused on advancing Copilot and our other consumer AI products and research.

Mustafa will be EVP and CEO, Microsoft AI, and joins the senior leadership team (SLT), reporting to me. Karn is joining this group as Chief Scientist, reporting to Mustafa. Ive known Mustafa for several years and have greatly admired him as a founder of both DeepMind and Inflection, and as a visionary, product maker, and builder of pioneering teams that go after bold missions.

Karn, a Co-founder and Chief Scientist of Inflection, is a renowned AI researcher and thought leader, who has led the development of some of the biggest AI breakthroughs over the past decade including AlphaZero.

Several members of the Inflection team have chosen to join Mustafa and Karn at Microsoft. They include some of the most accomplished AI engineers, researchers, and builders in the world. They have designed, led, launched, and co-authored many of the most important contributions in advancing AI over the last five years. I am excited for them to contribute their knowledge, talent, and expertise to our consumer AI research and product making.

At our core, we have always been a platform and partner-led company, and well continue to bring that sensibility to all we do. Our AI innovation continues to build on our most strategic and important partnership with OpenAI. We will continue to build AI infrastructure inclusive of custom systems and silicon work in support of OpenAIs foundation model roadmap, and also innovate and build products on top of their foundation models. And todays announcement further reinforces our partnership construct and principles.

As part of this transition, Mikhail Parakhin and his entire team, including Copilot, Bing, and Edge; and Misha Bilenko and the GenAI team will move to report to Mustafa. These teams are at the vanguard of innovation at Microsoft, bringing a new entrant energy and ethos, to a changing consumer product landscape driven by the AI platform shift. These organizational changes will help us double down on this innovation.

Kevin Scott continues as CTO and EVP of AI, responsible for all-up AI strategy, including all system architecture decisions, partnerships, and cross-company orchestration. Kevin was the first person I leaned on to help us manage our transformation to an AI-first company and Ill continue to lean on him to ensure that our AI strategy and initiatives are coherent across the breadth of Microsoft.

Rajesh Jha continues as EVP of Experiences & Devices and Im grateful for his leadership as he continues to build out Copilot for Microsoft 365, partnering closely with Mustafa and team.

There are no other changes to the senior leadership team or other organizations.

We have been operating with speed and intensity and this infusion of new talent will enable us to accelerate our pace yet again.

We have a real shot to build technology that was once thought impossible and that lives up to our mission to ensure the benefits of AI reach every person and organization on the planet, safely and responsibly. Im looking forward to doing so with you.

Satya

Read the original here:

Mustafa Suleyman, DeepMind and Inflection Co-founder, joins Microsoft to lead Copilot - The Official Microsoft Blog - Microsoft

NVIDIA Launches Generative AI Microservices for Developers to Create and Deploy Generative AI Copilots Across … – NVIDIA Blog

GTCNVIDIA today launched dozens of enterprise-grade generative AI microservices that businesses can use to create and deploy custom applications on their own platforms while retaining full ownership and control of their intellectual property.

Built on top of theNVIDIA CUDAplatform, the catalog of cloud-native microservices includesNVIDIA NIMmicroservices for optimized inference on more than two dozen popular AI models from NVIDIA and its partner ecosystem. In addition, NVIDIA accelerated software development kits, libraries and tools can now be accessed asNVIDIA CUDA-X microservices for retrieval-augmented generation (RAG), guardrails, data processing, HPC and more. NVIDIA also separately announced over two dozenhealthcare NIM and CUDA-X microservices.

The curated selection of microservices adds a new layer to NVIDIAs full-stack computing platform. This layer connects the AI ecosystem of model developers, platform providers and enterprises with a standardized path to run custom AI models optimized for NVIDIAs CUDA installed base of hundreds of millions of GPUs across clouds, data centers, workstations and PCs.

Among the first to access the new NVIDIA generative AI microservices available inNVIDIA AI Enterprise 5.0are leading application, data and cybersecurity platform providers includingAdobe,Cadence,CrowdStrike, Getty Images,SAP,ServiceNow, and Shutterstock.

Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots, said Jensen Huang, founder and CEO of NVIDIA. Created with our partner ecosystem, these containerized AI microservices are the building blocks for enterprises in every industry to become AI companies.

NIM Inference Microservices Speed Deployments From Weeks to Minutes NIM microservices provide pre-built containers powered by NVIDIA inference software including Triton Inference Server and TensorRT-LLM which enable developers to reduce deployment times from weeks to minutes.

They provide industry-standard APIs for domains such as language, speech and drug discovery to enable developers to quickly build AI applications using their proprietary data hosted securely in their own infrastructure. These applications can scale on demand, providing flexibility and performance for running generative AI in production on NVIDIA-accelerated computing platforms.

NIM microservices provide the fastest and highest-performing production AI container for deploying models from NVIDIA,A121, Adept,Cohere, Getty Images, and Shutterstock as well as open models from Google,Hugging Face, Meta, Microsoft, Mistral AI and Stability AI.

ServiceNowtoday announced that it is using NIM to develop and deploy new domain-specific copilots and other generative AI applications faster and more cost effectively.

Customers will be able to access NIM microservices fromAmazon SageMaker,Google Kubernetes EngineandMicrosoft Azure AI, and integrate with popular AI frameworks likeDeepset,LangChainandLlamaIndex.

CUDA-X Microservices for RAG, Data Processing, Guardrails, HPC CUDA-X microservicesprovide end-to-end building blocks for data preparation, customization and training to speed production AI development across industries.

To accelerate AI adoption, enterprises may use CUDA-X microservices includingNVIDIA Rivafor customizable speech and translation AI,NVIDIA cuOpt for routing optimization, as well asNVIDIA Earth-2for high resolution climate and weather simulations.

NeMo Retriever microservices let developers link their AI applications to their business data including text, images and visualizations such as bar graphs, line plots and pie charts to generate highly accurate, contextually relevant responses. With these RAG capabilities, enterprises can offer more data to copilots, chatbots and generative AI productivity tools to elevate accuracy and insight.

AdditionalNVIDIA NeMo microservicesare coming soon for custom model development. These include NVIDIA NeMo Curator for building clean datasets for training and retrieval, NVIDIA NeMo Customizer for fine-tuning LLMs with domain-specific data, NVIDIA NeMo Evaluator for analyzing AI model performance, as well asNVIDIA NeMo Guardrailsfor LLMs.

Ecosystem Supercharges Enterprise Platforms With Generative AI Microservices In addition to leading application providers, data, infrastructure and compute platform providers across the NVIDIA ecosystem are working with NVIDIA microservices to bring generative AI to enterprises.

Top data platform providers includingBox, Cloudera, Cohesity,Datastax, Dropbox andNetAppare working with NVIDIA microservices to help customers optimize their RAG pipelines and integrate their proprietary data into generative AI applications.Snowflakeleverages NeMo Retriever to harness enterprise data for building AI applications.

Enterprises can deploy NVIDIA microservices included with NVIDIA AI Enterprise 5.0 across the infrastructure of their choice, such as leading cloudsAmazon Web Services (AWS),Google Cloud,AzureandOracle Cloud Infrastructure.

NVIDIA microservices are also supported on over 400 NVIDIA-Certified Systems, including servers and workstations from Cisco,Dell Technologies,Hewlett Packard Enterprise (HPE), HP,Lenovoand Supermicro. Separately today, HPE announced availability of HPEs enterprise computing solution for generative AI, with planned integration of NIM andNVIDIA AI Foundation modelsinto HPEs AI software.

NVIDIA AI Enterprise microservices are coming to infrastructure software platforms includingVMware Private AI Foundationwith NVIDIA.Red HatOpenShift supports NVIDIA NIM microservices to help enterprises more easily integrate generative AI capabilities into their applications with optimized capabilities for security, compliance and controls.Canonicalis adding Charmed Kubernetes support for NVIDIA microservices through NVIDIA AI Enterprise.

NVIDIAs ecosystem of hundreds of AI and MLOps partners, including Abridge, Anyscale, Dataiku,DataRobot,Glean, H2O.ai,Securiti AI,Scale AI,OctoAIandWeights & Biases, are adding support for NVIDIA microservices through NVIDIA AI Enterprise.

Apache Lucene,Datastax, Faiss, Kinetica, Milvus, Redis, and Weaviate are among the vector search providers working with NVIDIA NeMo Retriever microservices to power responsive RAG capabilities for enterprises.

Availability Developers can experiment with NVIDIA microservices atai.nvidia.comat no charge. Enterprises can deploy production-grade NIM microservices with NVIDIA AI Enterprise 5.0 running on NVIDIA-Certified Systems and leading cloud platforms.

For more information, watch the replay ofHuangs GTC keynoteand visit the NVIDIA booth at GTC, held at the San Jose Convention Center through March 21.

View post:

NVIDIA Launches Generative AI Microservices for Developers to Create and Deploy Generative AI Copilots Across ... - NVIDIA Blog

Android Auto AI message summaries are now available here’s how it works – 9to5Google

With the launch of Android Auto 11.4, Google is making its AI message summaries available to all users. Heres how the feature works.

AI messages summaries were first announced by Google earlier this year as a way to quickly understand long messages without having Google Assistant read the whole thing aloud. The feature popped up in Android Autos settings quickly after, but the feature wasnt actually live.

But, now, it seems to finally be widely available.

When starting up Android Auto for the first time after installing v11.4, Google will send a notification to your phone explaining that message summaries are now available. You dont have to take any action with this, but it does also offer a shortcut to settings to manage the feature.

As was detailed recently, AI message summaries only work on longer messages, with 40 words being the barrier. For shorter messages, Google Assistant will still read the contents aloud in full.

On the first time you trigger an AI message summary, Google will alert you that it is generating a summary and that, with that in mind, the contents may be slightly incorrect. While that message is being read, a silent notification on your phone shows up to indicate that the summary is being generated. When read aloud, our test message of over 100 words was summarized down to about 15 words. Much of the detail was purged in the process, but the overall meaning was preserved. Your results, obviously, may vary depending on the scenario.

When an AI summary is being read, theres virtually no difference in the on-screen reply UI, which now takes up the entire display following a recent redesign.

After the summary is read, a notification appears on Android Auto asking for feedback on the summary.

If you do not want AI summaries, the feature can be easily turned off through Android Auto settings either on the cars display or on your phone.

The feature is still referred to as AI message summaries both on the phone and in Android Autos on-car settings, but theres also a new toggle for Notifications with Assistant, but its not super clear at this time exactly what that does.

Follow Ben:Twitter/X,Threads, andInstagram

FTC: We use income earning auto affiliate links. More.

Continued here:

Android Auto AI message summaries are now available here's how it works - 9to5Google

These are the top AI programming languages – Fortune

Weve all heard some of the conversations around AI. While there are many risks, the opportunities for global development and innovation are endlessand likely unstoppable.

In fact, PwC predicts that by 2030, AI alone will contribute $15.7 trillion to the global economy.

ADVERTISEMENT

And with household names like ChatGPT only making up a fraction of the AI ecosystem, the career opportunities in the space also seem endless. AI and machine learning specialist roles are predicted to be the fastest-growing jobs in the world, according to the World Economic Forums 2023 Future of Jobs Report.

Even beyond namesake AI experts, the technology is being utilized more and more across the text world. In fact, 70% of professional developers either use or are planning to use AI tools in their workflows, according to Stack Overflows 2023 Developer Survey.

So, for those especially outside the world of tech, how does AI even work and get created? Programming is at the core.

By and large, Python is the programming language most relevant when it comes to AIin part thanks to the languages dynamism and ease.

Python dominates the landscape because of its simplicity, readability, and extensive library ecosystem, especially for generative AI projects, says Ratinder Paul Singh Ahuja, CTO and VP at Pure Storage.

Rakesh Anigundi, Ryzen AI product lead at AMD, goes even further and calls Python a table stakes languagemeaning it is a baseline skill all those working in AI need to know.

LinkedIn even ranks Python as the second-most in-demand hard skills for engineering in the U.S., second only to engineering itself.

In particular, skills in key programming languages commonly used in the development of AIPython, Java, and SQLrank among the top five most sought-after skills on the technical side in the U.S., writes LinkedIns head of data and AI, Ya Xu.

The programming languages that are most relevant to the world of AI today may not be the most important tomorrow. And, even more crucially, they may not be most utilized by your company.

Regardless, having foundation skills in a language like Python can only help you in the long run. Enrolling in a Python bootcamp or taking a free online Python course is one of many ways to learn the skills to succeed. Students may also be exposed to Python in an undergraduate or graduate level coursework in data science or computer science.

Anigundi also notes it is important for students to be able to know how to efficiently set up programming work environments and know what packages are needed to work on a particular AI model. Being an expert at mathematics like statistics and regressions is also useful.

We have been through these tech trends alwaysits just that the pace at which some of these changes are happening is mind boggling to me, at least in my lifetime, he says. But that still doesnt take away some of the institutional knowledge that these different educational institutes impart in you.

It can be worth considering specializing in a sub-field aligning with personal interests like natural language processing, computer vision, or robotics, Singh Ahuja says. Prioritizing ethics and understanding the true implications of AI are also critical.

But since AI technology is changing so rapidly, soft skills can be argued to be even more important than technical capabilities. Some of the critical skills Singh Ahuja identifies include:

Above all, demonstrating your passion and desire to learn through real-world experience can help you distinguish yourself among the competitive field.

If youre in a very early part of your careerpicking a project, doing a project demonstrating value, sharing it, writing blocks, thats how you create an impact, Anigundi says.

Read more:

These are the top AI programming languages - Fortune

Here Come the AI Worms – WIRED

As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI wormswhich can spread from one system to another, potentially stealing data or deploying malware in the process. It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before, says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messagesbreaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms havent been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed promptstext instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called adversarial self-replicating prompt. This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the systemby using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

Go here to see the original:

Here Come the AI Worms - WIRED

How businesses are actually using generative AI – The Economist

Listen to this story. Enjoy more audio and podcasts on iOS or Android.

Your browser does not support the

IT HAS BEEN nearly a year since OpenAI released GPT-4, its most sophisticated artificial-intelligence model and the brain-of-sorts behind ChatGPT, its groundbreaking robot conversationalist. In that time the market capitalisation of Americas technology industry, broadly defined, has risen by half, creating $6trn in shareholder value. For some tech firms, growing revenue is starting to match sky-high share prices. On February 21st Nvidia, which designs chips used to train and run models like GPT-4, reported bumper fourth-quarter results, sending its market value towards $2trn. AI mania has also lifted the share prices of other tech giants, including Alphabet (Googles corporate parent), Amazon and Microsoft, which are spending big on developing the technology.

At the same time, big techs sales of AI software remain small. In the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsofts cloud-computing division, and related services. Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft.For the AI stockmarket boom to endure, these firms will at some point need to make serious money from selling their services to clients. Businesses across the world, from banks and consultancies to film studios, have to start using ChatGPT-like tools on a large scale. When it comes to real-world adoption of such generative AI, companies have trodden gingerly. Yet even these baby steps hint at the changing nature of white-collar work.

Previous technological breakthroughs have revolutionised what people do in offices. The spread of the typewriter put some workers out of a job: With the aid of this little machine an operator can accomplish more correspondence in a day than half a dozen clerks can with the pen, and do better work, said an observer in 1888. The rise of the computer about a century later eliminated some low-level administrative tasks even as it made highly skilled employees more productive. According to one paper, the computer explains over half the shift in demand for labour towards college-educated workers from the 1970s to the 1990s. More recently the rise of working from home, prompted by the covid-19 pandemic and enabled by video-conferencing, has changed the daily rhythms of white-collar types.

Could generative AI prompt similarly profound changes? A lesson of previous technological breakthroughs is that, economywide, they take ages to pay off. The average worker at the average firm needs time to get used to new ways of working. The productivity gains from the personal computer did not come until at least a decade after it became widely available. So far there is no evidence of an AI-induced productivity surge in the economy at large. According to a recent survey from the Boston Consulting Group (BCG), a majority of executives said it will take at least two years to move beyond the hype around AI. Recent research by Oliver Wyman, another consultancy, concludes that adoption of AI has not necessarily translated into higher levels of productivityyet.

That is unsurprising. Most firms do not currently use ChatGPT, Googles Gemini, Microsofts Copilot or other such tools in a systematic way, even if individual employees play around with them. A fortnightly survey by Americas Census Bureau asks tens of thousands of businesses whether they use some form of AI. This includes the newfangled generative sort and the older type that companies were using before 2023 for everything from improving online search results to forecasting inventory needs. In February only about 5% of American firms of all sizes said they used AI. A further 7% of firms plan to adopt it within six months (see chart). And the numbers conceal large differences between sectors: 17% of firms in the information industry, which includes technology and media, say they use it to make products, compared with 3% of manufacturers and 5% of health-care companies.

When the Census Bureau began asking about AI in September 2023, small firms were likelier to use the technology than big ones, perhaps because less form-ticking made adoption easier for minnows. Today AI is most prevalent in big companies (with more than 250 employees), which can afford to enlist dedicated AI teams and to pay for necessary investments.A poll of large firms by Morgan Stanley, a bank, found that between the start and end of 2023 the share with pilot AI projects rose from 9% to 23%.

Some corporate giants are frantically experimenting to see what works and what doesnt. They are hiring AI experts by the thousand, suggest data from Indeed, a job-search platform (see chart). Last year Jamie Dimon, boss of JPMorgan Chase, said that the bank already had more than 300 AI use cases in production today. Capgemini, a consultancy, says it will utilise Google Clouds generative AI to develop a rich library of more than 500 industry use cases. Bayer, a big German chemicals company, claims to have more than 700 use cases for generative AI.

This use-case sprawl, as one consultant calls it, can be divided into three big categories: window-dressing, tools for workers with low to middling skills, and those for a firms most valuable employees. Of these, window-dressing is by far the most common. Many firms are rebranding run-of-the-mill digitisation efforts as gen AI programmes to sound more sophisticated, says Kristina McElheran of the University of Toronto. Presto, a purveyor of restaurant tech, introduced a gen-AI assistant to take orders at drive-throughs. But fully 70% of such orders require a human to help. Spotify, a music-streaming firm, has rolled out an AI disc-jockey which selects songs and provides inane banter. Recently Instacart, a grocery-delivery company, removed a tool that generated photos of vendors food, after the AI showed customers unappetising pictures. Big tech firms, too, are incorporating their own AI breakthroughs into their consumer-facing offerings. Amazon is launching Rufus, an AI-powered shopping assistant that no shopper really asked for. Google has added AI to Maps, making the product more immersive, whatever that means.

Tools for lower-skilled workers could be more immediately useful. Some simple applications for things like customer service involve off-the-shelf AI. Most customers questions are simple and concern a small number of topics, making it easy for companies to train chatbots to deal with them. A few of these initiatives may already be paying off. Amdocs produces software to help telecoms companies manage their billing and customer services. The use of generative AI, the company says, has reduced the handling time of customers calls by almost 50%. Sprinklr, which offers similar products, says that recently one of its luxury-goods clients has seen a 25% improvement in customer-service scores.

Routine administrative tasks likewise look ripe for AI disruption. The top examples of Bayers 700 use cases include mundane jobs such as easily getting data from Excel files and creating a first draft in Word. Some companies are using generative AI as cleverer search. At Nasdaq, a financial-services firm, it helps financial-crime sleuths gather evidence to assess suspicious bank transactions. According to the company, this cuts a process which can take 30-60 minutes to three minutes.

Giving AI tools to a firms most valuable workers, whose needs are complex, is less widespread so far. But it, too, is increasingly visible. Lawyers have been among the earliest adopters. Allen & Overy, a big law firm, teamed up with Harvey, an AI startup, to develop a system that its lawyers use to help with everything from due diligence to contract analysis. Investment banks are using AI to automate part of their research process. At Bank of New York Mellon an AI system processes data for the banks analysts overnight and gives them a rough draft to work with in the morning. So rather than getting up at four in the morning to write research, they get up at six, the bank says. Small mercies. Sanofi, a French drugmaker, uses an AI app to provide executives with real-time information about many aspects of the companys operations.

Some companies are using the technology to build software. Microsofts GitHub Copilot, an AI coding-writing tool, has 1.3m subscribers. Amazon and Google have rival products. Apple is reportedly working on one. Fortive, a technology conglomerate, says that its operating companies are seeing a greater-than-20% acceleration in software-development time through the use of gen AI. Chirantan Desai, chief operating officer of ServiceNow, a business-software company, has said that GitHub Copilot produces single-digit productivity gains for his firms developers. With the help of AI tools, Konnectify, an Indian startup, went from releasing four apps per month to seven.Surveys from Microsoft suggest that few people who start using Copilot want to give it up.

Pinterest, a social-media company, says it has improved the relevance of users search results by ten percentage points thanks to generative AI. On a recent earnings call its boss, Bill Ready, said that new models were 100 times bigger than the ones his firm used before. LOral, one of the worlds largest cosmetics firms, has caught the eye of investors as it improves BetIQ, an internal tool to measure and improve the companys advertising and promotion. LOral claims that generative AI is already generating productivity increases of up to 10-15% for some of our brands that have deployed it.

This does not mean that those brands will need 10-15% fewer workers. As with earlier technological revolutions, fears of an AI jobs apocalypse look misplaced. So far the technology appears to be creating more jobs than it eliminates. A survey published in November by Evercore ISI, a bank, found that just 12% of corporations believed that generative AI had replaced human labour or would replace it within 12 months. Although some tech firms claim to be freezing hiring or cutting staff because of AI, there is little evidence of rising lay-offs across the rich world.

Generative AI is also generating new types of white-collar work. Companies including Nestl, a coffee-to-cat-food conglomerate, and KPMG, a consultancy, are hiring prompt engineers expert at eliciting useful responses from AI chatbots. One insurance firm employs explainability engineers to help understand the outputs of AI systems. A consumer-goods firm that recently introduced generative AI in its sales team now has a sales-bot manager to keep an eye on the machines.

Though such developments will not translate into overall productivity statistics for a while, they are already affecting what white-collar workers do. Some effects are clearly good. AI lets firms digitise and systematise internal data, from performance reviews to meeting records, that had previously remained scattered. Respondents to surveys conducted by Randy Bean, a consultant, reported big improvements in establishing an internal data and analytics culture, which plenty of businesses find stubbornly difficult to nurture.

AI adoption may also have certain unpredictable consequences. Although AI code-writing tools are helping software engineers do their jobs, a report for GitClear, a software firm, found that in the past year or so the quality of such work has declined. Programmers may be using AI to produce a first draft only to discover that it is full of bugs or lacking concision. As a result, they could be spending less time writing code, but more time reviewing and editing it. If other companies experience something similar, the quantity of output in the modern workplace may go upas AI churns out more emails and memoseven as that output becomes less useful for getting stuff done.

Polling by IBM, a tech firm, suggests that many companies are cagey about adopting AI because they lack internal expertise on the subject. Others worry that their data is too siloed and complex to be brought together. About a quarter of American bosses ban the use of generative AI at work entirely. One possible reason for their hesitance is worry about their companies data. In their annual reports Blackstone, a private-equity giant, and Eli Lilly, a pharmaceutical one, have warned investors about AI-related risks such as possible leakage of intellectual property to AI model-makers. Last year Marie-Hlne Briens Ware, an executive at Orange, a telecoms company, explained that the firm had put data guardrails in place before commencing a trial with Microsofts Copilot.

Ultimately, for more businesses to see it as an open-and-shut case, generative AI still needs to improve. In November Microsoft launched a Copilot for its productivity software, such as Word and Excel. Some early users find it surprisingly clunky and prone to crashingnot to mention cumbersome, even for people already adept at Office. Many bosses remain leery of using generative AI for more sensitive operations until the models stop making things up. Recently Air Canada found itself in hot water after its AI chatbot gave a passenger incorrect information about the airlines refund policy. That was embarrassing for the carrier, but it is easy to imagine something much worse. Still, even the typewriter had to start somewhere.

To stay on top of the biggest stories in business and technology, sign up tothe Bottom Line, our weekly subscriber-only newsletter.

Continue reading here:

How businesses are actually using generative AI - The Economist

AI boom makes Nvidia third US stock to close above $2tn valuation – Financial Times

Unlock the Editors Digest for free

Roula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.

Nvidias market value closed on Friday above $2tn for the first time, with enthusiasm about the prospects of artificial intelligence fuelling an eighth straight week of gains for the chipmakers shares.

Apple, Microsoft and Google-parent Alphabet are the other US-listed companies to have reached intraday market values of $2tn, but only the former two have reached the end of a trading day with valuations above that threshold.

Nvidia shares rose 4 per cent on Friday, giving it a valuation of about $2.05tn. Its share price has now climbed 66 per cent since the start of 2024, or about $830bn in dollar terms. That followed a more than 230 per cent increase in 2023, as the company repeatedly blasted through analyst and investor forecasts.

In its most recent financial update last month, Nvidia reported a 265 per cent year-on-year increase in revenues, and chief executive Jensen Huang declared that AI had hit the tipping point with surging demand across companies, industries and nations.

The tech group added $277bn in market capitalisation on the day after the results, a record for a US-listed company.

Nvidia has an almost monopoly position, said Tim Murray, multi-asset strategist at T Rowe Price because the chips they make are the most essential tools to [AI].

Nvidias latest earnings report, coupled with broader enthusiasm about the potential of AI technology, have helped to fuel a wider rally across global stock markets with Wall Streets S&P 500 hitting multiple new records and the tech-heavy Nasdaq Composite surpassing levels seen in 2021 to hit a peak on Friday.

The chipmaker has single-handedly driven more than a quarter of the year to date gains in the S&P 500, directly lifting the index by 96 points even before considering the broader effect it has had on investor sentiment.

Nvidias earnings were always going to be this barometer of whats the demand for AI chips, said Murray.

This years dramatic ascent of Nvidias shares and those of other tech stocks riding the wave of AI enthusiasm has sparked debate over whether the AI boom may be approaching bubble territory.

Were in a period where with AI theres a lot of excitement and weve probably got some time before we really have to see it proven, said Murray. Theres going to be a period eventually where the companies that are spending on AI need to realise some return on investment.

Youve certainly got some time before theres this moment of truth for the AI craze, he added.

Zehrid Osmani, a portfolio manager at Martin Currie with a large investment in Nvidia, said many stocks had been rallying based only on the hope that AI enthusiasm will lead to future earnings, but Nvidias strength in graphics processing units made it one of the stocks that is genuinely monetising.

Yes, in due course there could be more competition, but if you look at the scale of their [research and development] spending...we believe they should be able to keep their technological edge, he said.

For Kristina Hooper, global chief markets strategist at Invesco, Nvidia has captured imagination while providing some real underpinning to those imaginations and that excitement.

Recommended

The late 1990s was a very similar time period for the stock market, Hooper added, in that there was a lot of excitement over technology. However, there wasnt that fundamental underpinning there werent real earnings, there werent solid cash flows.

It was really very much excitement...Sizzle without steak, she said.

This time around, theres sizzle but theres also steak.

Read more:

AI boom makes Nvidia third US stock to close above $2tn valuation - Financial Times