
EU’s AI Act negotiations hit the brakes over foundation models – EURACTIV

A technical meeting on the EU's AI regulation broke down on Friday (10 November) after large EU countries asked to retract the proposed approach for foundation models. Unless the deadlock is broken in the coming days, the whole legislation is at risk.

The AI Act is a landmark bill to regulate Artificial Intelligence following a risk-based approach. The file is currently in the last phase of the legislative process, with the main EU institutions gathered in so-called trilogues to hash out the final provisions of the law.

Foundation models have become the sticking point in this late phase of the negotiations. With the rise of ChatGPT, a popular chatbot based on OpenAI's powerful GPT-4 model, EU policymakers have been wondering how best to cover this type of AI in the upcoming law.

At the last political trilogue on 24 October, there seemed to be a consensus to introduce rules for foundation models following a tiered approach, namely, introducing tighter rules for the most powerful ones bound to have more impact on society.

This approach, which goes along similar lines to the Digital Markets Act (DMA) and Digital Services Act (DSA), was seen as a concession from the side of the European Parliament, which would have preferred horizontal rules for all foundation models.

The point of the tiered approach was to put the harshest obligations on the leading providers that currently are non-European companies. However, this approach has faced mounting opposition from large European countries.

On Sunday, the Spanish presidency circulated a first draft that put the tiered approach in black and white for internal feedback. The European Parliament's co-rapporteurs replied with some modifications on Wednesday, maintaining the overall structure of the provisions.

However, at a meeting on Thursday of the Telecom Working Party, a technical body of the EU Council of Ministers, representatives from several member states, most notably France, Germany and Italy, pushed back against any type of regulation for foundation models.

Leading the charge against any regulation for foundation models in the AI rulebook is Mistral, a French AI start-up that has thrown down the gauntlet to Big Tech. Cédric O, France's former state secretary for digital, is spearheading Mistral's lobbying efforts, arguing that the AI Act could kill the company.

Meanwhile, Germany is being pressured by its own leading AI company, Aleph Alpha, which Euractiv understands has very high-level connections with the German establishment. All these companies fear the EU regulation might put them on the back foot compared to US and Chinese competitors.

The Spanish presidency had been working to broker an agreement with the European Parliament, but faced with these strong stances from political heavyweights, it proposed a general rethinking of the provisions on foundation models.

During an hour and a half of questioning about the reasons for such a change of direction, the arguments advanced included that the tiered approach would have amounted to "a regulation within the regulation" and that it could jeopardise innovation and the risk-based approach.

The European Commission originally proposed the tiered approach, which would have put the EU executive in the driving seat for enforcement on foundation models. However, the Commission did not defend it during the technical meeting.

The European Parliament's representatives ended the meeting two hours early because there was nothing left to discuss. Euractiv understands that regulating foundation models is a red line for the parliamentarians, without which an agreement cannot be reached.

"The ball is now in the Council's court to come up with a proposal," a parliamentary official told Euractiv on condition of anonymity, stressing that the presidency did not have an alternative solution to the tiered approach.

A second EU official also told Euractiv anonymously that the presidency is trying to convince reluctant member states, which oppose regulating systemic actors at the model level but not at the system level.

At the same time, Euractiv understands that a growing faction inside the most reluctant member states is opposing the AI Act as a whole, considering it overregulation. Indeed, if no solution is found soon, the entire law might be at risk.

The EU policymakers were expected to close a political agreement at the next trilogue on 6 December, which means that landing zones for the most critical parts should be more or less in sight by the end of November.

If no agreement is reached in December, the outgoing Spanish presidency would have no incentive to continue the work at the technical level, and the upcoming Belgian presidency would have only a few weeks to tie up the loose ends of such a complex file before the European Parliament is dissolved for the EU elections next June.

Moreover, a general rethinking of the approach to foundation models would also require a deep revision of the regulation's governance architecture and of the provisions on responsibilities along the AI value chain, for which there might simply not be enough time.

When the AI Act was proposed in April 2021, the EU had a first-mover advantage in setting the international standard for regulating Artificial Intelligence. As the hype around AI has grown, policymakers in the US, UK and China have become increasingly active.

Failing to agree on the EU's AI rulebook under this mandate would not only mean losing momentum, but it would also result in Brussels losing ground to other jurisdictions.

The Telecom Working Party is due to meet again next Tuesday. Another technical meeting is scheduled among the EU co-legislators on the same day. Euractiv understands negotiations have now been escalated to the highest political level to break the deadlock.

"The AI Act is on the line now," a third EU official told Euractiv. "It's now or never."

[Edited by Nathalie Weatherald]


The World Is Running Out of Data to Feed AI, Experts Warn – ScienceAlert

As artificial intelligence (AI) reaches the peak of its popularity, researchers have warned the industry might be running out of training data, the fuel that runs powerful AI systems.

This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much of it there is on the web? And is there a way to address the risk?

We need a lot of data to train powerful, accurate and high-quality AI algorithms. For instance, ChatGPT was trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion algorithm (which is behind many AI image-generating apps such as DALL-E, Lensa and Midjourney) was trained on the LAION-5B dataset, comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important. Low-quality data such as social media posts or blurry photographs is easy to source, but isn't sufficient to train high-performing AI models.

Text taken from social media platforms might be biased or prejudiced, or may include disinformation or illegal content which could be replicated by the model. For example, when Microsoft tried to train its AI bot using Twitter content, it learned to produce racist and misogynistic outputs.

This is why AI developers seek out high-quality content such as text from books, online articles, scientific papers, Wikipedia, and certain filtered web content. The Google Assistant was trained on 11,000 romance novels taken from self-publishing site Smashwords to make it more conversational.

The AI industry has been training AI systems on ever-larger datasets, which is why we now have high-performing models such as ChatGPT or DALL-E 3. At the same time, research shows online data stocks are growing much slower than datasets used to train AI.

In a paper published last year, a group of researchers predicted we will run out of high-quality text data before 2026 if the current AI training trends continue. They also estimated low-quality language data will be exhausted sometime between 2030 and 2050, and low-quality image data between 2030 and 2060.

AI could contribute up to US$15.7 trillion (A$24.1 trillion) to the world economy by 2030, according to accounting and consulting group PwC. But running out of usable data could slow down its development.

While the above points might alarm some AI fans, the situation may not be as bad as it seems. There are many unknowns about how AI models will develop in the future, as well as a few ways to address the risk of data shortages.

One opportunity is for AI developers to improve algorithms so they use the data they already have more efficiently.

It's likely in the coming years they will be able to train high-performing AI systems using less data, and possibly less computational power. This would also help reduce AI's carbon footprint.

Another option is to use AI to create synthetic data to train systems. In other words, developers can simply generate the data they need, curated to suit their particular AI model.

Several projects are already using synthetic content, often sourced from data-generating services such as Mostly AI. This will become more common in the future.
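One simple way to picture synthetic data generation is as programmatically producing labelled examples rather than collecting them. The sketch below is a minimal, purely illustrative example using hand-written templates; real services like those mentioned above typically rely on generative models rather than templates, and all names and categories here are assumptions.

```python
import json
import random

# Illustrative only: expand a tiny set of templates and slot values into
# synthetic labelled training examples for a toy sentiment task.
TEMPLATES = [
    "I was {adj} with the {product}.",
    "The {product} turned out to be {adj}.",
]
ADJECTIVES = {
    "positive": ["great", "excellent", "really impressive"],
    "negative": ["disappointing", "faulty", "a letdown"],
}
PRODUCTS = ["laptop", "headphones", "camera", "keyboard"]

def generate(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        label = rng.choice(list(ADJECTIVES))
        text = rng.choice(TEMPLATES).format(
            adj=rng.choice(ADJECTIVES[label]),
            product=rng.choice(PRODUCTS),
        )
        examples.append({"text": text, "label": label})
    return examples

if __name__ == "__main__":
    with open("synthetic_reviews.jsonl", "w") as f:
        for example in generate(1000):
            f.write(json.dumps(example) + "\n")
```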

Developers are also searching for content outside the free online space, such as that held by large publishers and offline repositories. Think about the millions of texts published before the internet. Made available digitally, they could provide a new source of data for AI projects.

News Corp, one of the world's largest news content owners (which has much of its content behind a paywall) recently said it was negotiating content deals with AI developers. Such deals would force AI companies to pay for training data whereas they have mostly scraped it off the internet for free so far.

Content creators have protested against the unauthorised use of their content to train AI models, with some suing companies such as Microsoft, OpenAI and Stability AI. Being remunerated for their work may help restore some of the power imbalance that exists between creatives and AI companies.

Rita Matulionyte, Senior Lecturer in Law, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Here’s How Violent Extremists Are Exploiting Generative AI Tools – WIRED

"We're going to partner with Microsoft to figure out if there are ways of using our archive of material to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale," Hadley says. "We're confident that gen AI can be used to defend against hostile uses of gen AI."

The partnership was announced today, on the eve of the Christchurch Call Leaders Summit, a movement designed to eradicate terrorism and extremist content from the internet, to be held in Paris.

"The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences," Brad Smith, vice chair and president at Microsoft, said in a statement. "By combining Tech Against Terrorism's capabilities with AI, we hope to help create a safer world, both online and off."

While companies like Microsoft, Google, and Facebook all have their own AI research divisions and are likely already deploying their own resources to combat this issue, the new initiative will ultimately aid those companies that can't combat these efforts on their own.

"This will be particularly important for smaller platforms that don't have their own AI research centers," Hadley says. "Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content."

The threat of AI generative content is not limited to extremist groups. Last month, the Internet Watch Foundation, a UK-based nonprofit that works to eradicate child exploitation content from the internet, published a report that detailed the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web.

The researchers found over 20,000 AI-generated images posted to one dark web CSAM forum over the course of just one month, with 11,108 of these images judged most likely to be criminal by the IWF researchers. As the IWF researchers wrote in their report, "These AI images can be so convincing that they are indistinguishable from real images."


New international consortium formed to create trustworthy and … – Argonne National Laboratory

A global consortium of scientists from federal laboratories, research institutes, academia, and industry has formed to address the challenges of building large-scale artificial intelligence (AI) systems and advancing trustworthy and reliable AI for scientific discovery.

The Trillion Parameter Consortium (TPC) brings together teams of researchers engaged in creating large-scale generative AI models to address key challenges in advancing AI for science. These challenges include developing scalable model architectures and training strategies; organizing and curating scientific data for training models; optimizing AI libraries for current and future exascale computing platforms; and developing deep evaluation platforms to assess progress on scientific task learning, reliability and trust.

Toward these ends, TPC will:

The consortium has formed a dynamic set of foundational work areas addressing three facets of the complexities of building large-scale AI models:

TPC aims to provide the community with a venue in which multiple large model-building initiatives can collaborate to leverage global efforts, with flexibility to accommodate the diverse goals of individual initiatives. TPC includes teams that are undertaking initiatives to leverage emerging exascale computing platforms to train LLMs or alternative model architectures on scientific research, including papers, scientific codes, and observational and experimental data, to advance innovation and discoveries.

Trillion-parameter models represent the frontier of large-scale AI, with only the largest commercial AI systems currently approaching this scale.

Training LLMs with this many parameters requires exascale-class computing resources, such as those being deployed at several U.S. Department of Energy (DOE) national laboratories and multiple TPC founding partners in Japan, Europe, and elsewhere. Even with such resources, training a state-of-the-art one-trillion-parameter model will require months of dedicated time, intractable on all but the largest systems. Consequently, such efforts will involve large, multi-disciplinary, multi-institutional teams. TPC is envisioned as a vehicle to support collaboration and cooperative efforts among and within such teams.
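To see why the scale is so demanding, here is a rough back-of-envelope estimate (an illustration, not a figure from the consortium) of the memory needed just to hold a one-trillion-parameter model's state during mixed-precision training:

```python
# Illustrative estimate only: per-parameter training state for mixed-precision Adam.
params = 1_000_000_000_000            # one trillion parameters
bytes_per_param = 2 + 2 + 4 + 4 + 4   # fp16 weights + fp16 grads + fp32 master copy + Adam m and v
total_tb = params * bytes_per_param / 1e12
print(f"~{total_tb:.0f} TB of state before activations or training data")  # ~16 TB
```

That much state has to be sharded across thousands of accelerators, which is part of why only exascale-class systems are practical for such runs.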

"At our laboratory and at a growing number of partner institutions around the world, teams are beginning to develop frontier AI models for scientific use and are preparing enormous collections of previously untapped scientific data for training," said Rick Stevens, associate laboratory director of computing, environment and life sciences at DOE's Argonne National Laboratory and professor of computer science at the University of Chicago. "We collaboratively created TPC to accelerate these initiatives and to rapidly create the knowledge and tools necessary for creating AI models with the ability to not only answer domain-specific questions but to synthesize knowledge across scientific disciplines."

The founding partners of TPC are from the following organizations (listed in organizational alphabetical order, with a point-of-contact):

TPC contact: Charlie Catlett

Learn more at tpc.dev.


VMware, Intel to help businesses build and run AI models on … – CRN Australia

VMware and Intel said they are collaborating to help businesses adopt privacy-minded AI solutions faster by eliminating the guesswork to make the solutions run well on existing infrastructure.

At this week's VMware Explore 2023 event in Barcelona, the virtualisation giant said it has teamed with Intel to develop a validated reference architecture called VMware Private AI with Intel, which consists of VMware Cloud Foundation and its AI computing features as well as Intel's Xeon CPUs, Max Series GPUs and AI software suite.

The reference architecture is set for release by next month, and it will be supported by servers from Dell Technologies, Hewlett Packard Enterprise and Lenovo running fourth-generation Intel Xeon CPUs and Intel Max Series GPUs.

Chris Wolf, vice president of VMware AI Labs, said in a statement to CRN that the reference architecture will create new opportunities for VMware and Intels joint partners.

"Our broad and growing ecosystem of AI apps and services, MLOps tools, AI hardware and data services is creating considerable optionality by which our joint partners can customise and differentiate," he said.

The reference architecture is an alternative to the VMware Private AI Foundation with Nvidia platform, which was unveiled in August and enables businesses to develop and run AI models on Dell, HPE and Lenovo servers powered by Nvidia GPUs, DPUs and SmartNICs.

Intel is keen to challenge Nvidia's dominant position in the AI computing space with not just GPUs but also CPUs with AI acceleration capabilities such as Advanced Matrix Extensions.

Tuning its hardware and software to run AI workloads well on VMware's multi-cloud platform is an important step in giving the semiconductor giant a better fighting chance as it ramps up competition in silicon.

"With the potential of artificial intelligence to unlock powerful new possibilities and improve the life of every person on the planet, Intel and VMware are well equipped to lead enterprises into this new era of AI, powered by silicon and software," said Sandra Rivera, the outgoing executive vice president and general manager of Intel's Data Centre and AI Group, in a statement.

Enabling AI work with emphasis on privacy and compliance

The main purpose of VMware Private AI with Intel is to enable the virtualisation giant's customers to use existing Intel-based infrastructure and open-source software to simplify building and deploying AI models, with an emphasis on practical privacy and compliance needs, according to VMware.

This applies to infrastructure wherever enterprise data is being created, processed and consumed, whether in a public cloud, enterprise data centre or at the edge, the company said.

By tapping into existing infrastructure, businesses can reduce total cost of ownership and address concerns of environmental sustainability, it added.

"When it comes to AI, there is no longer any reason to debate trade-offs in choice, privacy and control. Private AI empowers customers with all three, enabling them to accelerate AI adoption while future-proofing their AI infrastructure," Wolf said.

The AI computing reference architecture covers the crucial steps of building and running AI models, from data preparation and model training to fine-tuning and inferencing.

The use cases are wide open, from accelerating scientific discovery to enriching business and consumer services.

"VMware Private AI with Intel will help our mutual customers dramatically increase worker productivity, ignite transformation across major business functions and drive economic impact," Wolf added.

Intel's AI software suite consists of end-to-end open-source software and optional licensing components to enable developers to run full AI pipeline workflows, according to VMware.

This includes Intel's oneAPI framework, which lets developers write code once and target multiple types of processors, as well as Intel's Transformer Extensions and PyTorch Extensions.
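For a sense of what using the PyTorch extension looks like in practice, here is a minimal sketch; it is not taken from the reference architecture, the model and settings are illustrative assumptions, and it assumes the open-source intel-extension-for-pytorch package is installed.

```python
# Illustrative sketch: optimizing a PyTorch model for Xeon CPU inference with
# Intel Extension for PyTorch (IPEX). Model and shapes are placeholder assumptions.
import torch
from torch import nn
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# ipex.optimize applies operator fusion and enables lower-precision execution
# (bfloat16) on supported Xeon processors.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(torch.randn(8, 512))
print(output.shape)
```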

VMware Cloud Foundation provides complementary features for building and running AI models, such as vSAN Express Storage Architecture for accelerating capabilities such as encryption, vSphere Distributed Resource Scheduler for maximising hardware utilisation for AI models and training, and VMware NSX for micro-segmentation and threat protection capabilities.

The multi-cloud platform also comes with secure boot and Virtual Trusted Platform Module features for enabling model and data confidentiality.


Moderna Highlights its Digital and AI Strategy and Progress at … – Moderna Investor Relations

Moderna Highlights its Digital and AI Strategy and Progress at Second Digital Investor Event

The Company demonstrates how its integrated Artificial Intelligence ecosystem accelerates innovation at scale and creates value across the enterprise

Moderna to present case studies on how the organization is building a real-time AI Company

CAMBRIDGE, MA / ACCESSWIRE / November 8, 2023 / Moderna, Inc. (NASDAQ:MRNA) today will unveil its comprehensive AI and digital strategy at its second Digital Investor Event. Building off its Manufacturing and Digital Day hosted in March 2020, the Company will showcase how AI continues to transform the organization and enhance its value creation. Today's presentation will highlight Moderna's leading position in AI-powered innovation, its ability to harness the power of AI to improve efficiency and scalability across the value chain, and its development of an AI-centric culture.

Since its founding, Moderna has been a digital-first company. Building on its strong foundation of more than a decade of data in developing mRNA medicines, combined with its unique platform approach and cloud-native infrastructure, the Company is well-positioned to continue to scale using AI.

"Just as the personal computer changed the way we work and live, AI will completely transform our everyday lives. At Moderna, we are leading the charge of this AI revolution in medicine. It is as much about technology as it is about people and ensuring they have the right skills," said Stphane Bancel, Chief Executive Officer of Moderna. "We were built on the premise that the natural flow of information in life, mRNA, can be used to develop transformative medicines, and by embedding AI into every aspect of how we work, we are accelerating our mission to deliver the greatest possible impact to people through mRNA medicines."

Moderna has already leveraged the impact of AI to increase its speed to market as well as advance the continuous improvement and quality of its products. AI helps optimize each aspect of Moderna's value chain - from drug design to commercial manufacturing. The Company will present a case study on how mRNA-4157 (V940), its individualized neoantigen therapy (INT), leverages a series of fully autonomous, integrated AI algorithms. These proprietary algorithms design the specific therapy for each individual patient. AI algorithms are also used to optimize the timely manufacture and delivery of INT to the patient. Moderna will present a detailed overview of the Company's AI-optimized manufacturing scheduling system, ensuring the timely administration of INT for each patient.

Moderna maintains an AI-centric culture through both educational opportunities for employees and easy-to-implement AI-powered tools. The Company's AI Academy also offers a unique and immersive learning experience to encourage employees to become proficient AI users and enthusiasts. After only two weeks of development, Moderna launched its own generative AI product, mChat, in May 2023. As of last month, nearly 65% of employees are active users, embedding the tool into their specific functions for customized support and meaningful improvements in workflow efficiency and efficacy.

"While 90% of tech executives believe AI is the center of the next tech revolution, only 10% of AI projects make it into production. "We're committed to not only changing this narrative, but to leading by example," said Brad Miller, Moderna's Chief Information Officer. "We know that successful AI implementation means putting our employees at its center, requiring an intentional cultural transformation and a mindset shift around how each employee approaches their work. Rather than adding complexity and viscosity as we grow as a company, we embrace and democratize AI so that every employee can create value measured by efficiency and efficacy."

Webcast Information

Moderna will host a webcast at 8:00 am ET on November 8, 2023. A webcast of the event will be available under "Events and Presentations" in the Investors section of the Moderna website.

Webcast: https://investors.modernatx.com

The archived webcast will be available on Moderna's website and will be available for one year following the call.

About Moderna

In over 10 years since its inception, Moderna has transformed from a research-stage company advancing programs in the field of messenger RNA (mRNA), to an enterprise with a diverse clinical portfolio of vaccines and therapeutics across seven modalities, a broad intellectual property portfolio and integrated manufacturing facilities that allow for rapid clinical and commercial production at scale. Moderna maintains alliances with a broad range of domestic and overseas government and commercial collaborators, which has allowed for the pursuit of both groundbreaking science and rapid scaling of manufacturing. Most recently, Moderna's capabilities have come together to allow the authorized use and approval of one of the earliest and most effective vaccines against the COVID-19 pandemic.

Moderna's mRNA platform builds on continuous advances in basic and applied mRNA science, delivery technology and manufacturing, and has allowed the development of therapeutics and vaccines for infectious diseases, immuno-oncology, rare diseases, cardiovascular diseases and auto-immune diseases. Moderna has been named a top biopharmaceutical employer by Science for the past nine years. To learn more, visit http://www.modernatx.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, as amended, including statements regarding: how AI is driving the next technological revolution and will transform our everyday lives; Moderna's investments in digital and AI; and Moderna's ability to harness the power of AI to accelerate innovation and improve efficiency and scalability across the value chain. The forward-looking statements in this press release are neither promises nor guarantees, and you should not place undue reliance on these forward-looking statements because they involve known and unknown risks, uncertainties, and other factors, many of which are beyond Moderna's control and which could cause actual results to differ materially from those expressed or implied by these forward-looking statements. These risks, uncertainties, and other factors include, among others, those risks and uncertainties described under the heading "Risk Factors" in Moderna's Annual Report on Form 10-K for the fiscal year ended December 31, 2022, filed with the U.S. Securities and Exchange Commission (SEC), and in subsequent filings made by Moderna with the SEC, which are available on the SEC's website at http://www.sec.gov. Except as required by law, Moderna disclaims any intention or responsibility for updating or revising any forward-looking statements contained in this press release in the event of new information, future developments or otherwise. These forward-looking statements are based on Moderna's current expectations and speak only as of the date of this press release.

Moderna Contacts

Media: Kelly Cunningham, Associate Director, Communications & Media, 617-899-7321, Kelly.Cunningham@modernatx.com

Investors: Lavina Talukdar, Senior Vice President & Head of Investor Relations, 617-209-5834, Lavina.Talukdar@modernatx.com

SOURCE: Moderna, Inc.

View source version on accesswire.com: https://www.accesswire.com/801019/moderna-highlights-its-digital-and-ai-strategy-and-progress-at-second-digital-investor-event


Startups to access high-performance Azure infrastructure … – Microsoft

Today Microsoft is updating its startup program to include a free Azure AI infrastructure option for high-end GPU virtual machine clusters, for use in training and running large language models and other deep learning models.

Y Combinator (YC) and its community of startup innovators will be the first to access this offering in private preview to a limited cohort. YC has an unmatched reputation as a pioneering startup accelerator helping launch transformative companies including Airbnb, Coinbase and Stripe. Now YC startups will have the technical resources they need to quickly prototype and bring to market cutting-edge AI innovations. Our close collaboration with YC provides valuable insights into the infrastructure needs of early-stage AI companies, ensuring our offering delivers optimal value to additional startups going forward.

"With the overwhelming infrastructure requirements needed to do AI at scale, we believe that providing startups with high-performance capabilities tailored for demanding AI workloads will empower our startups to ship faster," said Michael Seibel, Managing Director of Y Combinator.

We are also working with M12, Microsoft's venture fund, and the startups in M12's portfolio, which will gain access to these dedicated supercomputing resources to further empower their AI innovations. Over time, our vision is to partner with additional startup investors and accelerators, with a goal of working with the ecosystem to lower the barrier to training and running AI models for any promising startup.

Microsoft Azure offers scalable, cloud-based AI infrastructure, built for and with the world's most sophisticated AI workloads, from delivering the largest and most complex AI models, including GPT-4 and ChatGPT, through Azure OpenAI Service, to letting developers infuse AI capabilities into many apps. Azure AI infrastructure is fueling groundbreaking innovations. The infrastructure requirements to do AI at scale are often overwhelming, but with Azure's global infrastructure of AI-accelerated server offerings with networked graphics processing units (GPUs), startups building advanced AI systems will be able to leverage these high-performance capabilities to accelerate innovation.

On top of world-class infrastructure, we will also provide tools to simplify deployment and management through Azure Machine Learning. This enables easy low-code or code-based training of custom models and fine-tuning of frontier and open-source models, simplified deployment, and optimizations like Low-Rank Adaptation (LoRA), DeepSpeed and ONNX Runtime (ORT). Further, startups can deploy AI solutions with peace of mind knowing all deployments are secure and backed by Microsoft's principles for Responsible AI.
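For a sense of what parameter-efficient fine-tuning with LoRA looks like, here is a minimal sketch using the open-source Hugging Face peft library; it is not Microsoft's code, and the base model and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: wrapping an open-source causal language model with
# Low-Rank Adaptation (LoRA) adapters so only a small set of weights is trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # stand-in for a larger open-source model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the updates
    target_modules=["c_attn"],  # attention projection layers in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```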

Empowering startups to build transformative solutions powered by AI

AI is transforming industries and startups are leading that innovation, creating new business and societal value quicker than many thought possible. According to a recent KPMG survey, the near-term demand is real, with 75% of U.S. CEOs stating that generative AI is a top investment priority, 83% anticipating an increase in generative AI investment by more than 50% in the next year, and 45% saying investment will at least double. For startups, this represents a once-in-a-generation opportunity to bring groundbreaking impact to a market hungry for change.

To help startups meet this opportunity, last year we introduced Microsoft for Startups Founders Hub designed to help founders speed development with free access to GitHub and the Microsoft Cloud as well as unique benefits including free access to $2,500 of OpenAI credits to experiment and up to $150,000 in Azure credits that startups can apply to Azure OpenAI Service. Startups also receive 1:1 advice from Microsoft AI experts to help guide implementation. The Microsoft Pegasus Program, an extension of Founders Hub, links enterprise customers with startup solutions for immediate deployment. Seventy-five percent of Pegasus startups have landed deals with Fortune 1000 companies via increased reach across Azure Marketplace.

Startups using Azure AI to develop cutting-edge solutions for today's problems

Whether you have a product in market or just an idea, Microsoft provides startups with the tools they need to rapidly build and scale AI solutions. Already, we are seeing the results of empowering startups to innovate with AI to improve customer support, detect and address health conditions and advance immersive gaming experiences. Here are just a few examples of the cutting-edge innovation happening now:

Commerce.AI dramatically increases call center productivity with Azure OpenAI Service

Commerce.AI uses Azure OpenAI Service and Azure AI Services to make call centers more efficient. Azure Cognitive Services uses a Commerce.AI model to transcribe interactions in real time, including into multiple languages. After the call, Azure OpenAI Service creates a summary with customer contact information, topics of conversation and embedded sentiment analysis. The system selects next steps and follow-up action items from pre-generated options, and the customer service agent exports the information to Microsoft Dynamics 365 in one quick step.

Inworld: The next-generation AI character engine for immersive gaming

Inworld, a Silicon Valley startup, is a fully integrated character engine that goes beyond language models to give users complete control over AI non-player characters (NPCs). With Inworld, users can customize their characters' knowledge, memory, personality, emotions and narrative role. Inworld uses Azure AI technologies like Azure OpenAI Service to power its advanced natural language understanding and generation.

BeeKeeperAI is helping catch rare childhood conditions early

AI tooling company BeeKeeperAI enables AI algorithms to run in a private and compliant way in healthcare environments. The company is pioneering an effort to leverage confidential computing to train an algorithm for predicting a rare childhood condition using real patient health records. By encrypting both the data and the algorithm and using Microsoft Azure's confidential computing, the company has enabled the algorithm to analyze identifiable health information in a secure, sightless manner.

Calling all startup founders: Start building the future today

The AI landscape is developing at breakneck speed, and Microsoft is ready to assist startups in seizing this opportunity. If you're a startup founder evaluating partners, we invite you to join us at Microsoft for Startups Founders Hub and discover how we can accelerate your immediate success.

Tags: AI, Azure AI, Azure OpenAI Service, M12, Microsoft for Startups Founders Hub, startups


AI-Powered Waste Management System to Revolutionize Recycling – NC State College of Natural Resources News

Americans generate more than 290 million tons of municipal solid waste each year. That's all the packaging, clothing, bottles, food scraps, newspapers, batteries and other everyday items that are thrown into garbage cans.

Some of that waste is recycled, composted or burned for energy, but nearly 50% of it is sent to a landfill, where it slowly decomposes and emits greenhouse gases that account for about 25% of today's global warming.

With support from the U.S. Department of Energy, NC State researcher Lokendra Pal has partnered with the National Renewable Energy Laboratory, IBM and the Town of Cary to solve that problem.

Pal, the EJ Woody Rice Professor in the Department of Forest Biomaterials, is working with his collaborators to develop a smart waste management system for the collection, identification and characterization of organic materials in non-recyclable waste.

Non-recyclable waste includes items that are too contaminated for recycling, often because they contain organic materials such as oil, grease and dirt. The researchers want to convert these materials into renewable products, energy and fuel.

"The sustainable utilization of non-recyclable waste will empower businesses to utilize it as a renewable carbon resource and will support them in the journey toward a low-carbon economy," Pal said.

In developing the smart waste management system, Pal and his collaborators are integrating smart sensors, visual cameras and hyperspectral cameras with an automated waste sorting machine to examine non-recyclable waste items.

The visual and hyperspectral cameras will capture images of the items as they move along a conveyor belt, while the sensors will help to monitor and control the waste management process.

Most objects absorb and reflect light. Digital cameras can only visualize three color bands of light: red, green and blue. Hyperspectral cameras, however, can visualize many more bands from across the electromagnetic spectrum, resulting in images that showcase chemical characteristics that would otherwise be invisible.
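To make the difference concrete, the short sketch below contrasts the shape of an ordinary RGB image with a hyperspectral cube; the band count and pixel coordinates are illustrative assumptions, not details of the project's cameras.

```python
# Illustrative only: an RGB photo has 3 bands per pixel, while a hyperspectral
# image stores a full spectrum (often hundreds of bands) per pixel.
import numpy as np

rgb = np.zeros((480, 640, 3))      # height x width x {red, green, blue}
hyper = np.zeros((480, 640, 224))  # height x width x 224 narrow spectral bands

# The per-pixel spectrum acts like a chemical fingerprint: contaminants such as
# oil or grease reflect differently in specific bands than clean fiber does.
pixel_spectrum = hyper[100, 200, :]
print(rgb.shape, hyper.shape, pixel_spectrum.shape)
```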

"By combining hyperspectral imaging with visual cameras and smart sensors, we can collect data in real time to improve the process of characterizing and separating waste so that it doesn't end up in landfills," Pal said.

Pal and his collaborators are also analyzing non-recyclable waste items to determine their physical, chemical, thermal and biological properties, including moisture, density, particle size and distribution, surface area, crystallinity, calorific value and more. This information will help the system to further differentiate items as they're scanned.

The researchers plan to upload this metadata, along with the images and descriptions of the items, to a cloud database to train and test machine learning models that can be integrated with the system's cameras to improve the recognition and classification of non-recyclable waste.

A machine learning model is a type of artificial intelligence that analyzes data to identify patterns, make decisions and improve itself. In the case of Pal's research, the models will analyze the uploaded images and descriptions of non-recyclable waste, together with the information about its physical, chemical and biological properties, to determine contaminants, energy density and organic content.
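As a rough illustration of what such an image-classification model could look like, here is a minimal transfer-learning sketch in PyTorch; it is not the project's actual pipeline, and the dataset folder, class names and hyperparameters are assumptions.

```python
# Illustrative sketch: fine-tune a pretrained ResNet-18 to sort waste images
# into whatever categories the training folders define.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects one folder per class, e.g. waste_images/food_soiled, waste_images/clean_fiber
train_set = datasets.ImageFolder("waste_images", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new output head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```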

"If successful, this project will contribute significantly to the development of commercially viable, high-performance renewable carbon resources for conversion to biofuels and value-added products," Pal said.

Pal and his collaborators are exploring the use of various processes and technologies to produce fuels such as bioethanol and aviation fuel, which can be blended and used as sustainable fuel in the transportation industry, and products such as biochar, which can be used in agriculture to enhance soil fertility and improve plant growth.

Going forward, the researchers plan to evaluate the technical feasibility and environmental performance of their proposed system at pilot-scale. They also plan to develop a web platform that will enable them to share datasets and other information with stakeholders.

"In sum, our approach supports the development of sustainable solutions for waste valorization, optimizing resource recovery, minimizing waste generation, reducing emissions, and mitigating environmental impacts while engaging municipalities and industries across the supply chain," Pal said.

A related workshop will explore the solutions, challenges and opportunities of recovering organic materials from municipal solid waste for conversion to biofuels, biopower, biochemicals, and bioproducts.


AI makes you worse at what you're good at – TechCrunch

Image Credits: Tom Werner / Getty Images

Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday.

If you've been following along with this newsletter, you'll have noticed that I've been a little bit curious about AI, especially generative AI. I'm likely not the first person to make this observation, but AIs are extremely, painfully average. I guess that's kind of the point of them: train them on all knowledge, and mediocrity will surface.

The trick is to only use AI tools for stuff that you, yourself, aren't very good at. If you're an expert artist or writer, it'll let you down. The truth, though, is that most people aren't great writers, and so ChatGPT and its brethren are going to be a massive benefit to white-collar workers everywhere. Well, until we collectively discover that a house cleaner has greater job security than an office manager or a secretary, at least.

On that cheerful note, let's sniff about in the startup bushes and see what tasty morsels we can scare up from the depths of the TechCrunch archive from the past week...

I know, this happens every damn week: I start with the intention of writing this newsletter without going up to my eyelashes into the AI morass, and every week, y'all keep reading our AI news as if your livelihood depends on it. Because, well, it's entirely possible it does, I suppose.

The GPT Store, introduced by OpenAI, enables developers to create custom GPT-based conversational AI models and sell them in a new marketplace. This initiative is designed to expand the accessibility and commercial use of AI, similar to how app stores revolutionized software distribution. Developers can not only build but also monetize their AI creations, opening up a new avenue for innovation and entrepreneurship in the field of artificial intelligence. Of course, that little update, and the platform now natively being able to read PDFs and websites, is a substantial threat to startups that had previously filled this gap in ChatGPT's offerings, especially those whose business models are based on such features. It's a reminder that building a business around another company's API without a sustainable, stand-alone product is, perhaps, not the shrewdest business move.

AI is, of course, not just for startups. During Apple's Q4 earnings call, the company's CEO, Tim Cook, emphasized AI as a fundamental technology and highlighted recent AI-driven features like Personal Voice and Live Voicemail in iOS 17. He also confirmed that Apple is continuing to develop generative AI technologies, tellingly, without revealing specifics.

Heinlein would be horrified: Elon Musk announced that Twitter's Premium Plus subscribers will soon have early access to xAI's new AI system, Grok, once it exits early beta, positioning the chatbot as a perk for the platform's $16/month ad-free service tier.

Brother, can you spare a GPU?: AWS introduced Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, a new service that enables customers to rent Nvidia GPUs for a set period, primarily for AI tasks like training or experimenting with machine learning models.

From zero to AI founder in one easy bootstrap: In "How to bootstrap an AI startup" on TC+, Michael Koch advises founders on maintaining control over their startup's strategy and product by bootstrapping, yes, even in the oft-capital-intensive world of AI startups.

WeWork, once a high-flying startup valued at $47 billion, has filed for Chapter 11 bankruptcy protection, highlighting a staggering collapse. The company, which has over $18.6 billion of debt, received agreement from about 90% of its lenders to convert $3 billion of debt into equity in an attempt to improve its balance sheet and address its costly leases. On TC+, Alex notes what we kinda knew all along: that the core business just didn't make sense.

In other venture news...

Ex-Twitter CEO raises third venture fund: 01 Advisors, the venture firm founded by former Twitter executives Dick Costolo and Adam Bain, has secured $395 million in capital commitments for its third fund, aimed at investing in Series B-stage startups focused on business software and fintech services.

Happy 10th unicornaversary: Alex reflects on the tenth anniversary of the term unicorn, which was initially coined right here on TechCrunch to describe startups valued at over $1 billion.

You get a chip! You get a chip!: In response to a shortage of AI chips, Microsoft is updating its startup support program to offer selected startups free access to advanced Azure AI supercomputing resources to develop AI models.

Look, I'm not going to lie: I think most crypto is dumb, and I've seen only a handful of startups that use blockchains in a way that makes any sense whatsoever (most of them would have done just fine with a simple database), so I've been following Jacquelyn's coverage of Bankman-Fried's trial with a not insignificant amount of schadenfreude. It's human to make mistakes, and startup founders are human, but if you're defrauding the fuck out of people, you deserve all the comeuppance you can get.

Sam Bankman-Fried was the co-founder and CEO of the cryptocurrency exchange FTX and the trading firm Alameda Research (named specifically to not sound like a crypto company). He has been found guilty on all seven counts of fraud and money laundering.

The charges were related to a scheme involving misappropriating billions of dollars of customer funds deposited with FTX and misleading investors and lenders of both FTX and Alameda Research. After the five-week trial, the jury took just four hours to reach its verdict.

The collapse of FTX and Alameda Research, which led to the indictment of Bankman-Fried about 11 months ago by the U.S. Department of Justice, was significant, with the executives allegedly stealing over $8 billion in customer funds.

Sentencing will happen next March, but if he gets smacked with the full weight of his actions, he will face a total possible sentence of 115 years in prison.

Jacquelyn did a heroic job covering the trial for TechCrunch, and it's worth taking an afternoon to read through it all; the details are mind-boggling.

The house sometimes wins: Mr. Cooper, a mortgage and loan company, experienced a cybersecurity incident that led to an ongoing system outage. The company says it has taken steps to secure data and address the issue.

Can't think of any downsides of the Hindenburg: The world's largest aircraft, Pathfinder 1, is an electric airship prototype developed by LTA Research and funded by Sergey Brin. It was unveiled this week, promising a new era in sustainable air travel.

Arrival's departure: The EV startup Arrival, which aimed to revolutionize electric vehicle production with its micro-factory model, is now facing severe operational challenges, including multiple layoffs, missed production targets and noncompliance with SEC filing requirements, resulting in a plummet from a $13 billion valuation.


Elon throws AI-generated insults at GPT-4 after OpenAI CEO mocks … – Cointelegraph

The launch of Elon Musk's new Grok artificial intelligence (AI) system may not have yet made waves throughout the machine learning community or directly threatened the status quo, but it's certainly drawn the attention of Sam Altman, the CEO of ChatGPT maker OpenAI.

In a post on the social media platform X, formerly Twitter, Altman compared Grok's comedic chops to those of a grandpa, saying that it creates jokes similar to "your dad's dad."

In classic form, Musk apparently couldn't resist the challenge. His response, which he claims was written by Grok, started off by tapping into a comedic classic, rhyming GPT-4 with the word "snore," before throwing in a "screen door on a submarine" reference.

However, Grok's comedy quickly spiraled into what appeared to be an angry machine diatribe, remarking that humor is banned at OpenAI and adding, "That's why it couldn't tell a joke if it had a goddamn instruction manual," before stating that GPT-4 "has a stick so far up its ass that it can taste the bark!"

Related: Elon Musk launches AI chatbot Grok, says it can outperform ChatGPT

As far as CEO vs. CEO squabbles go, this one may lack the classic nuance and grace of the legendary Silicon Valley battles of yesteryear (Bill Gates vs. Steve Jobs, for example). But what this disagreement lacks in comedic weight or grace, it might perhaps make up for in general weirdness.

Altman and Musk go way back. Both were co-founders at OpenAI before Musk left the company, just in time to avoid getting swept up in the rocket-like momentum that's carried it to a $2 billion valuation.

In the wake of OpenAI's success, which has largely been attributed to the efficacy of its GPT-3 and GPT-4 large language models (LLMs), Musk joined a chorus of voices calling for a six-month pause in AI development, primarily prompted by fears surrounding the supposed potential for chatbots to cause the extinction of the human species.

Six months later, nearly to the day, Musk and X unveiled a chatbot model that he claims outperforms ChatGPT.

Dubbed Grok, Musk's version of a better chatbot is an LLM supposedly fine-tuned to generate humorous texts in the vein of The Hitchhiker's Guide to the Galaxy, a celebrated science fiction novel written by Douglas Adams.

Adams' literary work is widely regarded as foundational in the pantheon of comedic science fiction and fantasy. His humor has been described by pundits and literary critics as clever, witty, and full of both heart and humanity.

And that brings us to GPT-4, OpenAI's recently launched GPTs feature, which allows users to define a personality for their ChatGPT interface, and Musk's insistence that Grok is funnier.

Its currently unclear which model is more robust or capable. There are no standard, accepted benchmarks for LLMs (or comedy, for that matter).

While OpenAI has published several research papers detailing ChatGPT's abilities, X has so far not offered such details about Grok beyond claiming that it outscores GPT-3.5 (an outdated model of the LLM powering ChatGPT) on certain metrics.
