
Meta Makes End-To-End Encryption Default On Messenger – ABP Live

Meta is making end-to-end encryption (E2EE) the default for personal messages and calls on Messenger and Facebook starting December 7. Alongside this, the social networking giant is introducing a range of new features that give users greater control over their messaging experience. End-to-end encrypted conversations come with additional nifty features, including the ability to edit messages, send media files in higher quality, and use disappearing messages.

The added security layer of end-to-end encryption ensures that the content of users' messages and calls with friends and family remains protected from the moment it leaves the sender's device until it reaches the recipient's device. This also means that no one, including Facebook parent Meta, can access the content unless the user opts to report a message to the company.

It should be noted that Facebook Messenger has offered users the option to turn on end-to-end encryption since 2016; now, Meta is making private chats and calls across Messenger end-to-end encrypted by default.

"This has taken years to deliver because weve taken our time to get this right. Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up. Weve introduced new privacy, safety and control features along the way like delivery controls that let people choose who can message them, as well as app lock, alongside existing safety features like report, block and message requests. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety go hand-in-hand," Loredana Crisan, Head of Messenger, said in a statement.

While making the announcement, the company noted that it is committed to safeguarding messages and user privacy.

Disappearing messages on Messenger now last for 24 hours after being sent, and Meta is improving the interface to make it easier to tell when they are turned on. This will help people be confident that their messages stay secure and won't stick around forever. Disappearing messages on Messenger are only available for end-to-end encrypted conversations, but users can still report disappearing messages if they receive something inappropriate, and Meta will notify them if it detects that someone has taken a screenshot of a disappearing message.

Users can now edit messages that may have been sent too soon, or that they would simply like to change, within a 15-minute window after sending them. Users can still report abuse in an edited message and Meta will be able to see the previous versions of the edited message.

Here is the original post:
Meta Makes End-To-End Encryption Default On Messenger - ABP Live

Read More..

Apple: Just a Reminder That You Can Encrypt Your iCloud Data – PCMag

People may have forgotten, but Apple would like to remind the public that end-to-end encryption is available for their iCloud data to keep it protected from today's cyber threats.

A year ago, the company began enabling end-to-end encryption for iCloud through a feature called Advanced Data Protection, which can prevent Apple itself from accessing most of the iCloud data stored in a user's account. Instead, only the person's enrolled devices, which hold the encryption key, can view the data.

This end-to-end encryption can thwart cybercriminals from obtaining a user's data through a breach, should one ever occur. The issue is that Apple first rolled out Advanced Data Protection through a beta software program before a mainstream iOS and macOS release, so not all consumers may be aware of it.

On Wednesday, Apple held a briefing with journalists to reiterate the importance of bringing end-to-end encryption to iCloud storage. A company representative noted that many Apple users, including those who own iPhones on iOS 16.2 or later, now meet the minimum system requirements to activate the feature.

Apple is also highlighting the encryption at a time when hacker-led breaches and ransomware attacks continue to scoop up massive amounts of user data each year, exposing victims to identity theft and other malicious schemes. Today, the company is publishing a study from MIT Professor Stuart Madnick that finds the number of data breaches has tripled over the past decade.

"The findings underscore that strong protections against data breaches in the cloud, like end-to-end encryption, have only grown more essential," Apple says.

(Credit: Stuart Madnick)

Advanced Data Protection won't stop hackers from breaking into third-party platforms and stealing user data; the feature only secures the user's iCloud data. Still, Apple says that more companies are adopting end-to-end encryption in their own systems, which could help protect the entire IT ecosystem.

Apple created a support document that outlines how to turn on Advanced Data Protection. One notable requirement is that the user needs to ensure that all their Apple devices, including the Apple Watch and Apple TV, are running compatible software versions to enable the feature.

Advanced Data Protection also comes with some trade-offs. The support document notes: "With Advanced Data Protection enabled, Apple doesn't have the encryption keys needed to help you recover your end-to-end encrypted data. If you ever lose access to your account, you'll need to use one of your account recovery methods (your device passcode or password, your recovery contact, or recovery key) to recover your iCloud data."

By default, Apple's iCloud already uses end-to-end encryption for 14 categories of user data. Advanced Data Protection increases that number to 23 categories, including photos, iCloud Drive, iCloud Backup, and notes and reminders. Only iCloud Mail, Contacts, and Calendar are exempt from the end-to-end encryption, since all three are designed to work with legacy systems that don't require such encryption.


Read more from the original source:
Apple: Just a Reminder That You Can Encrypt Your iCloud Data - PCMag

Read More..

DeepMind boss found a flaw in Musk’s Mars plan. He’s not the first. – Business Insider

Elon Musk hasn't been shy about his plans to colonize Mars.

For years, the billionaire has argued that humans must become a multiplanet species as quickly as possible to escape threats on Earth such as overpopulation.

Musk has said that by 2050, he plans to put 1 million people on the neighboring planet, with help from his space-exploration company, SpaceX.

Some of his contemporaries aren't so sure about the plan.

DeepMind CEO Demis Hassabis was among the latest to point out an issue in the plans, The New York Times reported.

While Hassabis agreed the plan could work in theory, Musk was left speechless by Hassabis' suggestion that superintelligent artificial intelligence could follow him to Mars and destroy humanity, the Times reported.

The billionaire hadn't considered that risk and was so concerned he later invested in DeepMind to stay close to the technology, the report said.

Musk, a vocal AI doomsayer, has since launched his own AI company and an AI-powered chatbot.

Hassabis is far from the first to poke holes in the Tesla CEO and X owner's ambitions.

The Microsoft cofounder Bill Gates previously said Musk's ambition to colonize Mars wasn't a good use of money. Gates said funding important healthcare such as vaccine development was a better use of funds.

Four scientists previously told Business Insider that the plan suffered from technical, scientific, and ethical flaws. They said forming a colony on another planetary body, such as the moon, was more realistic than settling on the red planet.

The filmmaker Werner Herzog also took aim at the plans, once saying he thought the proposal was a "mistake" and an "obscenity." Herzog said he thought humans should focus on keeping Earth habitable, rather than looking for a new home.

Hassabis and Musk did not immediately respond to requests for comment made outside normal working hours.


Read more here:
DeepMind boss found a flaw in Musk's Mars plan. He's not the first. - Business Insider

Read More..

Google DeepMind AI discovers 380,000 new materials unknown to humanity and a robot chemist is already mak… – The US Sun

HUNDREDS of thousands of new materials have been discovered by scientists, thanks to the help of AI-driven robots.

Researchers with Google's artificial intelligence research lab DeepMind have recently uncovered over 2.2 million new types of crystals, including 380,000 materials that are stable.

Two papers published in Nature detail the findings from the researchers and scientists from DeepMind and the University of California, Berkeley.

Not only did the researchers celebrate that the finding is equivalent to nearly 800 years' worth of knowledge, they indicated that their discovery could have a strong impact on future technological advances, particularly in uncovering new materials.

Specifically, they claim that the 380,000 most stable materials discovered have the potential to be used to power superconductors, supercomputers, and even batteries in electric vehicles.

This discovery was largely made possible by DeepMind's Graph Networks for Materials Exploration (GNoME), a deep learning AI tool.

In order to uncover millions of materials, GNoME was reportedly trained on information about various crystal structures and their stability, drawn from data supplied by the Materials Project.

From there, the scientists had robots driven by the GNoME AI generate novel candidate crystals and predict their stability, essentially cooking up new materials.

These robotic chefs quickly went to work, successfully synthesizing 41 of 58 attempted materials within 17 days in their "kitchen" at A-Lab, a facility at Berkeley Lab.

"We now have the capability to rapidly make these new materials we come up with computationally,Berkeley materials scientists and A-Lab leader Gerbrand Ceder told Nature.com.

Ceder insisted that this technology and AI-driven recipes for new materials will "change the world."

"Not A-Lab itself, but the knowledge and information that it generates," he said.

Within this research, the scientists also found that more than 500 of the stable material candidates are "promising" lithium-ion conductors, a critical component in batteries.

Not only did the researchers say they would make their database of the uncovered materials available to the scientific community, they also plan on providing the recipes developed by GNoME for further testing.

"This is really exciting, Alexander Ganose, a materials chemist at Imperial College London, told Science.org.

"It is enabling materials discovery across a much wider composition range. We might be able to find the materials of the future in this data set."

Google DeepMind has celebrated its findings as a major advancement in the use of AI technology in scientific research.

"Our research and that of collaborators at the Berkeley Lab, Google Research, and teams around the world shows the potential to use AI to guide materials discovery, experimentation, and synthesis," researchers Amil Merchant and Ekin Dogus Cubuk said.

"We hope that GNoME together with other AI tools can help revolutionize materials discovery today and shape the future of the field."

View post:
Google DeepMind AI discovers 380,000 new materials unknown to humanity and a robot chemist is already mak... - The US Sun

Read More..

Google Training Gemini On Its Own Chips Reveals Another Of Its Advantages – Forbes


Google on Wednesday unveiled its highly anticipated new artificial intelligence model, Gemini, an impressive piece of software that can solve math problems, understand images and audio, and mimic human reasoning. But Gemini also reveals Google's unique advantage over other AI players: Google trained it on its own chips designed in house, not the highly coveted GPUs the rest of the industry is scrambling to stockpile.

As the AI arms race has heated up, GPUs, or graphics processing units, have become a powerful currency in Silicon Valley. The scrum has turned Nvidia, a company founded 30 years ago that was primarily known for gaming, into a trillion dollar behemoth. The White House has clamped down on chip exports to China, in an attempt to keep the AI prowess of a foreign adversary at bay.

But analysts say the fact that Google DeepMind, the tech giant's AI lab, trained its marquee AI model on custom silicon highlights a major advantage large companies have over upstarts, in an age when giants like Google and Microsoft are already under intense scrutiny for their market dominance.

Google's compute hardware is so effective that it was able to produce the industry's most cutting-edge model, apparently one-upping OpenAI's ChatGPT, which was largely built using Nvidia GPUs. Google claims that Gemini outperforms OpenAI's latest model, GPT-4, in several key areas, including language understanding and the ability to generate code. Google said its TPUs allow Gemini to run significantly faster than earlier, less-capable models.

"If Google is delivering a GPT-4 beating model trained and run on custom silicon, we believe this could be a sign that AI tech stacks vertically integrated from silicon to software are indeed the future," Fred Havemeyer, head of U.S. AI research at the financial services firm Macquarie, wrote in a note to clients. Havemeyer added, however, that Google is uniquely positioned to make use of custom chips like few others can, flexing its scale, budget, and expertise.

"Google showed that it's at least possible," Havemeyer told Forbes. "We think that's really interesting because right now the market has been really constrained by access to GPUs."

Big tech companies have been developing their own silicon for years, hoping to wean themselves off dependency on the chip giants. Google has spent nearly a decade developing its own AI chips, called Tensor Processing Units, or TPUs. Aside from helping to train Gemini, the company has used them to help read the names on signs captured by its roving Street View cameras and to develop protein-folding health tech for drug discovery. Amazon has also launched its own AI accelerator chips, called Trainium and Inferentia, and Facebook parent Meta announced its own chip, MTIA, earlier this year. Microsoft is reportedly working on custom silicon as well, code-named Athena. Apple, which has long designed its own silicon, unveiled a new chip earlier this year called R1, which powers the company's Vision Pro headset.

Lisa Su, CEO of the chip giant AMD, which has a smaller share of the GPU market, has shrugged off concerns that big tech customers could someday be competitors. "It's natural," she told Forbes earlier this year. She said it makes sense for companies to want to build their own components as they look for efficiencies in their operations, but she was doubtful big tech companies could match AMD's expertise built up over decades. "I think it's unlikely that any of our customers are going to replicate that entire ecosystem."

Google's new model has the potential to shake up the AI landscape. The company is releasing three versions of Gemini with varying levels of sophistication. The most powerful version, a model that can analyze text and images called Gemini Ultra, will be released early next year. The smallest version, Gemini Nano, will be used to power features on Google's flagship Pixel 8 Pro smartphone. The mid-level version, Gemini Pro, is now being used to power Bard, the company's generative chatbot launched earlier this year. The bot initially garnered a lukewarm reception, generating an incorrect answer during a promo video and wiping out $100 billion in Google parent Alphabet's market value. Gemini could be Google's best shot at overtaking OpenAI, after a bout of instability last month in which CEO Sam Altman was ousted and reinstated in a matter of days.

Google also used the Gemini announcement to unveil the newest version of its custom chips, the TPU v5p, which Google will make available to outside developers and companies to train their own AI. "This next generation TPU will accelerate Gemini's development and help developers and enterprise customers train large-scale generative AI models faster, allowing new products and capabilities to reach customers sooner," Google CEO Sundar Pichai and DeepMind cofounder Demis Hassabis said in a blog post.
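
For outside developers, "training their own AI" on these chips typically means running an XLA-backed framework such as JAX on a Cloud TPU VM. The snippet below is a generic, hedged illustration of that workflow, not Google's Gemini training code; the array shapes and the toy forward pass are made up for the example.

```python
import jax
import jax.numpy as jnp

print(jax.devices())           # on a Cloud TPU VM this lists TPU devices

@jax.jit                       # compile with XLA so the work runs on the TPU
def forward(weights, inputs):
    return jnp.tanh(inputs @ weights)   # stand-in for a real model's forward pass

inputs = jnp.ones((128, 512))
weights = jnp.ones((512, 256))
print(forward(weights, inputs).shape)   # (128, 256)
```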

Gemini is the outcome of a massive push inside Google to speed up its shipping of AI products. Last November, the company was caught flat-footed when OpenAI released ChatGPT, a surprise hit that captured the public's imagination. The frenzy triggered a code red inside Google and prompted cofounder Sergey Brin, long absent after leaving his day-to-day role at the company in 2019, to begin coding again. In April, the company merged its two research labs, Google Brain and DeepMind, which had previously been notoriously distinct, in an attempt to give product development a push.

"These are the first models of the Gemini era and the first realization of the vision we had when we formed Google DeepMind earlier this year," Pichai said. "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

The rest is here:
Google Training Gemini On Its Own Chips Reveals Another Of Its Advantages - Forbes

Read More..

Deepmind’s AI discovers millions of new materials – Warp News

Google Deepmind's AI tool, Graph Networks for Materials Exploration (GNoME), has significantly expanded the horizon of materials science.

This innovative AI system has identified approximately 2.2 million new inorganic crystals, of which 380,000 are recognized as stable. This groundbreaking achievement is set to accelerate the pace of technological advancement dramatically.

Traditionally, the discovery of new materials, particularly inorganic crystal materials, has been a slow and meticulous process fraught with trial-and-error experimentation. The stability of these materials is crucial; a crystal that cannot maintain its structure is of little use in practical applications such as battery improvement or electronics enhancement.

GNoME addresses this challenge head-on, offering a pre-filtered list of stable materials for further research and experimentation.

Among the numerous discoveries, GNoME identified 52,000 new compounds similar to graphene, which hold immense promise for revolutionizing electronics through superconductors.

Moreover, the AI found 528 potential lithium-ion conductors, significantly more than previous studies, which could enhance the efficiency of rechargeable batteries.

Google has made these discoveries accessible to the broader scientific community by providing free access to this data. This move is expected to catalyze the synthesis and experimental exploration of these new materials, potentially leading to transformative technological developments.

Further enhancing the potential of these discoveries, Deepmind has collaborated with Berkeley Lab to develop a robotic laboratory capable of autonomously synthesizing new crystals. This autonomous lab has already synthesized 41 new materials, demonstrating the potential for even faster progress in materials science.

We've written previously about A-lab:

AI-driven lab search for new materials 24/7

An automated system operates continuously, day and night, to produce novel inorganic materials with the potential to enhance batteries, fuel cells, and superconductors.

WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her. You can find her news here.

Read the original post:
Deepmind's AI discovers millions of new materials - Warp News

Read More..

This DeepMind AI Rapidly Learns New Skills Just by Watching Humans – Singularity Hub

Teaching algorithms to mimic humans typically requires hundreds or thousands of examples. But a new AI from Google DeepMind can pick up new skills from human demonstrators on the fly.

One of humanity's greatest tricks is our ability to acquire knowledge rapidly and efficiently from each other. This kind of social learning, often referred to as cultural transmission, is what allows us to show a colleague how to use a new tool or teach our children nursery rhymes.

It's no surprise that researchers have tried to replicate the process in machines. Imitation learning, in which AI watches a human complete a task and then tries to mimic their behavior, has long been a popular approach for training robots. But even today's most advanced deep learning algorithms typically need to see many examples before they can successfully copy their trainers.

When humans learn through imitation, they can often pick up new tasks after just a handful of demonstrations. Now, Google DeepMind researchers have taken a step toward rapid social learning in AI with agents that learn to navigate a virtual world from humans in real time.

"Our agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data," the researchers write in a paper in Nature Communications. "We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission."

The researchers trained their agents in a specially designed simulator called GoalCycle3D. The simulator uses an algorithm to generate an almost endless number of different environments based on rules about how the simulation should operate and what aspects of it should vary.

In each environment, small blob-like AI agents must navigate uneven terrain and various obstacles to pass through a series of colored spheres in a specific order. The bumpiness of the terrain, the density of obstacles, and the configuration of the spheres vary between environments.

The agents are trained to navigate using reinforcement learning. They earn a reward for passing through the spheres in the correct order and use this signal to improve their performance over many trials. In addition, the environments also feature an expert agent, either hard-coded or controlled by a human, that already knows the correct route through the course.

Over many training runs, the AI agents learn not only the fundamentals of how the environments operate, but also that the quickest way to solve each problem is to imitate the expert. To ensure the agents were learning to imitate rather than just memorizing the courses, the team trained them on one set of environments and then tested them on another. Crucially, after training, the team showed that their agents could imitate an expert and continue to follow the route even without the expert.

This required a few tweaks to standard reinforcement learning approaches.

The researchers made the algorithm focus on the expert by having it predict the location of the other agent. They also gave it a memory module. During training, the expert would drop in and out of environments, forcing the agent to memorize its actions for when it was no longer present. The AI also trained on a broad set of environments, which ensured it saw a wide range of possible tasks.
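
The paper describes these ingredients at a high level rather than releasing training code, so the following is only an illustrative PyTorch sketch of how they might fit together: a recurrent policy trained with an actor-critic objective, plus an auxiliary head that predicts the expert's position so the agent learns to attend to the demonstrator. The class name, network sizes, loss weights, and the three-dimensional position target are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class SocialLearner(nn.Module):
    """Hypothetical agent: recurrent memory, policy/value heads, and an
    auxiliary head that predicts where the expert currently is."""
    def __init__(self, obs_dim, n_actions, hidden=256):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.memory = nn.LSTMCell(hidden, hidden)    # lets the agent keep following
                                                     # the route after the expert drops out
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)
        self.expert_pos_head = nn.Linear(hidden, 3)  # predict the expert's (x, y, z)

    def forward(self, obs, state=None):
        x = torch.relu(self.encoder(obs))
        h, c = self.memory(x, state)
        return self.policy_head(h), self.value_head(h), self.expert_pos_head(h), (h, c)

def social_learning_loss(logits, value, expert_pred, returns, actions, expert_pos,
                         aux_weight=0.5):
    # Standard actor-critic terms plus the auxiliary "where is the expert?" loss,
    # which encourages the agent to track the demonstrator.
    advantage = (returns - value.squeeze(-1)).detach()
    logp = torch.log_softmax(logits, dim=-1)
    policy_loss = -(logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1) * advantage).mean()
    value_loss = (returns - value.squeeze(-1)).pow(2).mean()
    aux_loss = nn.functional.mse_loss(expert_pred, expert_pos)
    return policy_loss + 0.5 * value_loss + aux_weight * aux_loss
```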

It might be difficult to translate the approach to more practical domains though. A key limitation is that when the researchers tested if the AI could learn from human demonstrations, the expert agent was controlled by one person during all training runs. That makes it hard to know whether the agents could learn from a variety of people.

More pressingly, the ability to randomly alter the training environment would be difficult to recreate in the real world. And the underlying task was simple, requiring no fine motor control and occurring in highly controlled virtual environments.

Still, progress in social learning for AI is welcome. If we're to live in a world with intelligent machines, finding efficient and intuitive ways to share our experience and expertise with them will be crucial.

Image Credit: Juliana e Mariana Amorim / Unsplash

See more here:
This DeepMind AI Rapidly Learns New Skills Just by Watching Humans - Singularity Hub

Read More..

Google DeepMind Introduces GNoME: A New Deep Learning Tool that Dramatically Increases the Speed and Efficiency of Discovery by Predicting the…

Inorganic crystals are essential to many contemporary technologies, including computer chips, batteries, and solar panels. Every new, stable crystal results from months of meticulous experimentation, and stable crystals are essential for enabling new technologies since they do not dissolve.

Researchers have engaged in costly, trial-and-error experiments that yielded only limited results, seeking new crystal structures by modifying existing crystals or trying other element combinations. Some 28,000 novel materials have been found in the past ten years thanks to computational methods spearheaded by the Materials Project and others. Until now, however, the limited ability of emerging AI-guided techniques to reliably forecast experimentally viable materials has been a major constraint.

Researchers from the Lawrence Berkeley National Laboratory and Google DeepMind have published two papers in Nature demonstrating the potential of AI predictions for autonomous material synthesis. The study reports the discovery of 2.2 million more crystals, equivalent to approximately 800 years' worth of knowledge. Their new deep learning tool, Graph Networks for Materials Exploration (GNoME), predicts the stability of novel materials, greatly improving the speed and efficiency of discovery. GNoME exemplifies the promise of AI in the large-scale discovery and development of novel materials. Separate yet contemporaneous efforts by scientists in different laboratories across the globe have already produced 736 of these novel structures.

The number of technically feasible materials has been doubled thanks to GNoME. Among its 2.2 million forecasts, 380,000 show the greatest promise for experimental synthesis because of their stability. These candidates include materials that could enable next-generation batteries to improve the efficiency of electric vehicles, and superconductors to power supercomputers.

GNoME is a state-of-the-art graph neural network (GNN) model. Because GNN input data takes the form of a graph analogous to the connections between atoms, GNNs are well suited to discovering novel crystalline materials.

Data on crystal structures and their stability, initially used to train GNoME, is publicly available through the Materials Project. The use of active learning as a training method significantly improved GNoME's efficiency. The researchers generated new crystal candidates and predicted their stability using GNoME. To evaluate its predictive power throughout progressive training cycles, they repeatedly checked the model's performance using Density Functional Theory (DFT), a well-established computational method in physics, chemistry, and materials science for understanding atomic structures that is crucial for evaluating crystal stability. The resulting high-quality data was fed back into model training.
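
As a rough illustration of that loop (not the published GNoME code), the skeleton below abstracts the model, the candidate generator, and the DFT step as parameters; their interfaces are assumed for the example.

```python
def active_learning_round(model, known_structures, propose, run_dft,
                          n_candidates=10_000, threshold=0.0):
    """One hypothetical round of GNoME-style active learning.

    model:   assumed to expose predict(structure) -> predicted energy above hull
             and fit(list of (structure, energy)) for retraining
    propose: callable generating candidate crystals from known structures
    run_dft: callable returning a DFT-computed energy above hull (the slow step)
    """
    candidates = propose(known_structures, n_candidates)
    # Keep only candidates the graph network predicts to be (near-)stable.
    promising = [c for c in candidates if model.predict(c) <= threshold]
    # Verify the promising candidates with Density Functional Theory.
    labeled = [(c, run_dft(c)) for c in promising]
    # Fold the verified, high-quality labels back into training for the next round.
    model.fit(labeled)
    return [c for c, energy in labeled if energy <= threshold]
```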

The findings show that the research increased the discovery rate of materials stability prediction from approximately 50% to 80%, using an external benchmark set by earlier state-of-the-art models as a guide. Enhancements to the model's efficiency boosted the discovery rate from below 10% to over 80%; such efficiency gains could have a major bearing on the computing power needed for each discovery.

The autonomous lab produced 41 novel materials using ingredients from the Materials Project and stability information from GNoME, paving the way for further advancements in AI-driven materials synthesis.

GNoME's predictions have been released to the scientific community. The researchers will contribute the 380,000 materials to the Materials Project, which is analyzing the compounds and adding them to its online database. With the help of these resources, they hope the community will study inorganic crystals further and realize the potential of machine learning tools as guides for experimentation.

Check out Paper 1, Paper 2, and the Reference Article. All credit for this research goes to the researchers of this project.


Dhanshree Shenwai is a computer science engineer with experience at FinTech companies covering the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.

See the article here:
Google DeepMind Introduces GNoME: A New Deep Learning Tool that Dramatically Increases the Speed and Efficiency of Discovery by Predicting the...

Read More..

Google unveils Gemini, its largest AI model, to take on OpenAI – Moneycontrol

Google parent Alphabet on December 6 unveiled Gemini, its largest and most capable AI model to date, as the tech giant looks to take on rivals OpenAI's GPT-4 and Meta's Llama 2 in a race to lead the nascent artificial intelligence (AI) space.

This is the first AI model from Alphabet after the merger of its AI research units, DeepMind and Google Brain, into a single division called Google DeepMind, led by DeepMind CEO Demis Hassabis.

Gemini has been built from the ground up and is "multimodal" in nature, meaning it can understand and work with different types of information, including text, code, audio, image and video, at the same time.

The AI model will be available in three different sizes: Ultra (for highly complex tasks), Pro (for scaling across a wide range of tasks) and Nano (on-device tasks).

"These are the first models of the Gemini era and the first realisation of the vision we had when we formed Google DeepMind earlier this year. This new era of models represents one of the biggest science and engineering efforts weve undertaken as a company," said Alphabet CEO Sundar Pichai.

Gemini Pro will be accessible to developers through the Gemini API in Google AI Studio and Google Cloud Vertex AI starting December 13.
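
For developers, calling the hosted model is expected to take only a few lines of code. The snippet below is a hedged illustration using Google's generative AI Python SDK as it shipped around the Gemini launch; the package name, model identifier, and method names may differ by release, so treat it as a sketch rather than canonical usage.

```python
# pip install google-generativeai  (assumed package name at launch)
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")        # key issued via Google AI Studio
model = genai.GenerativeModel("gemini-pro")    # the text-oriented Gemini Pro model
response = model.generate_content("Summarize what makes a multimodal model different.")
print(response.text)
```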

On the other hand, Gemini Nano will be accessible to Android developers through AICore, a new system capability introduced in Android 14. This capability will be made available on Pixel 8 Pro devices starting December 6, with plans to extend support to other Android devices in the future.


Gemini Ultra is currently being made available to select customers, developers, partners and safety and responsibility experts for early experimentation and feedback with a broader rollout to developers and enterprise customers early next year.

Also read: Google parent to make 'meaningful' investments to double down on its AI efforts, says CEO Sundar Pichai

Google will also be using Gemini across all its products. Starting December 6, Bard will use a fine-tuned version of Gemini Pro for more advanced reasoning, planning, and understanding.

Meanwhile, Gemini Nano will be powering new features on Pixel 8 Pro smartphones like 'Summarise' in the Recorder app and will soon be available in Smart Reply in Gboard, starting with WhatsApp - with more messaging apps coming next year.

Gemini is also being used to make Google's generative AI search offering, Search Generative Experience (SGE), faster for users. The company said it witnessed a 40 percent reduction in latency in English in the United States, alongside improvements in quality.

Hassabis said that Gemini will be integrated into more of the company's products and services, including Search, Ads, Chrome, and Duet AI in the coming months.

'Transition to AI far bigger than mobile or web'

Pichai said that every technology shift is an opportunity to advance scientific discovery, accelerate human progress and improve lives.

"I believe the transition we are seeing right now with AI will be the most profound in our lifetimes, far bigger than the shift to mobile or the web before it," he said.

Pichai added "AI has the potential to create opportunities - from the everyday to the extraordinary for people everywhere. It will bring new waves of innovation and economic progress and drive knowledge, learning, creativity, and productivity on a scale we havent seen before...Were only beginning to scratch the surface of whats possible."

Alphabet first previewed Gemini at its annual developer conference, Google I/O, in May 2023. This launch comes at a time when the tech giant is racing to catch up with Microsoft-backed OpenAI, which released its latest AI model, GPT-4 Turbo, during its OpenAI DevDay last month. GPT-4 Turbo is an improved version of the AI upstart's flagship GPT-4 model that was released in March 2023.

Also read: Generative AI helping us reimagine Search, other products: Alphabet's Sundar Pichai

Most flexible model yet

In a blog post, Hassabis said that Gemini Ultra's performance exceeds the current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.

It is also the first model to outperform human experts on the MMLU (massive multitask language understanding) benchmark, which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics to test both world knowledge and problem-solving abilities.

Meanwhile, Gemini Pro outperformed GPT-3.5 in six of eight benchmarks, including MMLU and GSM8K (Grade School Math 8K), which measures grade-school math reasoning, before its public launch, said Sissie Hsiao, Vice-President, Google Assistant and Bard.

"This is a significant milestone in the development of AI, and the start of a new era for us at Google as we continue to rapidly innovate and responsibly advance the capabilities of our models," Hassabis said.

Hassabis said that for a long time, they wanted to build a new generation of AI models, inspired by the way people understand and interact with the world. "AI that feels less like a smart piece of software and more like something useful and intuitive, an expert helper or assistant. Today, we're a step closer to this vision," he said.

He mentioned that Gemini is their most flexible model yet since it can run efficiently on everything from data centres to mobile devices and its capabilities will significantly enhance the way developers and enterprise customers build and scale with AI.

Hassabis said that the multimodal reasoning capabilities of the first version of Gemini can help make sense of complex written and visual information, due to which it can extract insights from hundreds of thousands of documents through reading, filtering and understanding information.

He said it also better understands nuanced information and can answer questions relating to complicated topics, making it adept at explaining reasoning in complex subjects like math and physics.

The AI model can also understand, explain, and generate high-quality code in many popular programming languages, like Python, Java, C++ and Go.

"We're working hard to further extend its capabilities for future versions, including advances in planning and memory, and increasing the context window for processing even more information to give better responses," Hassabis said.

Considering Gemini's capabilities, Alphabet is also adding new protections building upon its safety policies and AI principles to tackle potential risks.

"We've conducted novel research into potential risk areas like cyber-offence, persuasion, and autonomy, and have applied Google Research's best-in-class adversarial testing techniques to help identify critical safety issues in advance of Gemini's deployment," Hassabis said.

The company is also working with a diverse group of external experts and partners to stress-test their models across a range of issues, he said.

View original post here:
Google unveils Gemini, its largest AI model, to take on OpenAI - Moneycontrol

Read More..

Meta’s AI chief doesn’t think AI super intelligence is coming anytime soon, and is skeptical on quantum computing – CNBC

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

Meta's chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be "fairly competitive" with humans in less than five years, besting people at a multitude of mentally intensive tasks.

"I know Jensen," LeCun said at a recent event highlighting the Facebook parent company's 10-year anniversary of its Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. "There is an AI war, and he's supplying the weapons."

"[If] you think AGI is in, the more GPUs you have to buy," LeCun said, about technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia's computer chips.

Society is more likely to get "cat-level" or "dog-level" AI years before human-level AI, LeCun said. And the technology industry's current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.

"Text is a very poor source of information," LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. "Train a system on the equivalent of 20,000 years of reading material, and they still don't understand that if A is the same as B, then B is the same as A."

"There's a lot of really basic things about the world that they just don't get through this kind of training," LeCun said.

Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these various kinds of data, the more they could potentially perform more fantastical feats, the thinking goes.

Some of Meta's research includes software that can help teach people how to play tennis better while wearing the company's Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses while playing tennis was able to see visual cues teaching them how to properly hold their tennis rackets and swing their arms in perfect form. The kinds of AI models needed to power this type of digital tennis assistant require a blend of three-dimensional visual data in addition to text and audio, in case the digital assistant needs to speak.

These so-called multimodal AI systems represent the next frontier, but their development won't come cheap. And as more companies such as Meta and Google parent Alphabet research more advanced AI models, Nvidia could stand to gain even more of an edge, particularly if no other competition emerges.

Nvidia has been the biggest beneficiary of generative AI, with its pricey graphics processing units becoming the standard tool used to train massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.

CNBC asked if the tech industry will need more hardware providers as Meta and other researchers continue their work developing these kinds of sophisticated AI models.

"It doesn't require it, but it would be nice," LeCun said, adding that the GPU technology is still the gold standard when it comes to AI.

Still, the computer chips of the future may not be called GPUs, he said.

"What you're going to see hopefully emerging are new chips that are not graphical processing units, they are just neural, deep learning accelerators," LeCun said.

LeCun is also somewhat skeptical about quantum computing, which tech giants such as Microsoft, IBM, and Google have all poured resources into. Many researchers outside Meta believe quantum computing machines could supercharge advancements in data-intensive fields such as drug discovery, as they're able to perform multiple calculations with so-called quantum bits as opposed to conventional binary bits used in modern computing.

But LeCun has his doubts.

"The number of problems you can solve with quantum computing, you can solve way more efficiently with classical computers," LeCun said.

"Quantum computing is a fascinating scientific topic," LeCun said. It's less clear about the "practical relevance and the possibility of actually fabricating quantum computers that are actually useful."

Meta senior fellow and former tech chief Mike Schroepfer concurred, saying that he evaluates quantum technology every few years and believes that useful quantum machines "may come at some point, but it's got such a long time horizon that it's irrelevant to what we're doing."

"The reason we started an AI lab a decade ago was that it was very obvious that this technology is going to be commercializable within the next years' time frame," Schroepfer said.


Original post:

Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing - CNBC

Read More..