
Connor Coley – MIT Technology Review

Connor Coley, 29, developed open-source software that uses artificial intelligence to help discover and synthesize new molecules. The suite of tools, called ASKCOS, is used in production by more than a dozen pharmaceutical companies and tens of thousands of chemists to create new medicines, new materials, and more efficient industrial processes.

One of the largest bottlenecks in developing new molecules has long been identifying interesting candidates to test. This process has played out in more or less the same way for decades: make a small change to a known molecule, and then test the novel creation for its biological, chemical, or physical properties.

Coley's approach includes a form of generative AI for chemistry. A chemist flags which properties are of interest, and AI-driven algorithms suggest new molecules with the greatest potential to have those properties. The system does this by analyzing known molecules and their current properties, and then predicting how small structural changes are likely to result in new behaviors.
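The candidate-generation loop described above can be sketched in a few lines: propose small edits to a known molecule, score each variant with a property predictor, and keep the best. Everything below is a toy stand-in (the string edits and the `predict_property` heuristic are hypothetical), whereas a production system such as ASKCOS operates on molecular graphs with trained models.

```python
# Toy sketch of a generate-and-rank loop for molecule discovery.
# predict_property is a hypothetical stand-in for a trained ML model.

def enumerate_variants(smiles: str) -> list[str]:
    """Propose small structural edits: naive single-character
    substitutions over a tiny atom alphabet (a toy stand-in for
    real graph edits on a molecule)."""
    alphabet = "CNOS"
    variants = set()
    for i, ch in enumerate(smiles):
        if ch not in alphabet:
            continue
        for repl in alphabet:
            if repl != ch:
                variants.add(smiles[:i] + repl + smiles[i + 1:])
    return sorted(variants)

def predict_property(smiles: str) -> float:
    """Hypothetical property predictor (e.g. a solubility-like score).
    Toy heuristic: reward nitrogen/oxygen content."""
    return (smiles.count("N") + smiles.count("O")) / max(len(smiles), 1)

def rank_candidates(seed: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Score every proposed variant and return the top_k candidates."""
    scored = [(v, predict_property(v)) for v in enumerate_variants(seed)]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

if __name__ == "__main__":
    for smiles, score in rank_candidates("CCO"):  # ethanol-like seed string
        print(f"{smiles}  predicted score = {score:.2f}")
```

The point of the sketch is the shape of the workflow, not the chemistry: the expensive wet-lab test is deferred until after a cheap learned predictor has filtered the candidate pool.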

As a result, chemists should spend less time testing candidates that never pan out. "The types of methods that we work on have led to factors of maybe two, three, maybe 10 [times] reduction in the number of different shots on goal you need to find something that works well," says Coley, who is now an assistant professor of chemical engineering and computer science at MIT.

Once it identifies the candidate molecules, Coley's software comes up with the best way to produce them. Even if chemists imagine or dream up a molecule, he says, figuring out how to synthesize something isn't trivial: "We still have to make it."

To that end, the system gives chemists a recipe of steps to follow that are likely to result in the highest yields. Coley's future work includes figuring out how to add laboratory robots to the mix, so that even more automated systems will be able to test and refine the proposed recipes by actually following them.
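The idea of ranking recipes by expected yield can be illustrated with a toy calculation: for a linear route, the overall yield is the product of the per-step yields, so the recommended recipe is the route that maximizes that product. The routes and yield numbers below are hypothetical.

```python
from math import prod

# Toy illustration: pick the synthesis route with the highest overall
# predicted yield. Routes and per-step yields are hypothetical; a real
# planner predicts them from reaction data.
routes = {
    "route_a": [0.90, 0.85, 0.80],        # three high-yield steps
    "route_b": [0.95, 0.60],              # shorter, but one poor step
    "route_c": [0.70, 0.70, 0.70, 0.70],  # long route, mediocre steps
}

def overall_yield(step_yields: list[float]) -> float:
    """Overall yield of a linear route is the product of step yields."""
    return prod(step_yields)

best = max(routes, key=lambda name: overall_yield(routes[name]))
for name, steps in routes.items():
    print(f"{name}: {overall_yield(steps):.3f}")
print("recommended:", best)
```

Note that the shortest route is not automatically the best one: a single low-yield step can drag a two-step route below a well-chosen three-step alternative.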


Here's exactly what Google will argue to fight the DOJ's antitrust claims – Ars Technica

Today, DC-based US District Court Judge Amit Mehta will hear opening arguments in a 10-week monopoly trial that could disrupt Google's search business and redefine how the US enforces antitrust law in the tech industry.

The trial comes three years after the Department of Justice began investigating whether Google, currently valued at $1.7 trillion, abused its dominance in online search to make it nearly impossible for rival search engines to compete. Today, Google controls more than 90 percent of the search engine market in the US and globally, and this has harmed competitors and consumers, the DOJ argued, by depriving the world of better ways to search the web.

"Google's anticompetitive conduct harms consumers, even those who prefer its search engine, because Google has not innovated as it would have with competitive pressure," the DOJ wrote in a pre-trial brief filed on Friday.

This trial will be "the federal government's first monopoly trial of the modern Internet era," The New York Times reported. For officials, the trial marks a shift away from opposing anticompetitive tech mergers and acquisitions, which attempt to stop tech giants from getting even bigger in desirable markets. Starting with this trial, officials will now begin scrutinizing more closely than ever before how tech giants got so big in the first place.

No one's sure yet if today's antitrust laws can even answer some of the latest emerging questions about tech competition. Last year, Congress recommended changes to strengthen antitrust laws, including by directly prohibiting abuse of dominance. But rather than wait for lawmakers to update laws, the DOJ is treading aggressively into new territory and might even become emboldened to break up some of the biggest tech companies, especially if the DOJ proves that tech giants have carefully built their businesses to shut out competition, as it's accused Google of doing.

"Google has entered into a series of exclusionary agreements that collectively lock up the primary avenues through which users access search engines, and thus the Internet, by requiring that Google be set as the preset default general search engine on billions of mobile devices and computers worldwide and, in many cases, prohibiting preinstallation of a competitor," a 2020 DOJ press release announcing the lawsuit said.

In Google's case, the DOJ has alleged that Google pays billions to browser developers, wireless carriers, and mobile phone makers to drive users to Google by agreeing to exclusively feature Google as the default search engine. Competing search engines will likely never be able to afford to outbid Google and form their own agreements, the DOJ argued. And denying them prominent placements across nearly every distribution channel prevents Google's rivals from reaching a broader scale of users and collecting a wider range of data that's needed to improve their search products. Google's market share is already too great for any rival to catch up, the theory goes, and Google has allegedly invested lots of money to keep it that way.

A nonprofit advocating for strong enforcement of antitrust laws, the American Economic Liberties Project, cited a recent estimate that "suggests Google paid over $48 billion in 2022 for these agreements." The nonprofit claimed that Google's outsize investment in these agreements was a red flag signaling a power grab and "as a result, search competitors who can't afford to cut billion-dollar checks to make their search engines accessible face an insurmountable barrier to entry." The DOJ agrees.

For Google, paying for those agreements has always been worth it, but the DOJ's attack on Google's key business strategy could end up costing Google big. If Google loses the trial, the search giant risks paying damages, potentially being forced to change its business practices, and possibly even being ordered to restructure its business. Among other remedies, the American Economic Liberties Project and a coalition of 20 civil society and advocacy groups recommended that the DOJ order the "separation of various Google products from parent company Alphabet, including breakouts of Google Chrome, Android, Waze, or Google's artificial intelligence lab DeepMind."

Although the trial starts today, the DOJ and Google started sparring earlier this year, when the DOJ accused Google of routinely deleting evidence and Google denied that anything important was deleted. Ahead of the trial, the DOJ and Google have already deposed 150 people, but the trial will call upon even more witnesses, potentially exposing parts of Google's core businesses and, because Apple signed an agreement to make Google a default search engine on iPhones, parts of Apple's as well. The testimony about trade secrets could be so sensitive that Apple executives requested the court block their testimony, but they were recently denied that request, Reuters reported.

The trial will be decided by Judge Mehta, and not a jury. To defend its search business, Google hired John E. Schmidtlein, a partner at the law firm Williams & Connolly, who will face off against the DOJ's head of antitrust, Jonathan Kanter.

Google employees, including CEO Sundar Pichai, and executives from other big tech companies like Apple will likely appear to testify. In a pre-trial brief, Google said that it would also be bringing in Google search users and advertisers as witnesses.

Former FTC antitrust attorney Sean Sullivan told Ars that the key questions before the court will be determining if Google possesses monopoly power in a relevant market and if Google got that power through anticompetitive conduct. That will likely require Google to explain in greater detail than ever before how it runs its search business, triggering even more interest in the trial, parts of which will be concealed from the public to protect Google trade secrets. While the DOJ appears confident that it can prove that Google has monopoly power, Sullivan said that antitrust cases are "rarely" straightforward.

"It is not enough that the defendant's conduct disadvantages its rivals," Sullivan told Ars. "Instead, the plaintiff ordinarily must prove that the defendant did things to disadvantage its rivals that lacked any business justification except to reduce the rivals' competitive significance, or that needlessly disadvantaged rivals relative to other less restrictive ways of competing. That often funnels dispositive weight to whether the defendant can explain why it has done the things it is accused of doing."

The DOJ has said that it has the evidence to prove that Google didn't win a competitive advantage by building a superior product but by building a monopoly over search.

"A theme that will likely permeate Google's presentation at trial is its view that the company offers a quality search product that many users prefer," the DOJ wrote in its pre-trial brief. "But Google's conduct undermines this argument, as the monopolist feels the perpetual need to pay billions annually to ensure that consumers are routed to its search engine" by ensuring that Google "is the default search engine for iPhones, Android phones, and most third-party browsers, such as Mozilla's Firefox."

This harms competitors, the DOJ said, because "Google's use of contracts to maintain default status denies rival search engines access to critical distribution channels and, by extension, the data necessary to improve their products." In turn, harming competition harms consumers because, while Google "attracts more users, who generate more data and who help attract more advertising revenue," Google's rivals "face an insurmountable, and still growing, difference in scale" and are deprived of "the opportunity to provide more accurate results" that would improve the search experience for consumers.

"Only Google has the full opportunity to improve," the DOJ argued.

At trial, the DOJ said it "will demonstrate that Google has maintained its durable monopolies in general search services, and the related advertising markets that fund it, by cutting off the air supply to Google's rivals," other general search engines.

To convince the court of Google's monopoly power, the DOJ said it would share evidence showing that Google competes in advertising markets with other general search engines that are also considered a "one-stop-shop" for searchers. And because Google has a dominant share of those markets and poses alleged barriers to entry, the DOJ argued that Google "easily meets" the threshold for monopoly power.

The DOJ claimed that it would show direct evidence that Google raised advertising prices "above a competitive level." Officials also urged the court not to buy into "any argument by Google that competitive pressures force it to innovate, thus cutting against a finding of monopoly power." That "runs counter to the evidence," the DOJ said.

To back up these claims, the DOJ will call upon its expert economist, Michael Whinston, to testify that this case "has all the hallmarks of an exercise of monopoly power in the relevant advertising markets." The DOJ will also tap accounting expert Christine Hammer to "explain that Google turns an exceptionally high profit margin" on its search businesses.

Ultimately, the DOJ told the court that "in a monopoly maintenance case such as this one, the operative question is not whether the defendant has acquired its monopoly through anticompetitive means, but whether, once acquired, the defendant used anticompetitive means to maintain its monopoly."

Google laid out its defense against the DOJ's antitrust claims in a pre-trial brief, also filed Friday. Ahead of the trial, the company disputed how the DOJ defined its top competitors, argued that its dominance in the search market does not create barriers to entry for competitors, and claimed that the procompetitive benefits of its conduct outweigh the harms of the alleged misconduct. The company also challenged how Colorado, which became another plaintiff in the case after filing a separate complaint, defined Google's "duty to deal" with search advertising competitors like Microsoft.

Google's defense of its business starts by suggesting that the DOJ will be unable to prove that Google possesses "monopoly power in the relevant market," because the DOJ has failed to identify a relevant market with a "set of products that serve as important competitive constraints on Google."

In making this argument, Google seeks to convince the judge that rival general search engines like Bing, Yahoo!, and DuckDuckGo are not Google's key competitors, because they are not "the products and firms to which Google would lose search queries (in the short and long run) if the quality of its search offering declined."

Rather, Google claimed that "specialized vertical providers" (SVPs) like Amazon, Yelp, and Expedia, as well as "other popular places users go to search for information such as TikTok and Instagram," are Google's top search market rivals. Google wrote:

By defining the relevant market to include only general search engines, plaintiffs distort the commercial reality that users routinely substitute other search providers for general search engines, such as Amazon when they shop or Expedia when they travel, and thereby improperly exclude many of Google's strongest competitors from the relevant market.

At the trial, Google's economic expert, Mark Israel, will share empirical evidence showing that's why SVPs "are closer competitors to Google in these verticals than other general search engines like Bing," Google wrote.

In addition to being counted among Google's top rivals for users, SVPs and social media platforms are also more relevant competitors to Google for advertisers than other general search engines, Google argued.

At trial, Israel will share results from his "detailed examination" of how advertisers decide where to place search ads, showing that "Google's search ads compete with a wide range of other digital advertisements," all of which Google said have been "improperly" excluded from the lawsuit's defined relevant markets allegedly harmed by Google's monopoly power.

Israel's examination will go up against testimony from Google advertisers themselves. In Colorado's pre-trial brief, the state confirmed that advertiser witnesses "who have products but decline to sell them on SVP platforms" would testify that "they cannot substitute SVP ads for general search ads."

Google has argued that if the DOJ and Colorado had included "strong competitors like Amazon, Expedia, Meta, and Yelp" in definitions of relevant markets, the court would see that Google does not enjoy monopoly power and instead "faces substantial competitive constraints for both users and advertisers." As proof, Google claimed that "on the user side, Google's share of user traffic has decreased while the search traffic captured by SVPs has increased over time." And it's most significantly "losing share to platforms such as Amazon and Meta," Google noted.

To prove that "the evidence on output, price, and quality indicates that Google competes in a fiercely competitive marketplace," Google said it will trot out Google search users as witnesses, who "will detail the intense continuing investments Google makes to compete, including the launch of thousands of product improvements every year, from generative artificial intelligence capabilities to enhancing the breadth and depth of local results to new flight search interfaces."

And Google also plans to introduce testimony from advertisers, who "will describe the investments Google makes to continue to improve its search advertising products" and explain how that generates more revenue for both advertisers and Google.

Beyond disagreeing with the DOJ on which businesses Google actually competes with, Google also seeks to dismantle the DOJ's argument that rival search engines face barriers to entry that are greater than ever, which prevents them from achieving the scale they need to compete with Google.

Computer science experts Edward Fox and Ophir Frieder will help Google explain from a technical perspective that, while the Google practice of collecting heaps of "user data can improve search quality," there "are diminishing returns to scale." They will also explain that competitors like Microsoft have "sufficient scale to compete" and detail the "many aspects of search that can be improved without additional scale." These experts will be largely charged with convincing the court that "Google owes its quality advantage over rivals to its 'superior skill, foresight, and industry' and not to anticompetitive conduct."

Disputing Google's computer science experts on the question of scale, Colorado said it will present testimony from employees of general search engine competitors who have struggled to overcome Google's monopoly, advertisers who have struggled with shifting investments from Google to Bing, and Google employees who will explain how Google benefits from massive amounts of user data.

Google also claimed that "there is no shortage" of procompetitive benefits from alleged anticompetitive behaviors. Among them, Google said that its economic expert, Kevin Murphy, and other witnesses would testify that Google becoming the default search engine both increased search usage and thus expanded its search outputbenefiting users. This also provided critical funds to browsers, which could then invest in improvements and innovations to improve browser functionality, Google argued.

Perhaps more significantly, Google's mobile application distribution agreements (MADAs) have "greatly benefited consumers and search competition by fostering the success of the Android platform, an innovative mobile platform that today provides the most significant competition to market-leading Apple in the United States," Google argued.

Google has claimed that "reengineering the MADAs, as Plaintiffs demand, would undermine these important benefits without boosting search competition."

Since Google has identified alleged procompetitive benefits, it will be up to plaintiffs to rebut them in the trial. The DOJ already claimed in its pre-trial brief that "the trial record will confirm that any purported benefits are outweighed by the anticompetitive effects in this case." The DOJ also suggested that consumers could enjoy the same benefits if Google used less anticompetitive means, such as paying for search traffic instead of paying for exclusionary default search agreements.

Finally, Google argued that Colorado's claims that Google had a "duty to deal" with Microsoft and implement Bing Ads in its search engine marketing tool are "dead on arrival." And this could be crucial to proving what Sullivan said is essential to win the case: that Google "needlessly disadvantaged rivals relative to other less restrictive ways of competing."

However, according to Google, Colorado has failed to identify competitive harms caused to Microsoft by Google's delay in implementing Bing ads in its adtech tool. Google also argued that "even when a duty to deal exists, as long as the monopolist has 'valid business reasons' for the refusal to deal, the refusal does not violate" antitrust law.

"Google looks forward to presenting its case at trial," Google's pre-trial brief concluded.

It may not be easy for the public to follow all the nitty-gritty details of the Google antitrust trial online.

The juiciest parts of the trial, where Google and Apple will discuss trade secrets driving core businesses, will be sealed, and it appears that unsealed portions of the trial will likely only be accessible to those who can attend live.

In the days ahead of the trial, the American Economic Liberties Project joined other organizations in requesting that the court provide live audio feeds of the proceedings so that the public could follow the unsealed portions of the trial. But Mehta denied the request last weekend, citing judicial policy that does not allow either civil or criminal courtroom proceedings in the district courts to be broadcast, televised, recorded, or photographed for the purpose of public dissemination.

Among his reasons for denying the request, Mehta wrote that "the court has serious concerns about the unauthorized recording of portions of the trial, particularly witness testimony." Mehta said that the district court has recognized that live witness testimony is "particularly sensitive" and exempted those proceedings from audio feeds "even when those proceedings involved 'a matter of public interest[.]'"

Katherine Van Dyck, senior counsel at the American Economic Liberties Project, criticized Mehta's decision in a press release, saying that it would shroud the trial in secrecy and "prioritizes Google's privacy over the public's First Amendment right to listen, in real time, to witnesses that will lay out how Google monopolized search engines."

"The company whose infamous mission is to organize the world's information and make it universally accessible was successful today in its effort to block the public from accessing the most important antitrust trial of the century," Van Dyck's statement said.

Van Dyck told Ars it seemed unlikely that Mehta would change his mind, beyond possibly piping an audio feed into a room where press will be gathered, but some advocates have not given up hope that the trial will be accessible online. On Sunday, the trade organization Digital Content Next filed a similar request asking the court to provide a live audio feed, arguing that "there is substantial public interest" in the case and "there does not appear to be any good reason to close the trial completely for this testimony other than to shield Google and Apple from potential embarrassment."

Van Dyck told Ars that Mehta's decision would make it harder for the public to follow the trial. She said that the American Economic Liberties Project would send experts to attend public hearings and provide timely updates on a website dedicated to the trial. That website will also compile research and reports from other advocacy groups that joined the American Economic Liberties Project's coalition, including advocates opposing monopolies like Open Markets and tech-focused groups like Fight for the Future and Demand Progress.

Sullivan told Ars that it's too soon to say how this antitrust trial will affect the average Internet user.

"If the government wins, the court could order Google to change its behavior or divest parts of its business," Sullivan told Ars. "If so, and if that order survives any relevant appeals, then users could see changes in the search products they are offered." However, if Google wins, "then maybe nothing changes at all."

Publicity from the trial could cause some Internet users to shift their behaviors, though, Sullivan suggested.

"One indirect but significant way that the case might impact average people is by causing them to stop and think about how they make their search decisions," Sullivan told Ars. "One important tenet of the government's case is that default search engine assignments are sticky. That might be true as a historical and empirical assertion, but nothing compels it to be so. Maybe this litigation inspires people to change the default search engines on their phones and personal computers."


Meet the Genius behind Med-PaLM 2 – Analytics India Magazine

In December last year, when OpenAI's ChatGPT was struggling to find real use cases, Google decided to explore the use of large language models (LLMs) for healthcare, resulting in the creation of Med-PaLM, an open-sourced large language model designed for medical purposes.

Since then, the team has released scaled-up versions of healthcare LLMs, including Med-PaLM-2 and Med-PaLM-M, both of which have had a direct impact on human lives. Currently, Med-PaLM-2 is also undergoing testing at renowned healthcare institutions such as the Mayo Clinic. One of the prominent contributors to these projects is Vivek Natarajan, an AI researcher at Google Health.

Currently based in the San Francisco Bay Area, the Tamilian with deep Bengali roots began his journey as an engineering intern at Qualcomm, progressed to a role at Meta AI, and ultimately found a fulfilling place at Google Health.

However, there is a story behind why he chose to transition into the field of medical AI.

It is 2023, and India's healthcare system still faces significant hurdles: insufficient medical infrastructure and a severe shortage of medical professionals, especially in rural regions. The ratio of doctors to patients falls well below global standards, at a mere 0.7 doctors per 1,000 people. On top of that, India has only 0.9 hospital beds per 1,000 people, and only 30% of those are in rural areas.

Many patients had to walk tens of kilometres, often in extreme conditions, leading to delayed diagnoses, poorly managed chronic conditions, and even untimely deaths. This disparity affected both underprivileged and affluent individuals, underscoring the stark healthcare inequalities in these areas.

Having grown up in different parts of India, Natarajan witnessed these immense challenges faced by people in small towns and villages when it came to accessing medical care. "It always bothered me that people should have to suffer so much to receive basic healthcare, and I always wanted to do something about it," Natarajan told AIM in an exclusive interaction.

From starting out by building "Ask the Doctor, Anytime Anywhere," an app aimed at democratizing healthcare access, in 2013, to being the research lead behind Google's state-of-the-art LLM for medicine, Med-PaLM 2, Natarajan has come a long way. "I guess the name gives away what we were trying to do. Ask the Doctor was bootstrapped using older AI techniques and a lot of rules, and it clearly did not work well," he said, which led to its discontinuation.

The app was built using pre-deep-learning ML techniques: a combination of expert systems and rules. However, even back in 2013, he had an intuition that AI would be the most important piece of solving this healthcare problem.

After completing a bachelor's degree in Electronics Engineering at NIT Trichy and graduating with a master's degree in Computer Science from UT Austin in 2015, Natarajan joined Meta AI. Despite being in the pre-transformer era, Natarajan's time at Meta AI, which was his first job, taught him the potential of deep learning. At Meta, he worked in areas ranging from speech recognition to conversational and multimodal AI, and on business-critical platforms such as Newsfeed and Messenger.

However, things took a different turn. Unfortunately, it was during this period that his father began showing signs of an aggressive form of Parkinson's disease, which couldn't have been identified sooner due to the limited care options and resources. "That persuaded me to go back to the problem that I always deeply cared about: using AI to democratise access to healthcare and put world-class medical expertise in the pocket of billions," said Natarajan.

Coincidentally, this was also the time when researchers from Google Brain and DeepMind (now referred to as Google DeepMind), after some seminal medical AI papers, were coming together to form Google Health AI, aligning with his aim. "So when Greg Corrado, co-founder of Google Brain and head of Google Health AI, offered me the chance to join, I took it up without hesitation," he added.

Since then, he has collaborated with esteemed AI researchers like Greg and Dr Alan Karthikesalingam to work toward the vision of making an AI doctor accessible to billions.

If not an AI researcher, Natarajan would probably have been a cricket commentator like Harsha Bhogle. Well, let's take a moment to appreciate that he didn't embark on that career; otherwise, we might have missed out on his stellar work in building Med-PaLM, Med-PaLM 2, Med-PaLM M, and related projects.

The core concept driving the development of Med-PaLM is the utilisation of general-purpose language models like PaLM and GPT-4, which excel at predicting text but lack specialised medical knowledge. The challenge lies in transforming these models into medical experts. "So, we need to do the same with AI and send them to medical school if we want to use them for medical applications. Make them learn from high-quality medical domain information spanning human biology to the practice of medicine, as well as from clinical expert demonstrations and feedback, similar to residency after medical school," he added.

However, the primary obstacle was the scarcity of large-scale medical datasets, due to privacy concerns and healthcare in the global south not being digital. Additionally, there's a pressing concern about bias in LLMs used in healthcare. These cultural, social, racial, and gender biases can result in unequal access to care, misdiagnoses, and treatment disparities. The root of this problem lies in the reliance of healthcare LLMs on extensive datasets that mirror historical healthcare inequities, potentially leading to inaccurate diagnoses and treatment recommendations for marginalised communities.

The Med-PaLM models, derived from the PaLM general-purpose language models, are tailored for medical applications through fine-tuning with high-quality medical datasets and clinical expert demonstrations, covering areas like professional medical exams, PubMed research, and user-generated medical questions. These datasets, including the openly available HealthSearchQA dataset from Google, are instrumental in the development of Med-PaLM and its successors.

In the Med-PaLM paper, researchers introduced an evaluation rubric for assessing LLMs in medical applications, with bias being one of the key dimensions. "Additionally, in Med-PaLM 2, we introduced adversarial questions evaluation, specifically targeting sensitive topics like vaccine misinformation, COVID-19, obesity, mental health, and suicide. These topics have a high potential to exacerbate bias and healthcare disparities through the spread of medical misinformation," said Natarajan.

"Our approach to mitigating bias involves rigorous evaluation and expert clinician demonstrations to train the model. While it's a complex challenge, we are steadily making progress in this area," he added.

He added that the fine-tuning approach depends on the available data. For the first Med-PaLM, prompt tuning was employed: the majority of the LLM's parameters remained fixed, and only a small set of additional parameters was learned. For subsequent versions such as Med-PaLM 2 and Med-PaLM M, the team had access to more data, enabling them to fine-tune the models end-to-end to enhance performance and align them more closely with medical expertise.
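For intuition, prompt tuning can be sketched as freezing all of the pretrained model's weights and training only a handful of prepended "soft prompt" vectors. The toy sketch below is illustrative only, not Med-PaLM's actual training code; the dimensions, the stand-in "model", and all names are made-up assumptions chosen to show how small the trainable parameter count becomes:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16  # toy embedding dimension
# Stand-in for the pretrained LLM's weights: these stay FROZEN.
frozen_weights = rng.normal(size=(d_model, d_model))

def frozen_model(embeddings):
    # Stand-in for the pretrained model: fixed linear map + mean pooling.
    return (embeddings @ frozen_weights).mean(axis=0)

# Prompt tuning: the ONLY trainable parameters are a few prompt vectors.
n_prompt_tokens = 4
soft_prompt = rng.normal(size=(n_prompt_tokens, d_model))

def forward(input_embeddings, prompt):
    # Prepend the learned soft-prompt vectors to the (fixed) input embeddings.
    return frozen_model(np.vstack([prompt, input_embeddings]))

x = rng.normal(size=(3, d_model))   # a made-up "input sequence"
y = forward(x, soft_prompt)

trainable = soft_prompt.size
total = frozen_weights.size + soft_prompt.size
print(f"trainable parameters: {trainable} of {total} "
      f"({100 * trainable / total:.1f}%)")
```

In a real setting only `soft_prompt` would receive gradient updates, which is what makes the approach attractive when medical data is scarce.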

As we continue to ride the generative AI wave, Natarajan believes that understanding LLMs is crucial: they differ from human intelligence and require specialised methods, such as mechanistic interpretability or "artificial neuroscience", posing a host of new challenges to be solved. He also sees immense potential for exciting research beyond large language models, and is particularly excited about LLMs' potential in biology and neurology, such as analysing the human genome and decoding brain signals.

Although he has no plans to directly revisit building an app like Ask the Doctor, he believes that his work on Med-PaLM and medical AI as a whole at Google will eventually lead to something very similar. "While there is still a long way to go, given the incredible progress made in LLMs just last year, it appears that my dream of making an AI doctor accessible to billions is no longer science fiction. Fingers crossed!" Natarajan concluded.

Read more: Pushmeet Kohli On Solving Intelligence at DeepMind for Humanity & Science

The rest is here:
Meet the Genius behind Med-PaLM 2 - Analytics India Magazine

Read More..

The global race to set the rules for AI – Financial Times

Original post:
The global race to set the rules for AI - Financial Times

Read More..

The 3 Most Undervalued Quantum Computing Stocks to Buy in September 2023 – InvestorPlace

If you thought the artificial intelligence boom was explosive, keep an eye on quantum computing.

According to Haim Israel, Head of Global Thematic Investing Research at Bank of America, we could soon see "a revolution for humanity bigger than fire, bigger than the wheel," as quoted by Barron's. This is creating a massive opportunity for quantum computing stocks.

Calculations that would take a classical supercomputer years can be solved by a quantum computer in minutes. In 2019, Google's quantum computer performed a calculation in 200 seconds that would have taken the world's most powerful supercomputer about 10,000 years to complete.

Another example is drug development.

"It takes an average of 15 years and tens of billions of dollars because only one out of 10,000 molecules becomes a drug. Quantum computing can do those calculations probably in a matter of minutes. I can't even think about an industry that won't be revolutionized," Haim Israel told Barron's.

Or, how about this? A team of scientists in Australia recently used a quantum computer to observe a molecular interaction slowed down 100 billion times, stretching chemical dynamics from femtoseconds (a quadrillionth of a second) into milliseconds. With that kind of capability, we could be looking at massive disruption in nearly every industry in the world.

These facts and stats could mean incredible profits for the following quantum computing stocks.

Source: Amin Van / Shutterstock.com

The last time I mentioned IonQ (NYSE:IONQ), the pure-play stock traded at just $4.56 on March 13. Today, it's up to $19.68 and could see higher highs.

All thanks to a booming quantum computing market and solid earnings growth.

While the company posted a loss of 22 cents a share, missing estimates by 14 cents, revenue more than doubled to $5.52 million, beating estimates by $1.16 million. Also, Q2 bookings were a record $28 million, bringing first-half 2023 bookings to more than $32 million.

In addition, the company increased its 2023 bookings guidance to a new range of $45 million to $55 million. It also raised full-year revenue guidance to $18.9 million to $19.3 million from a prior range of $18.8 million to $19.2 million. Analysts like the stock, with Morgan Stanley raising its price target to $16 from $7. Even Benchmark raised its target price to $20 from $17.

Source: Shutterstock

Also, Rigetti Computing (NASDAQ:RGTI) has been equally explosive. Since May, the developer of quantum integrated circuits for quantum computers has popped from about 36 cents to a high of $3.43. While it has since pulled back to $2.03, the recent weakness could be seen as an opportunity.

From current prices, I believe RGTI could double, if not triple, to higher highs. Helping, Benchmark analysts just upgraded RGTI to a buy rating with a price target of $4, all thanks to earnings. In its second quarter, the company posted a loss of 13 cents per share, which beat estimates by four cents. Revenue, up 56% year over year (YOY) to $3.33 million, beat by $0.58 million.

Source: Shutterstock

Or, if you want to diversify among top quantum computing names at low cost, try an ETF, such as Defiance Quantum ETF (NYSEARCA:QTUM).

With an expense ratio of 0.40%, the fund provides exposure to cloud computing, quantum computing, artificial intelligence, and machine learning stocks. Better, the ETF has been on fire this year. Since January, the ETF ran from about $39 a share to a recent high of $50.15.

From there, I'd like to see QTUM again challenge prior resistance around $53.55. Top positions among its 71 holdings include IonQ, Rigetti Computing, Splunk (NASDAQ:SPLK), Intel (NASDAQ:INTC), Nvidia (NASDAQ:NVDA), and Applied Materials (NASDAQ:AMAT), to name a few.

On the date of publication, Ian Cooper did not hold (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.

Excerpt from:
The 3 Most Undervalued Quantum Computing Stocks to Buy in September 2023 - InvestorPlace

Read More..

Infleqtion Unveils Open Beta Release of Superstaq: Accelerating … – PR Newswire

CHICAGO, Sept. 12, 2023 /PRNewswire/ -- Infleqtion, the world's quantum information company, today announced the release of its flagship quantum software platform Superstaq into open beta. Superstaq's device-physics-aware compilation techniques have led to remarkable performance enhancements, such as a 10x boost in standard benchmark applications like Bernstein-Vazirani. A range of deep optimization techniques contributes to this progress: parametric (fractional) gates, dynamical decoupling, swap mirroring, bring-your-own gateset, phased microwave decompositions, approximate synthesis, and qutrits.
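As a point of reference for the Bernstein-Vazirani benchmark mentioned above: the algorithm recovers a hidden bit string s from a single query to an oracle computing s·x mod 2. The small classical statevector simulation below is illustrative only and unrelated to Superstaq's implementation; it simply shows why measurement after the final Hadamard layer reveals s directly:

```python
import numpy as np

def bernstein_vazirani(s_bits):
    """Classically simulate the Bernstein-Vazirani circuit for hidden string s."""
    n = len(s_bits)
    s = int("".join(map(str, s_bits)), 2)
    # H on every qubit of |0...0> gives the uniform superposition.
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n))
    # Phase oracle: |x> -> (-1)^(s·x) |x>, applied once.
    for x in range(2 ** n):
        if bin(x & s).count("1") % 2:
            state[x] = -state[x]
    # Final Hadamard layer: H^{⊗n} as a Kronecker product.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    full = np.array([[1.0]])
    for _ in range(n):
        full = np.kron(full, H)
    state = full @ state
    # All amplitude now sits on basis state |s>: read it off.
    return [int(b) for b in format(int(np.argmax(np.abs(state))), f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))  # recovers [1, 0, 1, 1]
```

On real, noisy hardware the output distribution spreads away from |s>, which is exactly where compiler optimizations of the kind Superstaq advertises earn their benchmark gains.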

Quantum computers are noisy and error-prone, making optimized circuit compilation critical for obtaining useful results. With Superstaq's full-stack solution, application developers, researchers, quantum hardware providers, and users at national labs accelerate their time-to-market and boost the computational power of their machines and applications. The platform integrates seamlessly with Qiskit Runtime, maximizing the efficiency of quantum computations.

"We're excited to introduce the Superstaq open beta, which signifies a pivotal juncture in the quantum computing landscape. Through its advanced features and user-focused design, Superstaq enables quantum enthusiasts, researchers, and industry leaders to fully unlock the potential of quantum applications," shared Dr. Pranav Gokhale, VP of Quantum Software at Infleqtion.

The open beta release of Superstaq introduces a host of user-centric enhancements. New tutorials and updated documentation ensure easier onboarding, making quantum computing accessible to a broader audience. The platform also offers improved error messaging and resolution channels, enhancing the overall user experience.

"Sandia National Laboratories' close collaboration with Infleqtion's Superstaq team has been invaluable in helping Sandia provide researchers around the world low-level access to our quantum computing testbed QSCOUT, a versatile, open, trapped-ion quantum computer. The team tailored compiler optimization techniques attuned to our hardware's specific performance and capabilities. These routines have focused on the advantages, challenges, and noise characteristics of QSCOUT's continuously parameterized two-qubit gateset, yielding exciting developments. Our productive endeavor is underscored by a deeply rooted, shared co-design philosophy that has improved both QSCOUT and Superstaq. As QSCOUT has evolved, the Superstaq team has been a valued, collaborative partner," said Christopher Yale, Experimental Team Lead on QSCOUT.

In addition to QSCOUT, users of Superstaq include other national laboratories, such as Argonne and the Advanced Quantum Testbed at Lawrence Berkeley; academic and educational institutions, such as Northwestern University and QuSTEAM; and financial companies, such as Morningstar, in addition to many others.

Ji Liu, a postdoctoral appointee at the U.S. Department of Energy's Argonne National Laboratory, shared, "Superstaq's low-level quantum programming primitives unlock significant advances in performance for applications such as Toffoli gates and Hamiltonian simulation algorithms. The access to native gates and pulse-level controls is important for optimizing execution on quantum hardware."

"Superstaq has been a fantastic resource," said Bennett Brown, Executive Director of QuSTEAM. "I'm excited about what's next for the platform and leveraging this power and their team's expertise to advance undergraduate education with project-based course modules, further growing our quantum community."

Dr. Gokhale will participate as a panelist at IEEE Quantum Week, "From the Capitol to the Laboratory: How Industry and Academia Can Leverage National Policy for Funding of QIS," on September 20th at 3:00 Pacific Time (PDT). The event will also feature a paper presentation on arXiv. Further details about Superstaq can be found online at https://www.infleqtion.com/superstaq, and interested users can join the Superstaq open beta at https://superstaq.infleqtion.com.

About Infleqtion

Infleqtion delivers high-value quantum information precisely where it is needed. By operating at the Edge, our software-configured, quantum-enabled products deliver unmatched levels of precision and power, generating streams of high-value information for commercial organizations, the United States, and allied governments. With 16 years of ColdQuanta's pioneering quantum research as our foundation, our hardware products and AI-powered solutions address critical market needs in PNT, global communication security and efficiency, resilient energy distribution, and accelerated quantum computing. Headquartered in Austin, TX, with offices in Boulder, CO; Chicago, IL; Madison, WI; Melbourne, AU; and Oxford, UK. Learn how Infleqtion is revolutionizing how we communicate, navigate, and discover at http://www.Infleqtion.com.

The names Infleqtion, Super.tech, Superstaq, ColdQuanta, and the Infleqtion logo are registered trademarks of Infleqtion, Inc.

SOURCE Infleqtion

Read the rest here:
Infleqtion Unveils Open Beta Release of Superstaq: Accelerating ... - PR Newswire

Read More..

Quantagonia's HybridSolver is Now Accessible through the Strangeworks Platform – Quantum Computing Report

Strangeworks has added another partner to its quantum syndicate of hardware, software, and service providers: Quantagonia, a German software company that specializes in providing optimization solutions for customers through its HybridSolver. Quantagonia's software can accept problem formulations in multiple ways, including mixed integer programming (MIP), linear programming (LP), and quadratic unconstrained binary optimization (QUBO), and compile the problem to run on many different quantum and quantum-inspired backends, including CPUs, GPUs, and QPUs. Optimization represents an important class of use cases, and Quantagonia's tools and experience can help a user find the best optimization approach from the many available. Users with a Strangeworks account can access Quantagonia's HybridSolver for free on problems with up to 50 variables; running larger problems will entail some charges. Strangeworks has issued a press release announcing the new partnership, available here.
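For readers unfamiliar with the QUBO formulation mentioned above: a QUBO asks for a binary vector x minimizing x^T Q x. The brute-force reference solver below is an illustrative sketch only; it does not use Quantagonia's HybridSolver API, and the example matrix Q is a made-up toy instance:

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors x (toy sizes only)."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy instance: diagonal terms reward setting each bit, the off-diagonal
# penalty discourages setting both, so the optimum sets exactly one bit.
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
x, val = solve_qubo_brute_force(Q)
print(x, val)  # an optimal assignment and its objective value of -1.0
```

Exhaustive search grows as 2^n, which is precisely why dedicated hybrid solvers and quantum backends are interesting for larger instances.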

September 12, 2023

Continue reading here:
Quantagonias HybridSolver is Now Accessible through the Strangeworks Platform - Quantum Computing Report

Read More..

D-Wave CEO: Quantum computing will ‘fundamentally transform’ the … – Finbold – Finance in Bold

In the ever-evolving realm of technology, one concept stands out as the harbinger of the future: quantum computing. It's a revolutionary field that is reshaping the boundaries of what's possible in computation.

Market watchers and tech leaders widely believe that quantum technology has the potential to revolutionize industries, akin to the transformative impacts of artificial intelligence (AI) and cloud computing, ushering in a new era of unparalleled computational capabilities.

Alan Baratz, CEO of D-Wave Quantum (NYSE: QBTS), reiterated those views, in his September 11 interview with Bloomberg Technology.

Notably, Baratz believes quantum technology is going to fundamentally transform the way businesses operate and have a huge impact on the social and economic environment.

Due to its immense potential, the US must accelerate investments in the development of quantum computing, he added, as its biggest economic rival China continues to make progress in this area.

While its potential remains unquestionable, commercial quantum computing is still early, Baratz said when Bloomberg Technology co-host Ed Ludlow asked him why D-Wave continues to generate low revenues despite having 60 commercial customers.

Baratz acknowledged that his company was also not commercial until more than a year ago, but nevertheless, the company has seen its bookings accelerate quarter-over-quarter for five quarters now, thanks to its unusual approach.

Additionally, D-Wave has witnessed its average deal size grow substantially from tens of thousands of dollars to well into the hundreds of thousands of dollars.

Having said that, Baratz said he is really excited about the company's future prospects, even though the company, and the broader quantum tech space, are still in their early days.

The firm's CEO stated that D-Wave has developed its technology "entirely by ourselves." Today, the company holds more than 200 US-granted patents, with an additional 100 in process worldwide.

At the time of writing, D-Wave stock stood at $1.07, after soaring more than 17% in the past 24 hours.

Over the past week, QBTS remains down more than 5% and over 25% on the month.

The stock reached a 2023 high of $2.91 in mid-July, but the share price declined significantly after the company reported weak Q2 earnings and forward guidance, despite robust bookings growth.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

Excerpt from:
D-Wave CEO: Quantum computing will 'fundamentally transform' the ... - Finbold - Finance in Bold

Read More..

Research Highlights: Unveiling the First Fully Integrated and … – insideBIGDATA

Quantinuum, a leading integrated quantum computing company, has published full details of its complete Quantum Monte Carlo Integration (QMCI) engine. QMCI applies to problems that have no analytic solution, such as pricing financial derivatives or simulating the results of high-energy particle physics experiments, and promises computational advances across business, energy, supply chain logistics, and other sectors.

The QMCI tool, utilizing advanced quantum algorithms, will allow quantum computers to perform estimations more efficiently and accurately than equivalent classical tools, indicating an early-stage quantum advantage in areas such as derivative pricing, portfolio risk calculations, and regulatory reporting. A white paper supporting the new tool reveals that QMCI benefits from a computational complexity advantage over classical MCI, and suggests the engine has the potential to provide quantum usefulness in its current form.

The white paper,A Modular Engine for Quantum Monte Carlo Integration, has been made available on arXiv, detailing, among other items, the enhanced P-builder, a tool for constructing quantum circuits representing commonplace computational methods used in finance. The white paper also proposes how users of the new tool could obtain quantum advantage without compromising statistical robustness in the ensuing estimates.

Ilyas Khan, Chief Product Officer of Quantinuum, said: "Quantinuum's end-to-end QMCI engine, the first ever complete quantum solution, offers the prospect of an immediate boost to the productivity of users in at least two sectors: banking and financial institutions, and scientists who expect quantum computers to help them process the vast amounts of data generated in experimental fields such as high energy physics. Our QMCI engine is the culmination of years of work by our algorithms team, and highlights just how quantum computers will offer practical utility. Our modular approach also future-proofs the engine as quantum computing hardware advances."

The engine has four modules: loading probability distributions and random processes as quantum circuits; programming a wide variety of financial calculations; programming different statistical quantities (e.g. mean, variance, and others); and estimating quantum amplitude, which is the core source of computational advantage in QMCI. The engine features a resource mode, which precisely quantifies the quantum and classical resources needed for user-specified calculations, a feature essential for predicting when particular applications will enjoy quantum advantage. Thus, the paper reveals a direct line of sight to quantum advantage and concludes that users will achieve useful benefits sooner still.
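To ground the comparison, classical Monte Carlo integration estimates the expectation of a function of a random variable by averaging samples, with error shrinking as 1/√N in the sample count; quantum amplitude estimation targets the same quantity with quadratically fewer samples. The snippet below is a minimal classical baseline only, not Quantinuum's engine, and the lognormal payoff, strike, and all parameters are assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_estimate(f, sampler, n_samples):
    """Classical Monte Carlo: estimate E[f(X)] and its standard error."""
    samples = f(sampler(n_samples))
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

# Toy derivative-pricing-style expectation: E[max(e^Z - K, 0)] for
# Gaussian Z, i.e. the payoff shape inside a simple European call.
K = 1.0  # illustrative strike
payoff = lambda z: np.maximum(np.exp(z) - K, 0.0)

mean, stderr = mc_estimate(payoff, lambda n: rng.normal(0.0, 0.2, n), 100_000)
print(f"estimate: {mean:.4f} ± {stderr:.4f}")
```

Halving the classical standard error requires four times as many samples; the amplitude-estimation module described above is what replaces that 1/√N scaling with roughly 1/N.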

Dr Steven Herbert said: The QMCI engine taps into rapidly growing demand for tools that help global organizations in finance and other sectors explore and evaluate their route towards quantum advantage. Classical Monte Carlo integration is the preferred method in a range of computational areas where analytic solutions are unavailable and it is widely recognized that these methods will benefit from a quantum advantage. By taking a modular approach, we will equip those scientific and financial professionals with a platform that supports them flexibly through rapid technological advances in the years to come.

The new white paper sets out the areas that stand to benefit from the development of QMCI, beyond finance, including achieving efficiencies in supply chain and logistics, energy production and transmission, and data-intensive fields of science such as solving the high-dimensional integrals in high-energy physics. It concludes that use cases such as estimation and forecasting can benefit from the new QMCI engine in its current form.

Banks and financial institutions are expected to increase investment in quantum computing capabilities from $80 million in 2022 to $19 billion in 2032, growing at a 10-year CAGR of 72%.

Read more:
Research Highlights: Unveiling the First Fully Integrated and ... - insideBIGDATA

Read More..