Category Archives: Deep Mind
AI Can Be an Extraordinary Force for Good – if It's Contained – WIRED
In a quaint Regency-era office overlooking London's Russell Square, I cofounded a company called DeepMind with two friends, Demis Hassabis and Shane Legg, in the summer of 2010. Our goal, one that still feels as ambitious and crazy and hopeful as it did back then, was to replicate the very thing that makes us unique as a species: our intelligence.
To achieve this, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity. Since such a system would benefit from the massively parallel processing of supercomputers and the explosion of vast new sources of data from across the open web, we knew that even modest progress toward this goal would have profound societal implications.
It certainly felt pretty far-out at the time.
But AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I'm even close to right, the implications are truly profound.
Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone's direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn't just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks. Now, alongside a host of technologies including synthetic biology, robotics, and quantum computing, a wave of fast-developing and extremely capable AI is starting to break. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.
As a builder of these technologies, I believe they can deliver an extraordinary amount of good. But without what I call containment, every other aspect of a technology, every discussion of its ethical shortcomings, or the benefits it could bring, is inconsequential. I see containment as an interlocking set of technical, social, and legal mechanisms constraining and controlling technology, working at every possible level: a means, in theory, of evading the dilemma of how we can keep control of the most powerful technologies in history. We urgently need watertight answers for how the coming wave can be controlled and contained, how the safeguards and affordances of the democratic nation-state, critical to managing these technologies and yet threatened by them, can be maintained. Right now no one has such a plan. This indicates a future that none of us want, but it's one I fear is increasingly likely.
Facing immense ingrained incentives driving technology forward, containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible.
It would seem that the key to containment is deft regulation on national and supranational levels, balancing the need to make progress alongside sensible safety constraints, spanning everything from tech giants and militaries to small university research groups and startups, tied up in a comprehensive, enforceable framework. We've done it before, so the argument goes; look at cars, planes, and medicines. Isn't this how we manage and contain the coming wave?
If only it were that simple. Regulation is essential. But regulation alone is not enough. Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. Truth is, though, novel threats are just exceptionally difficult for any government to navigate. That's not a flaw with the idea of government; it's an assessment of the scale of the challenge before us. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.
See the article here:
AI Can Be an Extraordinary Force for Good – if It's Contained - WIRED
The Coming Wave by Mustafa Suleyman review – a tech tsunami – The Guardian
Science and nature books
The co-founder of DeepMind issues a terrifying warning about AI and synthetic biology – but how seriously should we take it?
Scott Shapiro
On 22 February 1946, George Kennan, an American diplomat stationed in Moscow, dictated a 5,000-word cable to Washington. In this famous telegram, Kennan warned that the Soviet Union's commitment to communism meant that it was inherently expansionist, and urged the US government to resist any attempts by the Soviets to increase their influence. This strategy quickly became known as "containment" and defined American foreign policy for the next 40 years.
The Coming Wave is Suleyman's book-length warning about technological expansionism: in close to 300 pages, he sets out to persuade readers that artificial intelligence (AI) and synthetic biology (SB) threaten our very existence and we only have a narrow window within which to contain them before it's too late. Unlike communism during the cold war, however, AI and SB are not being forced on us. We willingly adopt them because they not only promise unprecedented wealth, but solutions to our most intractable problems: climate change, cancer, possibly even mortality. Suleyman sees the appeal, of course, claiming that these technologies will usher in a new dawn for humanity.
An entrepreneur and AI researcher who co-founded DeepMind in 2010, before it was acquired by Google in 2014, Suleyman is at his most compelling when illustrating the promises and perils of this new world. In breezy and sometimes breathless prose, he describes how human beings have finally managed to exert power over intelligence and life itself.
Take the AI revolution. Language models such as ChatGPT are just the beginning. Soon, Suleyman predicts, AI will discover miracle drugs, diagnose rare diseases, run warehouses, optimise traffic, and design sustainable cities. We will be able to tell a computer program to make $1 million on Amazon in a few months and it will carry out our instructions.
The problem is that the same technologies that allow us to cure a disease could be used to cause one – which brings us to the truly terrifying parts of the book. Suleyman notes that the price of genetic sequencing has plummeted, while the ability to edit DNA with technologies such as Crispr has vastly improved. Soon, anyone will be able to set up a genetics lab in their garage. The temptation to manipulate the human genome, he predicts, will be immense.
Human mutants, however, are not the only horrors awaiting us. Suleyman envisions AI and SB joining forces to enable malicious actors to concoct novel pathogens. With a 4% transmissibility rate (lower than chickenpox) and 50% case fatality rate (about the same as Ebola), an AI-designed and SB-engineered virus could cause more than a billion deaths in a matter of months.
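To get a feel for the scale of that claim, here is a rough back-of-envelope sketch in Python. The 25% attack rate is an assumption added purely for illustration (the review quotes only transmissibility and fatality figures), so treat this as an order-of-magnitude sanity check rather than an epidemiological model:

```python
# Back-of-envelope only; the attack rate is an assumed figure for illustration,
# not a number from the book, and real outbreaks depend on interventions,
# immunity and contact patterns that this ignores.
world_population = 8_000_000_000
assumed_attack_rate = 0.25     # assumed: fraction of the world eventually infected
case_fatality_rate = 0.50      # roughly Ebola-like, as the review states

deaths = world_population * assumed_attack_rate * case_fatality_rate
print(f"{deaths:,.0f}")        # 1,000,000,000 -- the order of magnitude cited
```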
Despite these risks, Suleyman doubts any nation will make the effort to contain these technologies. States are too dependent on their economic benefits. This is the basic dilemma: we cannot afford not to build the very technology that might cause our extinction. Sound familiar?
The Coming Wave is not about the existential threat posed by superintelligent AIs. Suleyman thinks that merely smart AIs will wreak havoc precisely because they will vastly increase human agency in a very short period. Whether via AI-generated cyber-attacks, homebrewed pathogens, the loss of jobs due to technological change, or misinformation aggravating political instability, our institutions are not ready for this tsunami of tech.
He repeatedly tells us that "the wave is coming", "the coming wave is coming", even "the coming wave really is coming". I suppose living through the past 15 years of AI research, and becoming a multimillionaire in the process, would turn anyone into a believer. But if the past is anything to go by, AI is also known for its winters, when initial promise stalled and funding dried up for long periods. Suleyman disregards the real possibility that this will happen again, thereby giving us more time to adapt to and even stem the tide of social change.
But even if progress continues its frenetic pace, it is unlikely that societies will tolerate the ethical abuses Suleyman fears most. When a Chinese scientist revealed in 2018 that he had edited the genes of twin girls, he was sentenced to three years in prison, universally condemned, and there have been no similar reports since. The EU is set to prohibit certain forms of AI such as facial recognition in public spaces in its forthcoming AI Act. Normal legal and cultural pushback will probably slow the proliferation of the most disruptive and disturbing practices.
Despite claiming that the containment problem is the defining challenge of our era, Suleyman does not support a tech moratorium (he did just start a new AI company). Instead he sets out a series of proposals at the end of the book. They are unfortunately not reassuring.
For example, Suleyman suggests that AI companies spend 20% of R&D funds on safety research, but does not say why companies would divert capital away from rushing their new products to market. He advocates banning AI in political ads, but doing so would violate the first amendment to the US constitution. He proposes an international anti-proliferation treaty, but does not give us any indication of how it might be enforced. At one point, Suleyman hints that the US may need to coerce other countries to comply: "Some measure of anti-proliferation is necessary. And, yes, let's not shy away from the facts; that means real censorship, possibly beyond national borders." I don't know exactly what he means here, but I don't like the way it sounds.
Suleyman pushes these costly proposals despite conceding that his catastrophic scenarios are tail risks. Yes, the probability of doomsday is low, but the consequences would be so catastrophic that we must treat the possibility as a clear and present danger. One very large elephant in the room is climate change. Unlike the AI apocalypse that may happen in the future, a climate emergency is happening right now. This July was the hottest on record. Containing carbon, not AI, is the defining challenge of our era. Yet here, Suleyman is strikingly and conveniently optimistic. He believes that AI will solve the climate emergency. That is a happy thought – but if AI will solve the climate problem, why can't it solve the containment problem too?
If the book's predictions about AI are accurate, we can safely ignore its proposals. Wait a few years and we can just ask ChatGPT-5, -6, or -7 how to handle the coming wave.
Scott Shapiro is professor of law and philosophy at Yale and author of Fancy Bear Goes Phishing (Allen Lane). The Coming Wave by Mustafa Suleyman and Michael Bhaskar is published by Bodley Head (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply.
Go here to read the rest:
The Coming Wave by Mustafa Suleyman review – a tech tsunami - The Guardian
Google to require disclosure of AI use in political ads – POLITICO
While the Federal Election Commission hasn't set rules on using AI in political campaign ads, in August it voted to seek public comments on whether to update its misinformation policy to include deceptive AI ads.
The Google policy change also comes as Congress is working on comprehensive legislation to set guardrails on AI, and is meeting next week with leaders in the generative AI space, including Sundar Pichai, CEO of Google, which owns the AI subsidiary DeepMind.
The specifics: Google's latest rule update – which also applies to YouTube video ads – requires all verified advertisers to prominently disclose whether their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events. The company mandates that the disclosure be clear and conspicuous on the video, image or audio content. Such disclosure language could be "this video content was synthetically generated" or "this audio was computer generated," the company said.
A disclosure wouldn't be required if AI tools were used in editing techniques, like resizing or cropping, or in background edits that don't create realistic interpretations of actual events.
Political ads that don't have disclosures will be blocked from running or later removed if they evaded initial detection, said a Google spokesperson, but advertisers can appeal, or resubmit their ads with disclosures.
Elections worldwide: Google's policy also updates its existing election ads rules in regions outside the U.S., including Europe, India and Brazil, all of which have elections in 2024. It will also apply to advertisements using deepfakes – videos or images that have been synthetically created to mislead – which are banned under the company's existing misrepresentation policy.
Facebook currently doesn't require the disclosure of synthetic or AI-generated content in its ads policies. It does have a policy banning manipulated media in videos that are not in advertisements, and bans the use of deepfakes.
Link:
Google to require disclosure of AI use in political ads - POLITICO
Sharon Li – MIT Technology Review
Original post:
Sharon Li - MIT Technology Review
Thalassophobia (Fear Of The Ocean): Symptoms And Treatment – Forbes
Thalassophobia is an intense fear of deep or large bodies of water, like the ocean or the sea. It falls under a category of specific phobia, which is defined as an intense, irrational fear of something that poses little or no actual danger, and affects 9.1% of U.S. adults, according to the National Institute of Mental Health.
"A person would know they have thalassophobia because when they are exposed to deep open water or the ocean, whether in person or even just by thinking about it or seeing pictures, they would experience a number of possible physical and psychological symptoms," says Deborah Courtney, Ph.D., a licensed psychotherapist with a private practice in New York and member of the Forbes Health Advisory Board.
Thalassophobia is categorized under phobias of the natural environment, such as water or heights, one of several specific phobia categories defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR), the text U.S. mental health practitioners refer to for making mental health diagnoses.
Often thalassophobia (and most phobias) can be traced back to an early life trauma, which could be processed through psychodynamic therapy, an approach that "explores how early life experiences create templates that constrict how we live in the future," explains Sabrina Romanoff, Psy.D., a Harvard-trained clinical psychologist, professor and researcher based in New York and a member of the Forbes Health Advisory Board. With therapy, "people can learn to take risks by challenging those templates and revise them to be more aligned with their current reality and values," she says.
Continued here:
Thalassophobia (Fear Of The Ocean): Symptoms And Treatment - Forbes
Disinformation wars: The fight against fake news in the age of AI – New Scientist
In October 2021, Phil Howard, an internet researcher at the University of Oxford, was alerted to a preposterous story on social media. It alleged that the covid-19 pandemic was started by a shipment of Maine lobsters that arrived in Wuhan, China, days before the first outbreak. He and his colleagues spent months trying to track down the source and didn't get to the bottom of it – except that it probably originated in China, possibly through the state-owned TV channel CGTN.
"I felt my career had hit a new low," says Howard. "What was so ridiculous was the enormous effort that we needed to expose a ridiculous attempt to manipulate public opinion. I realised that I didn't want to do that work myself, so I decided to try and come up with an initiative that would do something about the problem in a systematic way."
Today, Howard is chair of a new organisation called the International Panel on the Information Environment, one of many initiatives pushing back against the pollution of the information ecosystem. Regulators, too, are finally lacing up their own boots after spending years sitting on their hands.
The stakes couldn't be higher, with the recent rise of generative artificial intelligence and its capacity to produce persuasive disinformation on an industrial scale. Many researchers are saying that the next two years are make or break in the information wars, as deep-pocketed bad actors escalate their disinformation campaigns, while the good guys fight back. Which side prevails will determine how the information environment and everything it shapes, from people's beliefs about vaccines to the outcomes of elections…
Read the original here:
Disinformation wars: The fight against fake news in the age of AI - New Scientist
Richard Zhang – MIT Technology Review
See the rest here:
Richard Zhang - MIT Technology Review
Pranav Rajpurkar – MIT Technology Review
View post:
Pranav Rajpurkar - MIT Technology Review
Here's exactly what Google will argue to fight the DOJ's antitrust claims – Ars Technica
Today, DC-based US District Court Judge Amit Mehta will hear opening arguments in a 10-week monopoly trial that could disrupt Google's search business and redefine how the US enforces antitrust law in the tech industry.
The trial comes three years after the Department of Justice began investigating whether Google – currently valued at $1.7 trillion – potentially abused its dominance in online search to make it nearly impossible for rival search engines to compete. Today, Google controls more than 90 percent of the search engine market within the US and globally, and this has harmed competitors and consumers, the DOJ argued, by depriving the world of better ways to search the web.
"Google's anticompetitive conduct harms consumers – even those who prefer its search engine – because Google has not innovated as it would have with competitive pressure," the DOJ wrote in a pre-trial brief filed on Friday.
This trial will be "the federal government's first monopoly trial of the modern Internet era," The New York Times reported. For officials, the trial marks a shift away from opposing anti-competitive tech mergers and acquisitions, which attempt to stop tech giants from getting even bigger in desirable markets. Starting with this trial, officials will now begin scrutinizing more closely than ever before how tech giants got so big in the first place.
No one's sure yet if today's antitrust laws can even answer some of the latest emerging questions about tech competition. Last year, Congress recommended changes to strengthen antitrust laws, including by directly prohibiting abuse of dominance. But rather than wait for lawmakers to update laws, the DOJ is treading aggressively into new territory and might even become emboldened to break up some of the biggest tech companies, especially if the DOJ proves that tech giants have carefully built their businesses to shut out competition, as it's accused Google of doing.
"Google has entered into a series of exclusionary agreements that collectively lock up the primary avenues through which users access search engines, and thus the Internet, by requiring that Google be set as the preset default general search engine on billions of mobile devices and computers worldwide and, in many cases, prohibiting preinstallation of a competitor," a 2020 DOJ press release announcing the lawsuit said.
In Google's case, the DOJ has alleged that Google pays billions to browser developers, wireless carriers, and mobile phone makers to drive users to Google by agreeing to exclusively feature Google as the default search engine. Competing search engines will likely never be able to afford to outbid Google and form their own agreements, the DOJ argued. And denying them prominent placements across nearly every distribution channel prevents Google's rivals from reaching a broader scale of users and collecting a wider range of data that's needed to improve their search products. Google's market share is already too great for any rival to catch up, the theory goes, and Google has allegedly invested lots of money to keep it that way.
A nonprofit advocating for strong enforcement of antitrust laws, the American Economic Liberties Project, cited a recent estimate that "suggests Google paid over $48 billion in 2022 for these agreements." The nonprofit claimed that Google's outsize investment in these agreements was a red flag signaling a power grab and "as a result, search competitors who can't afford to cut billion-dollar checks to make their search engines accessible face an insurmountable barrier to entry." The DOJ agrees.
For Google, paying for those agreements has always been worth it, but the DOJ's attack on Google's key business strategy could end up costing Google big. If Google loses the trial, the search giant risks paying damages, potentially being forced to change its business practices, and possibly even being ordered to restructure its business. Among other remedies, the American Economic Liberties Project and a coalition of 20 civil society and advocacy groups recommended that the DOJ order the "separation of various Google products from parent company Alphabet, including breakouts of Google Chrome, Android, Waze, or Google's artificial intelligence lab Deepmind."
Although the trial starts today, the DOJ and Google started sparring earlier this year, when the DOJ accused Google of routinely deleting evidence and Google denied that anything important was deleted. Ahead of the trial, the DOJ and Google have already deposed 150 people, but the trial will call upon even more witnesses, potentially exposing parts of Google's core businesses – and, because Apple signed an agreement to make Google a default search engine on iPhones, also Apple's. The testimony about trade secrets could be so sensitive that Apple executives requested the court block their testimony, but they were recently denied that request, Reuters reported.
The trial will be decided by Judge Mehta, and not a jury. To defend its search business, Google hired John E. Schmidtlein, a partner at the law firm Williams & Connolly, who will face off against the DOJ's head of antitrust, Jonathan Kanter.
Google employees, including CEO Sundar Pichai, and executives from other big tech companies like Apple will likely appear to testify. In a pre-trial brief, Google said that it would also be bringing in Google search users and advertisers as witnesses.
Former FTC antitrust attorney Sean Sullivan told Ars that the key questions before the court will be determining if Google possesses monopoly power in a relevant market and if Google got that power through anticompetitive conduct. That will likely require Google to explain in greater detail than ever before how it runs its search business, triggering even more interest in the trial, parts of which will be concealed from the public to protect Google trade secrets. While the DOJ appears confident that it can prove that Google has monopoly power, Sullivan said that antitrust cases are "rarely" straightforward.
"It is not enough that the defendant's conduct disadvantages its rivals," Sullivan told Ars. "Instead, the plaintiff ordinarily must prove that the defendant did things to disadvantage its rivals that lacked any business justification except to reduce the rivals competitive significance, or that needlessly disadvantaged rivals relative to other less restrictive ways of competing. That often funnels dispositive weight to whether the defendant can explain why it has done the things it is accused of doing."
The DOJ has said that it has the evidence to prove that Google didn't win a competitive advantage by building a superior product but by building a monopoly over search.
"A theme that will likely permeate Googles presentation at trial is its view that the company offers a quality search product that many users prefer," the DOJ wrote in its pre-trial brief. "But Googles conduct undermines this argument, as the monopolist feels the perpetual need to pay billions annually to ensure that consumers are routed to its search engine" by ensuring that Google "is the default search engine for iPhones, Android phones, and most third-party browsers, such as Mozillas Firefox."
This harms competitors, the DOJ said, because "Googles use of contracts to maintain default status denies rival search engines access to critical distribution channels and, by extension, the data necessary to improve their products." In turn, harming competition harms consumers because, while Google "attracts more users, who generate more data and who help attract more advertising revenue," Google's rivals "face an insurmountableand still growingdifference in scale" and are deprived "the opportunity to provide more accurate results" that would improve the search experience for consumers.
"Only Google has the full opportunity to improve," the DOJ argued.
At trial, the DOJ said it "will demonstrate that Google has maintained its durable monopolies in general search services – and the related advertising markets that fund it – by cutting off the air supply to Google's rivals," other general search engines.
To convince the court of Google's monopoly power, the DOJ said it would share evidence showing that Google competes in advertising markets with other general search engines that are also considered a "one-stop-shop" for searchers. And because Google has a dominant share of those markets and poses alleged barriers to entry, the DOJ argued that Google "easily meets" the threshold for monopoly power.
The DOJ claimed that it would show direct evidence that Google raised advertising prices "above a competitive level." Officials also urged the court not to buy into "any argument by Google that competitive pressures force it to innovate – thus cutting against a finding of monopoly power." That "runs counter to the evidence," the DOJ said.
To back up the DOJ, the agency will call upon its expert economist, Michael Whinston, to testify that this case "has all the hallmarks of an exercise of monopoly power in the relevant advertising markets." The DOJ will also tap accounting expert Christine Hammer to "explain that Google turns an exceptionally high profit margin" on its search businesses.
Ultimately, the DOJ told the court that "in a monopoly maintenance case such as this one, the operative question is not whether the defendant has acquired its monopoly through anticompetitive means, but whether, once acquired, the defendant used anticompetitive means to maintain its monopoly."
Google laid out its defense against the DOJ's antitrust claims in a pre-trial brief also filed Friday. Ahead of the trial, the company disputed how the DOJ defined its top competitors, argued that its dominance in the search market does not create barriers to entry for competitors, and alleged that procompetitive benefits outweigh harms of alleged misconduct. The company also challenged how Colorado, which is another plaintiff in the case after filing a separate complaint, defined its "duty to deal" with search advertising competitors like Microsoft.
Google's defense of its business starts by suggesting that the DOJ will be unable to prove that Google possesses "monopoly power in the relevant market" because the DOJ has failed to identify a relevant market with a "set of products that serve as important competitive constraints on Google."
In making this argument, Google seeks to convince the judge that rival general search engines like Bing, Yahoo!, and DuckDuckGo are not Google's key competitors, because they are not "the products and firms to which Google would lose search queries (in the short and long run) if the quality of its search offering declined."
Rather, Google claimed that "specialized vertical providers" (SVPs) like Amazon, Yelp, and Expedia – as well as "other popular places users go to search for information such as TikTok and Instagram" – are Google's top search market rivals. Google wrote:
By defining the relevant market to include only general search engines, plaintiffs distort the commercial reality that users routinely substitute other search providers for general search engines – such as Amazon when they shop, or Expedia when they travel – and thereby improperly exclude many of Google's strongest competitors from the relevant market.
At the trial, Google's economic expert, Mark Israel, will share empirical evidence showing that's why SVPs "are closer competitors to Google in these verticals than other general search engines like Bing," Google wrote.
In addition to being counted among Google's top rivals for users, SVPs and social media platforms are also more relevant competitors to Google for advertisers than other general search engines, Google argued.
At trial, Israel will share results from his "detailed examination" of how advertisers decide where to place search ads, showing that "Google's search ads compete with a wide range of other digital advertisements" – all of which Google said have been "improperly" excluded from the lawsuit's defined relevant markets allegedly harmed by Google's monopoly power.
Israel's examination will go up against testimony from Google advertisers themselves. In Colorado's pre-trial brief, the state confirmed that advertiser witnesses "who have products but decline to sell them on SVP platforms" would testify that "they cannot substitute SVP ads for general search ads."
Google has argued that if the DOJ and Colorado had included "strong competitors like Amazon, Expedia, Meta, and Yelp" in definitions of relevant markets, the court would see that Google does not enjoy monopoly power and instead "faces substantial competitive constraints for both users and advertisers." As proof, Google claimed that "on the user side, Google's share of user traffic has decreased while the search traffic captured by SVPs has increased over time." And it's most significantly "losing share to platforms such as Amazon and Meta," Google noted.
To prove that "the evidence on output, price, and quality indicates that Google competes in a fiercely competitive marketplace," Google said it will trot out Google search users as witnesses, who "will detail the intense continuing investments Google makes to compete, including the launch of thousands of product improvements every year, from generative artificial intelligence capabilities to enhancing the breadth and depth of local results to new flight search interfaces."
And Google also plans to introduce testimony from advertisers, who "will describe the investments Google makes to continue to improve its search advertising products" and explain how that generates more revenue for both advertisers and Google.
Beyond disagreeing with the DOJ on which businesses Google actually competes with, Google also seeks to dismantle the DOJ's argument that rival search engines face barriers to entry that are greater than ever, which prevents them from achieving the scale they need to compete with Google.
Computer science experts Edward Fox and Ophir Frieder will help Google explain from a technical perspective that, while the Google practice of collecting heaps of "user data can improve search quality," there "are diminishing returns to scale." They will also explain that competitors like Microsoft have "sufficient scale to compete" and detail the "many aspects of search that can be improved without additional scale." These experts will be largely charged with convincing the court that "Google owes its quality advantage over rivals to its 'superior skill, foresight, and industry' and not to anticompetitive conduct."
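As a purely illustrative toy model of that diminishing-returns argument (not the experts' actual analysis; the functional form and numbers are assumptions for the sketch), treat search quality as a concave function of data volume, so equal-sized additions of query data buy ever-smaller improvements:

```python
import math

# Toy model: quality grows logarithmically with query volume, so adding the
# same amount of data at each step yields a shrinking marginal gain.
def toy_quality(queries: float) -> float:
    return math.log10(1 + queries)

previous = toy_quality(1e9)
for total in (2e9, 3e9, 4e9, 5e9):   # add 1e9 queries at each step
    quality = toy_quality(total)
    print(f"{total:.0e} queries -> quality {quality:.3f} "
          f"(marginal gain {quality - previous:.3f})")
    previous = quality
```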
Disputing Google's computer science experts on the question of scale, Colorado said it will present testimony from employees of general search engine competitors who have struggled to overcome Google's monopoly, advertisers who have struggled with shifting investments from Google to Bing, and Google employees who will explain how Google benefits from massive amounts of user data.
Google also claimed that "there is no shortage" of procompetitive benefits from alleged anticompetitive behaviors. Among them, Google said that its economic expert, Kevin Murphy, and other witnesses would testify that Google becoming the default search engine both increased search usage and thus expanded its search output – benefiting users. This also provided critical funds to browsers, which could then invest in improvements and innovations to improve browser functionality, Google argued.
Perhaps more significantly, Google's mobile application distribution agreements (MADAs) have "greatly benefited consumers and search competition by fostering the success of the Android platform, an innovative mobile platform that today provides the most significant competition to market-leading Apple in the United States," Google argued.
Google has claimed that "reengineering the MADAs, as Plaintiffs demand, would undermine these important benefits without boosting search competition."
Since Google has identified alleged procompetitive benefits, it will be up to plaintiffs to rebut them in the trial. The DOJ already claimed in its pre-trial brief that "the trial record will confirm that any purported benefits are outweighed by the anticompetitive effects in this case." The DOJ also suggested that consumers could enjoy the same benefits if Google used less anti-competitive means, such as paying for search traffic instead of paying for exclusionary default search agreements.
Finally, Google argued that Colorado's claims that Google had a "duty to deal" with Microsoft and implement Bing Ads in its search engine marketing tool are "dead on arrival." And this could be crucial to proving what Sullivan said is essential to win the case: that Google "needlessly disadvantaged rivals relative to other less restrictive ways of competing."
However, according to Google, Colorado has failed to identify competitive harms caused to Microsoft by Google's delay in implementing Bing ads in its adtech tool. Google also argued that "even when a duty to deal exists, as long as the monopolist has 'valid business reasons' for the refusal to deal, the refusal does not violate" antitrust law.
"Google looks forward to presenting its case at trial," Google's pre-trial brief concluded.
It may not be easy for the public to follow all the nitty-gritty details of the Google antitrust trial online.
The juiciest parts of the trial – where Google and Apple will discuss trade secrets driving core business – will be sealed, and it appears that unsealed portions of the trial will likely only be accessible for those who can attend live.
In the days ahead of the trial, the American Economic Liberties Project joined other organizations in requesting that the court provide live audio feeds of the proceedings so that the public could follow the unsealed portions of the trial. But Mehta denied the request last weekend, citing judicial policy that does not allow either civil or criminal courtroom proceedings in the district courts to be broadcast, televised, recorded, or photographed for the purpose of public dissemination.
Among his reasons for denying the request, Mehta wrote that "the court has serious concerns about the unauthorized recording of portions of the trial, particularly witness testimony." Mehta said that the district court has recognized that live witness testimony is "particularly sensitive" and exempted those proceedings from audio feeds "even when those proceedings involved 'a matter of public interest.'"
Senior counsel at the American Economic Liberties Project, Katherine Van Dyck, in a press release criticized Mehta's decision, saying that it would shroud the trial in secrecy and "prioritizes Google's privacy over the public's First Amendment right to listen, in real time, to witnesses that will lay out how Google monopolized search engines."
"The company whose infamous mission is to organize the world's information and make it universally accessible was successful today in its effort to block the public from accessing the most important antitrust trial of the century," Van Dyck's statement said.
Van Dyck told Ars it seemed unlikely that Mehta would change his mind – beyond possibly piping an audio feed into a room where press will be gathered – but some advocates have not given up hope that the trial will be accessible online. On Sunday, the trade organization Digital Content Next filed a similar request asking the court to provide a live audio feed, arguing that "there is substantial public interest" in the case and "there does not appear to be any good reason to close the trial completely for this testimony other than to shield Google and Apple from potential embarrassment."
Van Dyck told Ars that Mehta's decision would make it harder for the public to follow the trial. She said that the American Economic Liberties Project would send experts to attend public hearings and provide timely updates on a website dedicated to the trial. That website will also compile research and reports from other advocacy groups that joined the American Economic Liberties Project's coalition, including advocates opposing monopolies like Open Markets and tech-focused groups like Fight for the Future and Demand Progress.
Sullivan told Ars that it's too soon to say how this antitrust trial will affect the average Internet user.
"If the government wins, the court could order Google to change its behavior or divest parts of its business," Sullivan told Ars. "If so, and if that order survives any relevant appeals, then users could see changes in the search products they are offered." However, if Google wins, "then maybe nothing changes at all."
Publicity from the trial could cause some Internet users to shift their behaviors, though, Sullivan suggested.
"One indirect but significant way that the case might impact average people is by causing them to stop and think about how they make their search decisions," Sullivan told Ars. "One important tenet of the government's case is that default search engine assignments are sticky. That might be true as a historical and empirical assertion, but nothing compels it to be so. Maybe this litigation inspires people to change the default search engines on their phones and personal computers."
Read the original here:
Here's exactly what Google will argue to fight the DOJ's antitrust claims - Ars Technica
Connor Coley – MIT Technology Review
Connor Coley, 29, developed open-source software that uses artificial intelligence to help discover and synthesize new molecules. The suite of tools, called ASKCOS, is used in production by over a dozen pharmaceutical companies, and tens of thousands of chemists, to create new medicines, new materials, and more efficient industrial processes.
One of the largest bottlenecks in developing new molecules has long been identifying interesting candidates to test. This process has played out in more or less the same way for decades: make a small change to a known molecule, and then test the novel creation for its biological, chemical, or physical properties.
Coley's approach includes a form of generative AI for chemistry. A chemist flags which properties are of interest, and AI-driven algorithms suggest new molecules with the greatest potential to have those properties. The system does this by analyzing known molecules and their current properties, and then predicting how small structural changes are likely to result in new behaviors.
As a result, chemists should spend less time testing candidates that never pan out. "The types of methods that we work on have led to factors of maybe two, three, maybe 10 [times] reduction in the number of different shots on goal you need to find something that works well," says Coley, who is now an assistant professor of chemical engineering and computer science at MIT.
Once it identifies the candidate molecules, Coley's software comes up with the best way to produce them. Even if chemists imagine or dream up a molecule, he says, figuring out how to synthesize something isn't trivial: "We still have to make it."
To that end, the system gives chemists a recipe of steps to follow that are likely to result in the highest yields. Coley's future work includes figuring out how to add laboratory robots to the mix, so that even more automated systems will be able to test and refine the proposed recipes by actually following them.
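As a rough sketch of the generate-and-rank loop described above – this is not the ASKCOS codebase or its API, and every function here is a hypothetical stand-in for a trained model – the core idea can be expressed in a few lines of Python:

```python
from typing import Callable

# Hypothetical sketch: enumerate small structural edits to known molecules,
# score each candidate with a property-prediction model, and keep the best.
# `enumerate_modifications` and `predict_property` are placeholders, not ASKCOS calls.
def rank_candidates(
    seed_molecules: list[str],
    enumerate_modifications: Callable[[str], list[str]],
    predict_property: Callable[[str], float],
    top_k: int = 5,
) -> list[tuple[str, float]]:
    scored = []
    for seed in seed_molecules:
        for candidate in enumerate_modifications(seed):
            scored.append((candidate, predict_property(candidate)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy stand-ins so the sketch runs end to end.
seeds = ["CCO", "CCN"]                  # SMILES strings for ethanol and ethylamine
edits = lambda s: [s + "C", s + "O"]    # pretend "small structural changes"
score = lambda s: len(s) / 10.0         # pretend property predictor
print(rank_candidates(seeds, edits, score))
```

A real system would swap the stand-ins for learned generative and predictive models, then hand the surviving candidates to a retrosynthesis planner to produce the step-by-step recipe the article describes.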