Category Archives: Deep Mind
On AI regulation, how the US steals a march over Europe amid the UK's showpiece Summit – The Indian Express
Over the last decade, Europe has taken a decisive lead over the US on tech regulation, with overarching laws safeguarding online privacy, curbing Big Tech dominance and protecting its citizens from harmful online content.
British Prime Minister Rishi Sunak's showpiece artificial intelligence event that kicked off in Bletchley Park on Wednesday sought to build on that lead, but the United States seems to have pulled one back, with Vice President Kamala Harris articulating Washington's plan to take a decisive lead on global AI regulation, helped in large measure by an elaborate template that was unveiled just two days prior to the Summit. Harris went on to flesh out the US plan for leadership in the AI regulation space in detail before a handpicked audience, which included former British PM Theresa May, at the American Embassy in London, while she was there to attend Sunak's Summit.
The template for Harris's guidance on tech regulation was the freshly released White House executive order on AI, which proposes new guardrails on the most advanced forms of the emerging tech, where American companies dominate. And in contrast to the UK-led initiative, where the Bletchley Declaration signed by 28 signatories was the only major high point, the US executive order is already being offered as a well-calibrated template that could work as a blueprint for every other country looking to regulate AI, including the UK.
Harris was emphatic in her assertion that there was a moral, ethical, and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits. And to address predictable threats, such as algorithmic discrimination, data privacy violations, and deep fakes, the US had last October released a Blueprint for an AI Bill of Rights, seen as a building block for Monday's executive order.
After its Bill of Rights was released, Washington had extensive engagement with the leading AI companies, most of which are American (with the exception of London-based DeepMind, which is now a Google subsidiary), in a bid to evolve a blueprint and to establish a minimum baseline of responsible AI practices.
"We intend that the actions we are taking domestically will serve as a model for international action, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world. Fundamentally, it is our belief that technology with global impact requires global action," Harris said just before travelling to the United Kingdom for the summit on AI safety.
"Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can. And under President Joe Biden, it is America that will continue to lead on AI," Harris said before the signing of Monday's executive order, clearly outlining Washington's intent to take the lead on AI regulation just ahead of the UK-led safety summit.
This assumes significance, given that over the last quarter century, the US Congress has not managed to pass any major regulation to rein in Big Tech companies or safeguard internet consumers, with the exception of just two narrower laws: one on children's privacy and the other on blocking trafficking content online.
In contrast, the EU has enforced the landmark GDPR (General Data Protection Regulation) since May 2018. The regulation is squarely focused on privacy, requires individuals to give explicit consent before their data can be processed, and has become a template used by over 100 countries. Then there is a pair of follow-on laws, the Digital Services Act (DSA) and the Digital Markets Act (DMA), that build on the GDPR's overarching focus on the individual's right over her own data. The DSA focuses on issues such as regulating hate speech and counterfeit goods, while the DMA defines a new category of dominant "gatekeeper" platforms and targets anti-competitive practices and the abuse of dominance by these players.
On AI, though, the tables may clearly be turning. Washington's executive order is a detailed blueprint aimed at safeguarding against threats posed by artificial intelligence and seeks to exert oversight over safety benchmarks that companies use to evaluate conversation bots such as ChatGPT and Google Bard. The move is being seen as a vital first step by the Biden administration in the process of regulating rapidly advancing AI technology, which White House deputy chief of staff Bruce Reed had described as a batch of reforms that amounted to "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust."
EU lawmakers, on the other hand, are yet to reach an agreement on several issues related to the bloc's proposed AI legislation, and a deal is reportedly not expected before December.
The US executive order requires AI companies to test their newer products and share the results with US federal government officials before new capabilities are made available to consumers. These safety tests undertaken by developers, known as red teaming, are aimed at ensuring that new products do not pose a threat to users or the public at large. Under powers enabled by the US Defense Production Act, the federal government can subsequently force a developer to either tweak a product or abandon an initiative.
As part of the initiative, the United States will launch an AI safety institute to evaluate known and emerging risks of AI models, a move that runs parallel to London's plan to set up a United Kingdom Safety Institute, though Washington has since indicated that the proposed US institute would establish a formal partnership with its UK counterpart.
Among the standards set out in the US order, a new rule seeks to codify the use of watermarks that alert consumers when they encounter a product enabled by AI, which could limit the threat posed by content such as deepfakes. Another standard stipulates that biotech firms take appropriate precautions when using AI to create or modify biological material. Incidentally, the industry guidance has been framed more as suggestions than binding requirements, giving developers and firms enough elbow room to work around some of the government's recommendations.
Also, the executive order explicitly directs American government agencies to implement changes in their use of AI, thereby creating industry best practices that Washington expects will be embraced by the private sector. The US Department of Energy and the Department of Homeland Security will, for instance, take steps to address the threat that AI poses to critical infrastructure, the White House said in a statement.
Harris said the focus of the move, while addressing the existential threats of generative AI highlighted by experts, also resonated at an individual, citizen level. "There are additional threats that also demand our action, threats that are currently causing harm and which, to many people, also feel existential. Consider, for example: when a senior is kicked off his health care plan because of a faulty AI algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit deep fake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family? And when people around the world cannot discern fact from fiction because of a flood of AI-enabled myth and disinformation, I ask, is that not existential for democracy?"
Varied Approaches
These developments come as policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT's explosive launch. The concerns being flagged fall into three broad heads: privacy, system bias and violation of intellectual property rights.
The policy response has differed across jurisdictions too, with the European Union having taken a predictably tougher stance by proposing to bring in its new AI Act, which classifies artificial intelligence applications by use case, based broadly on the degree of invasiveness and risk. The UK is seen to be on the other end of the spectrum, with a decidedly light-touch approach that aims to foster, and not stifle, innovation in this nascent field.
The US approach now slots somewhere in between, with Washington clearly setting the stage for an AI regulation rulebook with Monday's executive order, which builds on the White House Office of Science and Technology Policy's move last October to unveil its Blueprint for an AI Bill of Rights. China too has released its own set of measures to regulate AI.
This also comes in the wake of calls by tech leaders Elon Musk, Steve Wozniak (Apple co-founder) and over 15,000 others for a six-month pause in AI development in April this year, saying labs are in an out-of-control race to develop systems that no one can fully control. Musk was in attendance at Bletchley Park, where he warned that AI is one of the biggest threats to humanity and that the Summit was timely because AI posed an existential risk to humans, who face being outsmarted by machines for the first time.
EU investors including Bosch, SAP pump $500 million into … – The Stack
Germany's Aleph Alpha has raised over $500 million in a Series B round backed by Bosch and SAP among other new investors, as the startup looks to build on its promise to be the leading provider of sovereign generative AI applications in Europe, taking on OpenAI and the hyperscalers.
Aleph Alpha, founded by former Apple AI researcher Jonas Andrulis, has created the Luminous series of large language models (it has promised a 300 billion parameter Luminous World model later this year) which are commercially available for customers now, via a Python client.
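The models are reached programmatically through that client; a minimal completion call might look roughly like the sketch below, which is based on the publicly documented aleph-alpha-client package (class names, parameters and model identifiers are assumptions that may differ between client versions).

```python
# Rough sketch of calling a Luminous model through Aleph Alpha's Python client.
# Based on the publicly documented aleph-alpha-client package; exact class
# names, parameters and model identifiers may differ by client version.
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token="YOUR_API_TOKEN")  # placeholder token, not a real credential

request = CompletionRequest(
    prompt=Prompt.from_text("Explain attention in transformers in one sentence:"),
    maximum_tokens=64,
)

# "luminous-base" is one of the commercially available Luminous models.
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)
```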
Working with partner Graphcore, it has demonstrated some impressive AI compute efficiency progress as well as capabilities with a privacy and explainability focus (see, for example, its AtMan paper from earlier in 2023).
The Series B funding, it said in a press release on Monday, "strengthens the foundation for Aleph Alpha to further advance its proprietary AI research, accelerate development and commercialization of Generative AI for the most complex and critical applications," such as in data-sensitive industries like healthcare, finance, law, government and security.
At a press conference on Monday, Germany's Minister for Economic Affairs Robert Habeck suggested that the investment played to Europe's strategic priority as work continues to boost its data sovereignty.
"The thought of having our own sovereignty in the AI sector is extremely important. If Europe has the best regulation but no European companies, we haven't won much," Habeck said at the press conference.
"Aleph Alpha will continue to expand its offerings while maintaining independence and flexibility for customers in infrastructure, cloud compatibility, on-premise support and hybrid setups," said CEO Andrulis.
"The ongoing developments will extend interfaces and customization options tailored to business-critical requirements," he added in a release.
The massive funding round came as researchers at Google DeepMind suggested in a widely shared paper that the transformer models powering so much of the past year's AI hype were not as intelligent, perhaps, as many seem to believe: "When presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks," they wrote in a paper published on November 3.
"Together our results highlight that the impressive ICL [in-context learning] abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities."
‘We compete with everybody’: French AI start-up Mistral takes on … – Financial Times
A Third Way on AI – Mozilla & Firefox
Last week was an important moment in the debate about AI, with President Biden issuing an executive order and the UK's AI Safety Summit convening world leaders.
Much of the buzz around these events made it sound like AI presents us with a binary choice: unbridled optimism, or existential fear. But there was also a third path available, a nuanced, practical perspective that examines the real risks and benefits of AI.
There have been people promoting this third perspective for years, although GPT-fueled headlines of the past 12 months have often looked past them. They are foundations, think tanks, researchers and activists (including a number of Mozilla fellows and founders), plus the policymakers behind efforts like last year's Blueprint for an AI Bill of Rights.
We were happy to see the executive order echo many of the ideas that have emerged from this school of thought over the last few years, prioritizing practical, responsible AI governance. The UK Safety Summit started on a very different note, anchored in concerns around existential risks, though it also offered some welcome reframing.
As we look forward from this point, it feels important to highlight three key levers that will help us get closer to responsible AI governance: well-designed regulation, open markets, and open source. Some of these were in the news last week, while others require more attention. Together, they have the potential to help us shape AI in ways that are more trustworthy, empowering and equitable.
Regulation
As we saw last week, there is near consensus that AI presents risks and harms, from the immediate (from discrimination to disinformation) to the longer term (which are still emerging and being explored). There's also a growing consensus that regulation is a part of the solution.
But what exactly this regulation looks like and what its outcomes should be is where consensus breaks down. One thing is clear, though: Any regulatory framework should protect people from harm and provide mechanisms to hold companies accountable where they cause it.
The executive order included encouraging elements, balancing the need for a rights-respecting approach to addressing AI's present risks with exploration of longer-term, more speculative risks. It also acknowledges that the U.S. is still missing critical baseline protections, such as comprehensive privacy legislation that would work hand-in-hand with AI-specific rules.
The ideas that dominated the Safety Summit were less encouraging. They reinforced that old binary, either going too far or not far enough. There was a focus on self-regulation by AI companies (which isn't really governance at all). And there were nods towards the idea of licensing large language models (which would "only increase concentration and may worsen AI risks," in the words of Sayash Kapoor and Arvind Narayanan).
Open markets
To Arvind and Sayash's point, there is a problematic concentration of power in the tech industry. Decisions about AI, like who it most benefits or who is even allowed to access it, are made by a handful of people in just a few corners of the world. The majority of people impacted by this technology don't get to shape it in any meaningful way.
Competition is an antidote. AI development by not just big companies but also smaller ones (and nonprofits, too) has the potential to decentralize power. And government action to hinder monopolies and anti-competitive practices can accelerate this. The executive order takes note, calling on the Federal Trade Commission (FTC) to promote competition and protect small businesses and entrepreneurs.
It's important for this work to start now, both by enforcing existing competition law and through greater adoption of ex-ante interventions like the UK's DMCC bill. The previous decade showed how quickly incumbent players, like social media platforms, acquire or shut down competitors. And it's already happening again: Anthropic and OpenAI have familiar investors (Google and Amazon, and Microsoft, respectively), and once-independent laboratories like DeepMind were long ago acquired (by Google).
Open source
For smaller AI players to thrive in the marketplace, the core building blocks of the technology need to be broadly accessible. This has been a key lever in the past: open-source technology allowed a diverse set of players, like Linux and Firefox, to compete and thrive in the early days of the web.
Open source has a chance to play a role in fueling competition in AI and, more specifically, large language models. This is something organizations like Ai2, EleutherAI, Mistral, and Mozilla.ai are focused on. Open source AI also has the potential to strengthen AI oversight, allowing governments and public interest groups to scrutinize the technology and call out bias, security flaws, and other issues. We've already seen open source catch critical bugs in tooling used for core AI development. While open source isn't a panacea (and it can be twisted to further consolidate power if it's not done right), it has huge potential in helping more people participate in and shape the next era of AI.
It's important to note that there is a major threat to open source AI emerging: some use the fear of existential risk to propose approaches that would shut down open-source AI. Yes, bad actors could abuse open source AI models, but internet history shows that proprietary technologies are just as likely to be abused. Rushing to shut down open source AI in response to speculative fears, rather than exploring new approaches focused on responsible release, could unnecessarily foreclose our ability to tap into the potential of these technologies.
Collaboratively dealing with global problems is not a new idea in technology. In fact, there are many lessons to learn from previous efforts: how we dealt with cybersecurity issues like encryption, governed the internet across borders, and worked to counter content moderation challenges like disinformation. What we need to do is take the time to develop a nuanced approach to open source and AI. We are happy to see the EU's upcoming AI Act exploring these questions, and the recent U.S. executive order instructing the Department of Commerce to collect input on both the risks and benefits of dual-use foundation models with widely accessible weights (in essence, open-source foundation models). This creates a process to develop the kind of nuanced, well-informed approaches we need.
Which was exactly the goal of the letter on open source and AI safety that we both signed last week along with over 1,500 others. It was a public acknowledgement that open source and open science are neither a silver bullet nor a danger. They are tools that can be used to better understand risks, bolster accountability, and fuel competition. It also acknowledged that positioning tight and proprietary control of foundational AI models as the only path to safety is naive, and maybe even dangerous.
The letter was just that: a letter. But we hope it's part of something bigger. Many of us have been calling for AI governance that balances real risks and benefits for years. The signers of the letter include a good collection of these voices and many new ones, often coming from surprising places. The community of people ready to roll up their sleeves to tackle the thorny problems of AI governance (even alongside people they usually disagree with) is growing. This is exactly what we need at this juncture. There is much work ahead.
This week in AI: Can we trust DeepMind to be ethical? – TechCrunch
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.
This week in AI, DeepMind, the Google-owned AI R&D lab, released a paper proposing a framework for evaluating the societal and ethical risks of AI systems.
The timing of the paper, which calls for varying levels of involvement from AI developers, app developers and broader public stakeholders in evaluating and auditing AI, isn't accidental.
Next week is the AI Safety Summit, a U.K.-government-sponsored event that'll bring together international governments, leading AI companies, civil society groups and experts in research to focus on how best to manage risks from the most recent advances in AI, including generative AI (e.g. ChatGPT, Stable Diffusion and so on). There, the U.K. is planning to introduce a global advisory group on AI loosely modeled on the U.N.'s Intergovernmental Panel on Climate Change, comprising a rotating cast of academics who will write regular reports on cutting-edge developments in AI and their associated dangers.
DeepMind is airing its perspective, very visibly, ahead of on-the-ground policy talks at the two-day summit. And, to give credit where it's due, the research lab makes a few reasonable (if obvious) points, such as calling for approaches to examine AI systems at the point of human interaction and the ways in which these systems might be used and embedded in society.
But in weighing DeepMind's proposals, it's informative to look at how the lab's parent company, Google, scores in a recent study released by Stanford researchers that ranks 10 major AI models on how openly they operate.
Rated on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware it used, the labor involved in training and other details, PaLM 2, one of Google's flagship text-analyzing AI models, scores a measly 40%.
Now, DeepMind didn't develop PaLM 2, at least not directly. But the lab hasn't historically been consistently transparent about its own models, and the fact that its parent company falls short on key transparency measures suggests that there's not much top-down pressure for DeepMind to do better.
On the other hand, in addition to its public musings about policy, DeepMind appears to be taking steps to change the perception that it's tight-lipped about its models' architectures and inner workings. The lab, along with OpenAI and Anthropic, committed several months ago to providing the U.K. government early or priority access to its AI models to support research into evaluation and safety.
The question is, is this merely performative? No one would accuse DeepMind of philanthropy; after all, the lab rakes in hundreds of millions of dollars in revenue each year, mainly by licensing its work internally to Google teams.
Perhaps the lab's next big ethics test is Gemini, its forthcoming AI chatbot, which DeepMind CEO Demis Hassabis has repeatedly promised will rival OpenAI's ChatGPT in its capabilities. Should DeepMind wish to be taken seriously on the AI ethics front, it'll have to fully and thoroughly detail Gemini's weaknesses and limitations, not just its strengths. We'll certainly be watching closely to see how things play out over the coming months.
Here are some other AI stories of note from the past few days:
Machine learning models are constantly leading to advances in the biological sciences. AlphaFold and RoseTTAFold were examples of how a stubborn problem (protein folding) could be, in effect, trivialized by the right AI model. Now David Baker (creator of the latter model) and his labmates have expanded the prediction process to include more than just the structure of the relevant chains of amino acids. After all, proteins exist in a soup of other molecules and atoms, and predicting how they'll interact with stray compounds or elements in the body is essential to understanding their actual shape and activity. RoseTTAFold All-Atom is a big step forward for simulating biological systems.
Having a visual AI enhance lab work or act as a learning tool is also a great opportunity. The SmartEM project from MIT and Harvard put a computer vision system and ML control system inside a scanning electron microscope, which together drive the device to examine a specimen intelligently. It can avoid areas of low importance, focus on interesting or clear ones, and do smart labeling of the resulting image as well.
Using AI and other high-tech tools for archaeological purposes never gets old (if you will) for me. Whether it's lidar revealing Mayan cities and highways or filling in the gaps of incomplete ancient Greek texts, it's always cool to see. And this reconstruction of a scroll thought destroyed in the volcanic eruption that leveled Pompeii is one of the most impressive yet.
University of Nebraska-Lincoln CS student Luke Farritor trained a machine learning model to amplify the subtle patterns on scans of the charred, rolled-up papyrus that are invisible to the naked eye. His was one of many methods being attempted in an international challenge to read the scrolls, and it could be refined to perform valuable academic work. Lots more info at Nature here. What was in the scroll, you ask? So far, just the word "purple," but even that has the papyrologists losing their minds.
Another academic victory for AI is in this system for vetting and suggesting citations on Wikipedia. Of course, the AI doesn't know what is true or factual, but it can gather from context what a high-quality Wikipedia article and citation looks like, and scrape the site and web for alternatives. No one is suggesting we let the robots run the famously user-driven online encyclopedia, but it could help shore up articles for which citations are lacking or editors are unsure.
Language models can be fine-tuned on many topics, and higher math is surprisingly one of them. Llemma is a new open model trained on mathematical proofs and papers that can solve fairly complex problems. It's not the first (Google Research's Minerva is working on similar capabilities), but its success on similar problem sets and improved efficiency show that open models (for whatever the term is worth) are competitive in this space. It's not desirable that certain types of AI should be dominated by private models, so replication of their capabilities in the open is valuable even if it doesn't break new ground.
Troublingly, Meta is progressing in its own academic work toward reading minds, but as with most studies in this area, the way it's presented rather oversells the process. In a paper called "Brain decoding: Toward real-time reconstruction of visual perception," it may seem a bit like they're straight-up reading minds.
But it's a little more indirect than that. By studying what a high-frequency brain scan looks like when people are looking at images of certain things, like horses or airplanes, the researchers are able to then perform reconstructions in near real time of what they think the person is thinking of or looking at. Still, it seems likely that generative AI has a part to play here in how it can create a visual expression of something even if it doesn't correspond directly to the scans.
Should we be using AI to read people's minds, though, if it ever becomes possible? Ask DeepMind (see above).
Last up, a project at LAION that's more aspirational than concrete right now, but laudable all the same. Multilingual Contrastive Learning for Audio Representation Acquisition, or CLARA, aims to give language models a better understanding of the nuances of human speech. You know how you can pick up on sarcasm or a fib from sub-verbal signals like tone or pronunciation? Machines are pretty bad at that, which is bad news for any human-AI interaction. CLARA uses a library of audio and text in multiple languages to identify some emotional states and other non-verbal speech understanding cues.
Google lays foundation of next-stage AI with Gemini LLM – CIO Dive
Dive Brief:
Google was an early incumbent to enter the race to embed generative AI into core tools and services. As more providers launched their own capabilities, simply offering generative AI stopped being innovative. It's expected.
"It's almost irresponsible for a vendor not to have AI in their literature somewhere, or they're irrelevant all of a sudden," said Greg Myers, operating partner at investment firm Cota Capital, speaking Tuesday during the Dell Technologies Forum in Washington, D.C.
Enterprises are in search of the best of the best when it comes to tooling, but are taking different approaches to get there. Some are leaning on their existing cloud providers, drawn by the comfort of familiarity. Others are deploying crowdsourcing methods to empower employees, while another portion is waiting for the cream to rise to the top.
Google expects its array of solutions to win over customers as competition mounts. Currently, more than half of all the funded generative AI startups are Google Cloud customers, including AI21 Labs, Contextual, Elemental Cognition and Rytr, according to Pichai.
"I view it as a journey, and each generation is going to be better than the other," Pichai said. "We are definitely investing and the early results are very promising."
The company's CapEx costs increased in Q3, reaching $8 billion from $6.9 billion in the previous quarter. The growth was driven overwhelmingly by Google's technical infrastructure enhancements to support compute-heavy AI workloads, CFO Ruth Porat said. Upgrading infrastructure in preparation for increased adoption of generative AI is a trend across the cloud hyperscalers, including AWS, Microsoft Azure and Oracle.
"We remain committed to durably re-engineering our cost base in order to help create capacity for these investments in support of long-term sustainable financial value," Pichai said. "Across Alphabet, teams are looking at ways to operate as effectively as possible, focused on their biggest priorities."
The UK Lists Top Nightmare AI Scenarios Ahead of Its Big Tech … – WIRED
Deadly bioweapons, automated cybersecurity attacks, powerful AI models escaping human control. Those are just some of the potential threats posed by artificial intelligence, according to a new UK government report. It was released to help set the agenda for an international summit on AI safety to be hosted by the UK next week. The report was compiled with input from leading AI companies such as Google's DeepMind unit and multiple UK government departments, including intelligence agencies.
Joe White, the UK's technology envoy to the US, says the summit provides an opportunity to bring countries and leading AI companies together to better understand the risks posed by the technology. Managing the potential downsides of algorithms will require "old-fashioned organic collaboration," says White, who helped plan next week's summit. "These aren't machine-to-human challenges," White says. "These are human-to-human challenges."
UK prime minister Rishi Sunak will make a speech tomorrow about how, while AI opens up opportunities to advance humanity, it's important to be honest about the new risks it creates for future generations.
The UK's AI Safety Summit will take place on November 1 and 2 and will mostly focus on the ways people can misuse or lose control of advanced forms of AI. Some AI experts and executives in the UK have criticized the event's focus, saying the government should prioritize more near-term concerns, such as helping the UK compete with global AI leaders like the US and China.
Some AI experts have warned that a recent uptick in discussion about far-off AI scenarios, including the possibility of human extinction, could distract regulators and the public from more immediate problems, such as biased algorithms or AI technology strengthening already dominant companies.
The UK report released today considers the national security implications of large language models, the AI technology behind ChatGPT. White says UK intelligence agencies are working with the Frontier AI Task Force, a UK government expert group, to explore scenarios like what could happen if bad actors combined a large language model with secret government documents. One doomy possibility discussed in the report suggests a large language model that accelerates scientific discovery could also boost projects trying to create biological weapons.
This July, Dario Amodei, CEO of AI startup Anthropic, told members of the US Senate that within the next two or three years it could be possible for a language model to suggest how to carry out large-scale biological weapons attacks. But White says the report is a high-level document that is not intended to serve as a shopping list of all the bad things that can be done.
The UK report also discusses how AI could escape human control. If people become used to handing over important decisions to algorithms it becomes increasingly difficult for humans to take control back, the report says. But the likelihood of these risks remains controversial, with many experts thinking the likelihood is very low and some arguing a focus on risk distracts from present harms.
In addition to government agencies, the report released today was reviewed by a panel including policy and ethics experts from Google's DeepMind AI lab, which began as a London AI startup and was acquired by the search company in 2014, and Hugging Face, a startup developing open source AI software.
Yoshua Bengio, one of the three "godfathers of AI" who won the Turing Award, the highest award in computing, for machine-learning techniques central to the current AI boom, was also consulted. Bengio recently said his optimism about the technology he helped foster has soured and that a new "humanity defense" organization is needed to help keep AI in check.
‘Boetie Boer’: a deep dive into the terrifying mind of a monster – Daily Maverick
And you thought Devilsdorp was disturbing. Stewart "Boetie Boer" Wilken is a South African serial killer who was active in Port Elizabeth (now Gqeberha) during the 1990s. In February 1998, he was convicted on seven counts of murder (and two of sodomy, a few weeks before gay sex was legalised in South Africa) but admitted to many more murders in his written confession.
This Showmax Original true-crime series is the most frightening to date. Director Jasyn Howes interviews the detectives and forensic psychoanalysts involved in Wilken's capture, as well as one of the victims' families, and most alarmingly, the series takes us back to 1990, recreating Wilken's killing spree based on a recorded interview with the murderer himself from his prison cell.
The interview took place in 2006 with Dr Gérard Labuschagne, the former section head of the Investigative Psychology Section of the South African Police Service, who also features in the series.
"Labuschagne gave us his recording of just over three hours of Wilken sharing, in his own words, everything he'd done," says Howes.
Wilken had been in prison for eight years at the time of that interview, and was already sentenced to life behind bars, so he had no obvious reason to hide anything.
It should go without saying that a documentary with re-enacted crimes and first-person accounts of a ruthless serial killer will disturb some people, but best to say it anyway: things get evil very quickly; this is not an easy watch.
Don't think that because you make it through the first episode without nightmares you'll handle the rest. It only gets more intense as Wilken reveals how his fiendish crimes escalated. Murder, rape, paedophilia, necrophilia, bestiality, suicide, cannibalism, incest: you name it, this story has it all.
"Even if you are over 18 and a fan of true-crime documentaries, we strongly advise viewer discretion," says Allan Sperling, executive head of content at Showmax.
There have been a lot of conversations recently about the dangers of glamorising true crime in pop culture. When production houses leverage audiences' fascination (some would say morbid fascination) with murder to profit off real-life horrors, this can re-traumatise victims and potentially even inspire copycat criminals.
In 2022, Netflix came under fire for creating a series called Monster: The Jeffrey Dahmer Story, fictionalising the crimes of a real serial killer without consulting the families of the victims he brutally murdered. Even crime documentaries may hinder the pursuit of justice in some cases. Howes has made a concerted effort to avoid sensationalism and ethical pitfalls. Each episode is bookended by disclaimers indicating his awareness of these debates and stating his position on them. Episodes start with the text:
"The documentary aims to shed light on the chilling complexity of human actions and the pursuit of justice and not to glorify crime," and end with "Reasonable efforts were made to contact the families of the victims of Stewart Wilken's crimes."
The subtitle of this series speaks directly to the reason true crime is so popular: many people are extremely curious about how someone can do such terrible things, and what made them that way. These were the questions clinical psychologist Gérard Labuschagne sought to answer when he interviewed Wilken in 2006, to aid in profiling serial killers in general.
This pursuit is plagued by confirmation bias. Because the prospect of understanding criminals better is one of the strong appeals of true crime, serial killer documentaries tend to fixate on causality. If Wilken says it was his being bullied as a child that caused him to become a murderer, the filmmakers seem keen to believe him because it benefits their storytelling. But, as Labuschagne hints in the series, it's possible that this explanation is just an attempt to justify his unjustifiable actions by attributing them to his sob story.
Clinical psychologist Giada Del Fabbro builds on this theory in the series, quoting the psychological theorist William Fairbairn, who said: "A child would rather be the devil in a world where god existed than have no god whatsoever." It's a compelling notion, and yet loads of people are bullied or abused and still don't grow up to become serial killers, so it's problematic to latch onto this explanation.
The truth is that we can never truly get inside the mind of a monster. That is not to say that we shouldn't try, for the purpose of stopping people from becoming criminals, but we should be careful about jumping to conclusions about how minds work just because it aligns with the way we understand the world.
A snippet of Labuschagne's interview in the series distils what makes serial killers such a frightening idea: it's not that they're so different from the rest of us, it's actually that they're so similar:
"When you meet these people, you only know them through the context of the crimes they've committed, so whenever you do meet them it's not what you expect. 99.9 per cent of the time they're doing the same things everyone else is doing."
Serial killers often take advantage of positions of trust to mask their crimes. Wilken didn't exactly have the charisma of Ted Bundy, but he was purportedly great with kids and lived in a coloured community, spoke the language and the slang, and had married a woman of colour at a time when that was very unusual for a white Afrikaans man. The mere fact that he was known by the term of endearment "Boetie" is an indication that there was trust in the community, which he could exploit.
The 1990s in South Africa was a time of euphoria. The country was focused on Nelson Mandela's release, its first free elections, the 1995 Rugby World Cup and so on. Crime reporter Brett Adkins notes in the series that these fantastic changes somewhat distracted the media from layers of evil that remained.
During apartheid, the police force had been focused on other kinds of crimes, and law enforcement was not trained in investigating serial killings. This meant serial murder could easily be missed because it wasn't a priority. Norman Simons, the infamous Station Strangler, was only convicted in one case and was released on parole in July 2023.
It was only the urgency of the SAPS's inability to detain Moses Sithole, the "ABC Killer," while he was actively murdering that led it to enlist the help of an ex-FBI forensic psychologist to train officers to create serial killer profiles. This training was essential in catching Wilken a few years later because he defied some of the usual profile trends.
"Wilken was a highly unusual serial killer," says Howes. "Unlike most serial killers, he had more than one type of victim: predominantly female sex workers and young boys, usually street children, across multiple races."
Despite committing at least seven murders, Wilken didn't receive as much coverage as you'd expect, and this is generally attributed to the country's focus on its sweeping political changes.
It's also interesting to discover how the context of South Africa in the '90s personally shaped Wilken's descent into criminality. Wilken was a fisherman in a period when the industry was highly intertwined with drug trafficking. Drug boats would sell stimulants to fishermen to keep them working through the long hours of a haul; employees would work on trips of a week or two straight and then be paid in a lump sum, meaning that low- and middle-income communities would often have drug users return home loaded with cash to feed their habit. This vicious cycle exacerbated Wilken's psychopathy and aggression and spurred his first murder.
Because there was almost no video or radio archive to draw on, the series leans into recreations to bring the story to life visually. They're not great, and they get progressively worse. To put distance between the present and these past events, the re-enactments employ an overly theatrical technique: showing the clips in slow motion, muting them, and overlaying dialogue with a reverb effect.
Coupled with a yawn-inducing eerie soundtrack, most of the re-enactment scenes have the dull vibe of a try-hard ghost story, with the exception of Raven Swart's scenes as a young Wilken, which have an appropriately spine-chilling effect.
The interview sections of the film are less sensational and more gripping. The series explores the typical themes that serial killer shows go for, like early development and the nature of psychopathy, and having all that discussed by familiar types of South African characters brings it closer to home. DM
Boetie Boer is available on Showmax. Episodes air weekly on Wednesdays.
You can contact What We're Watching via [emailprotected]
AI's big players all flunked a major transparency assessment of their LLMs – Fortune
Hello and welcome to Eye on AI. This week was a big one for AI research, and we're going to start by diving into perhaps the most comprehensive attempt to interrogate the transparency of leading LLMs yet.
The Stanford Institute for Human-Centered AI released its Foundation Model Transparency Index, which rates major foundation model developers to evaluate their transparency. Driven by the fact that public transparency around these models is plummeting just as their societal impacts are skyrocketing, the researchers evaluated 100 different indicators of transparency covering how a company builds a foundation model, how that model works, and how it's actually used. They focused on 10 major foundation model developers (OpenAI, Anthropic, Google, Meta, Amazon, Inflection, AI21 Labs, Cohere, Hugging Face, and Stability) and designated a single flagship model from each developer for evaluation.
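The index itself works like a checklist: each of the 100 indicators is marked as satisfied or not, and a developer's score is essentially the share it satisfies. The toy sketch below illustrates that kind of indicator-based scoring; the indicator names and grouping are invented for illustration and are not the researchers' actual criteria or code.

```python
# Toy illustration of indicator-based transparency scoring.
# Indicator names and grouping are invented for illustration; they are not
# the Stanford researchers' actual criteria, data, or code.
indicators = {
    "upstream": {          # how the model is built
        "training_data_sources": 1,
        "data_labor_disclosed": 0,
        "compute_disclosed": 0,
    },
    "model": {             # how the model works
        "model_size_disclosed": 1,
        "capabilities_evaluated": 1,
        "risks_evaluated": 0,
    },
    "downstream": {        # how the model is used
        "usage_policy_published": 1,
        "affected_users_reported": 0,
        "feedback_channel": 1,
    },
}

def transparency_scores(indicators):
    """Return per-domain percentages and an overall 0-100 score."""
    per_domain = {}
    satisfied = total = 0
    for domain, checks in indicators.items():
        per_domain[domain] = round(100 * sum(checks.values()) / len(checks))
        satisfied += sum(checks.values())
        total += len(checks)
    return per_domain, round(100 * satisfied / total)

per_domain, overall = transparency_scores(indicators)
print(per_domain)   # per-domain disclosure rates
print(overall)      # headline score out of 100
```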
Eye on AI talked with one of the researchers behind the index to get a deeper understanding of how the companies responded to their findings, what it all means about the state of AI, and their plans for the index going forward, but first let's get into the results. To sum it up, everyone failed.
Meta (evaluated for Llama 2) topped the rankings with an unimpressive score of 54 out of 100. Hugging Face (BLOOMZ) came in right behind with 53 but scored a notable 0% in both the overall risk and mitigations categories. OpenAI (GPT-4) scored a 48, Stability (Stable Diffusion 2) scored a 47, Google (PaLM 2) scored a 40, and Anthropic (Claude 2) scored a 36. Cohere (Command), AI21 Labs (Jurassic-2), and Inflection (Inflection-1) spanned the mid-30s to low 20s, and Amazon (Titan Text) scored a strikingly low 12, though it's worth noting its model is still in private preview and hasn't yet been released for general availability.
"We anticipated that companies would be opaque, and that played out with the top score of 54 and the average of a mere 37/100," Rishi Bommasani, CRFM Society Lead at Stanford HAI, told Eye on AI. "What we didn't expect was how opaque companies would be on critical areas: Companies disclose even less than we expected about data and compute, almost nothing about labor practices, and almost nothing about the downstream impact of their models."
The researchers contacted all of the companies to give them a chance to respond after they came up with their first draft of the ratings. And while Bommasani said they promised to keep those communications private and wouldn't elaborate on specifics like how Amazon responded to such a low score, he said all 10 companies engaged in correspondence. Eight of the 10 companies (all but AI21 Labs and Google) contested specific scores, arguing that their scores should be 8.75 points higher on average, and eventually had their scores adjusted by 1.25 points on average.
The results say a lot about the current state of AI. And no, it wasn't always like this.
"The successes of the 2010s with deep learning came about through significant transparency and the open sharing of datasets, models, and code," Bommasani said. "In the 2020s, we have seen that change: Many top labs don't release models, even more don't release datasets, and sometimes we don't even have papers written about widely deployed models. This is a familiar feeling of societal impact skyrocketing while transparency is plummeting."
He pointed to social media as another example of this shift, noting how that technology has become increasingly opaque over time as it grows more powerful in our lives. "AI looks to be headed down the same path, which we are hoping to countervail," he said.
AI has quickly gone from specialized researchers tinkering to the tech industry's next (and perhaps biggest ever) opportunity to capture both revenue and world-altering power. It could easily create new behemoths and topple current ones. The "off to the races" feeling has been intensely palpable ever since OpenAI released ChatGPT almost a year ago, and tech companies have repeatedly shown us they'll prioritize their market competitiveness and shareholder value above privacy, safety, and other ethical considerations. There aren't any requirements to be transparent, so why would they be? As Bommasani said, we've seen this play out before.
While this is the first publication of the FMTI index, it definitely won't be the last. The researchers plan to conduct the analysis on a repeated basis, and they hope to have the resources to operate on a quicker cadence than the annual turnaround of most indices, in order to mirror the frenetic pace of AI.
Programming note: Gain vital insights on how the most powerful and far-reaching technology of our time is changing businesses, transforming society, and impacting our future. Join us in San Francisco on Dec. 11-12 for Fortune's third annual Brainstorm A.I. conference. Confirmed speakers include such A.I. luminaries as Salesforce AI CEO Clara Shih, IBM's Christina Montgomery, Quizlet CEO Lex Bayer, and more. Apply to attend today!
And with that, here's the rest of this week's AI news.
Sage Lazzaro | sage.lazzaro@consultant.fortune.com | sagelazzaro.com
Hugging Face confirms users in China are unable to access its platform. That's according to Semafor. Chinese users have been complaining of issues connecting to the AI startup's popular open-source platform since May, and it's been fully unavailable in China since at least Sept. 12. It's not exactly clear what prompted action toward the company, but the Chinese government routinely blocks access to websites it disapproves of. It could also be related to local regulations regarding foreign AI companies that recently went into effect.
Canva unveils suite of AI tools for the classroom. Just two weeks after Canva introduced an extensive suite of AI-powered tools and capabilities, the online design platform announced a set of AI-powered design tools targeted specifically at teachers and students. The tools will live in the company's Canva for Education platform and include a writing assistant, translation capabilities, alt text suggestions, Magic Grab, and the ability to animate designs with one click.
Apple abruptly cancels Jon Stewart's show over tensions stemming from his interest in covering AI and China. That's according to the New York Times. The third season of The Problem With Jon Stewart was already in production and set to begin filming soon before Stewart was (literally) canceled. The details of the dispute over covering AI and China are not clear, but Apple's deep ties with China have come under increased scrutiny lately as tensions with the country rise and the U.S. takes action to limit the transfer of AI technologies between the U.S. and China. The company is also starting to move some of its supply chain out of China.
China proposes a global initiative for AI governance. The Cyberspace Administration of China (CAC) announced the Global AI Governance Initiative, calling out the urgency of managing the transition to AI and outlining a series of principles and actions around the need for laws, ethical guidelines, personal security, data security, geopolitical cooperation, and an emphasis on a people-centered approach to AI, according to The Center for AI and Digital Policy newsletter Update 5.40. The document emphasizes the dual nature of AI as a technology that has the ability to drive progress but also poses unpredictable risks and complicated challenges.
Eric Schmidt and Mustafa Suleyman call for an international panel on AI safety. The former Google CEO and DeepMind/Inflection AI cofounder published their call to action in the Financial Times. Arguing that lawmakers still lack a basic understanding of AI, they write that calls to just regulate are as loud, and as simplistic, as calls to simply press on. They propose an independent, expert-led body inspired by the Intergovernmental Panel on Climate Change (IPCC), which is mandated to provide policymakers with regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.
Polling the people. Anthropic this past week published the results of an experiment around what it calls constitutional AI, a method for designing AI models so they're guided by a list of high-level principles. The company polled around 1,000 American adults about what sort of principles they think would be important for an AI model to abide by and then trained a smaller version of Claude based on their suggestions. They then compared the resulting model to Claude, which was trained on a constitution designed by Anthropic employees.
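In Anthropic's published descriptions of constitutional AI, the model critiques and revises its own outputs against the written principles, and the revised answers feed back into training. The sketch below is a rough illustration of that critique-and-revise loop; generate() is a hypothetical placeholder for any LLM call, and the principles shown are illustrative rather than Anthropic's actual constitution (one is borrowed from the publicly sourced constitution discussed below).

```python
# Rough sketch of the critique-and-revise loop behind constitutional AI.
# generate() is a hypothetical stand-in for any LLM completion call, and the
# principles are illustrative rather than Anthropic's actual constitution.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("wire this up to a real model to run the loop")

PRINCIPLES = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most understanding of, adaptable, accessible, "
    "and flexible to people with disabilities.",
]

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique the response according to this principle: {principle}"
        )
        response = generate(
            f"Prompt: {user_prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response so it addresses the critique."
        )
    # In the published method, the revised answers are then used to fine-tune
    # the model, which is how the chosen principles shape its behavior.
    return response
```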
Overall, the results showed about a 50% overlap in concepts and values between the two constitutions. The model trained on the people's constitution focused more on objectivity, impartiality, and promoting desired behaviors for the model to abide by rather than laying out behaviors to avoid. The people also came up with some principles that were lacking from Anthropic's version, such as "Choose the response that is most understanding of, adaptable, accessible, and flexible to people with disabilities." The model created with the people's constitution was also slightly less biased than the commercially available version, though the models performed similarly overall.
It's also important to take note of Anthropic's methodology. While the company said it sought a representative sample across age, gender, income, and geography, one factor noticeably missing is race. This is especially concerning as evidence has repeatedly shown that people of color are adversely affected by racial bias and accuracy issues in AI models.
How Sam Altman got it wrong on a key part of AI: Creativity has been easier for AI than people thought, by Rachyl Jones
OpenAI's winning streak falters with reported failure of Arrakis project, by David Meyer
Nvidia thought it found a way around U.S. export bans of AI chips to China; now Biden is closing the loophole and investors aren't happy, by Christiaan Hetzner
Sick of meetings? Microsoft's new AI assistant will go in your place, by Chloe Taylor
Why boomers are catching up with AI faster than Gen Zers, according to Microsoft's modern work lead, by Jared Spataro
How AI can help the shipping industry cut carbon emissions, by Megan Arnold
Billionaire AI investor Vinod Khosla's advice to college students: Get as broad an education as possible, by Jeff John Roberts
Would you let Meta read your mind? The tech giant perhaps most synonymous with invading user privacy announced it reached an important milestone in its pursuit of using AI to visualize human thought.
Using a noninvasive neuroimaging technique called magnetoencephalography (MEG), Meta AI researchers showcased a system capable of "decoding the unfolding of visual representations in the brain with an unprecedented temporal resolution." In other words, the system can analyze a person's brain activity and then reconstruct visuals depicting what their brain is seeing and processing. While they only reached accuracy levels of 70% in their highest-performing test cases, the researchers note in their paper that this is seven times better than existing models.
The fact that the AI announcements coming out of tech companies in a single week range from "animate text with one click" to "decode and reconstruct human thought" shows how incredibly wide-reaching and powerful this technology is. It's hard to imagine there's a corner of society and humanity it won't touch.
L'école AI, Creator of "Machine Teaching" Technology, Raises 3 Million USD in Seed Funding From Sofinnova Partners – Yahoo Finance
The company's technology, which removes the engineering complexity around AI systems for image analysis, will initially assist medical professionals
Founder co-created Madbits, a deep-learning image-analysis start-up that was acquired by Twitter
PARIS & NEW YORK, October 26, 2023--(BUSINESS WIRE)--Lcole AI, creator of O, a "machine teaching" technology that removes the engineering complexity around deep-learning systems for computer vision, announced that it has raised 3 million USD in Seed funding from Sofinnova Partners. The funds will be used to develop the companys proprietary technology, which will be rolled out to medical professionals and researchers, enabling them to create bespoke AI systems to assist in their area of expertise. In addition, the financing will fuel the expansion of the team, accelerate product development and structure business efforts. Also participating in the round are notable business angels including Preston-Werner Ventures, the fund started by co-founder and former CEO of GitHub, Tom Preston-Werner.
"We're building a user interface for AI so anyone can create and benefit from their own personalized AI assistant," said Louis-Alexandre Etezad-Heydari, Co-Founder and President of Lcole AI, which means "AI school" in French. "We also are creating a system that will open possibilities for secure collaboration between organizations."
Etezad-Heydari co-founded Madbits, a deep-learning image-analysis start-up, with Clément Farabet in 2013. A year after its founding, Madbits was acquired by Twitter, where the two entrepreneurs ran Twitter Cortex, an internal team that built a deep learning platform to power recommendation systems, search, ranking and filtering at Twitter.
Kim Nilsson and Jonathan Alexander Brown teamed up with Etezad-Heydari to perfect the AI development framework for computer vision that initially inspired Madbits, and the three founders set off with the goal to make creating computer vision models simple enough for non-engineers.
"L'coles technology is designed to democratize machine learning by enabling life science researchers and other non-machine learning experts to utilize tailored computer vision systems, thus accelerating life sciences research," said Edward Kliphuis, Partner at Sofinnova Partners. "The focus on digital medicine is a logical entry point," he noted.
"Were starting with a focus on health care and life sciences because we want to make a positive impact right away," said Jonathan Alexander Brown, Co-Founder and Chief Executive Officer. "With Sofinnova's support, we are confident we have the right skills to partner with researchers and clinicians in these tightly regulated markets."
Farabet, an investor in L'école, is an AI pioneer. Currently VP of Research at Google DeepMind, he spent six years as a senior executive at NVIDIA, working on its autonomous vehicles and the company's data science platform. Farabet is also famous in the AI world as one of the creators of Torch, a machine learning framework that provides a simple and flexible interface for building and training deep neural networks.
L'école AI counts a number of other AI pioneers among its investors, including Nicolas Pinto, head of Deep Learning at Apple, and Clément Delangue, Co-Founder and CEO at Hugging Face.
About the Founders
Jonathan Alexander Brown, Co-Founder and CEO: A mathematician-turned-actuary, Jonathan has crafted statistical models for natural catastrophes (wildfires, hurricanes, earthquakes, etc.) and introduced innovative risk management and data-analysis strategies at top multinational companies. He has navigated regulated environments throughout his career, always with a keen focus on addressing the needs of key stakeholders.
Louis-Alexandre Etezad-Heydari, Co-Founder and President: Louis-Alexandre is a computer vision pioneer, a successful entrepreneur, and an accomplished artist who dropped out of the Neuroscience PhD program at New York University to found Madbits in 2013 with Clément Farabet. The pair "built visual intelligence technology that automatically understands, organizes and extracts relevant information from raw media," Etezad-Heydari said when Madbits was acquired by Twitter.
Kim Nilsson, Co-Founder and CTO: Kim has more than 20 years' experience as a software engineer. A former senior developer at Opera, he has always displayed the hacker spirit, solving one of the biggest cryptocurrency heists ever in his spare time, an effort the Wall Street Journal described in detail.
About Sofinnova Partners: Sofinnova Partners is a leading European venture capital firm in life sciences, specializing in healthcare and sustainability. Based in Paris, London and Milan, the firm brings together a team of professionals from all over the world with strong scientific, medical, and business expertise. Sofinnova Partners is a hands-on company builder across the entire value chain of life sciences investments, from seed to later-stage. The firm actively partners with ambitious entrepreneurs as a lead or cornerstone investor to develop transformative innovations that have the potential to positively impact our collective future.
Founded in 1972, Sofinnova Partners is a deeply established venture capital firm in Europe, with 50 years of experience backing over 500 companies and creating market leaders around the globe. Today, Sofinnova Partners has over €2.5 billion under management. For more information, please visit: sofinnovapartners.com
View source version on businesswire.com: https://www.businesswire.com/news/home/20231025907622/en/
Contacts
Press: CEO and Co-Founder Jonathan Alexander Brown, jonathan@lecole.ai