
A catalogue of genetic mutations to help pinpoint the cause of … – DeepMind

New AI tool classifies the effects of 71 million missense mutations

Uncovering the root causes of disease is one of the greatest challenges in human genetics. With millions of possible mutations and limited experimental data, it's largely still a mystery which ones could give rise to disease. This knowledge is crucial to faster diagnosis and to developing life-saving treatments.

Today, we're releasing a catalogue of missense mutations where researchers can learn more about what effect they may have. Missense variants are genetic mutations that can affect the function of human proteins. In some cases, they can lead to diseases such as cystic fibrosis, sickle-cell anaemia, or cancer.

The AlphaMissense catalogue was developed using AlphaMissense, our new AI model which classifies missense variants. In a paper published in Science, we show it categorised 89% of all 71 million possible missense variants as either likely pathogenic or likely benign. By contrast, only 0.1% have been confirmed by human experts.

AI tools that can accurately predict the effect of variants have the power to accelerate research across fields from molecular biology to clinical and statistical genetics. Experiments to uncover disease-causing mutations are expensive and laborious: every protein is unique, and each experiment has to be designed separately, which can take months. By using AI predictions, researchers can get a preview of results for thousands of proteins at a time, which can help to prioritise resources and accelerate more complex studies.

We've made all of our predictions freely available to the research community and open-sourced the model code for AlphaMissense.

A missense variant is a single-letter substitution in DNA that results in a different amino acid within a protein. If you think of DNA as a language, switching one letter can change a word and alter the meaning of a sentence altogether. In this case, a substitution changes which amino acid is translated, which can affect the function of a protein.
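To make the letter-switch analogy concrete, here is a minimal Python sketch. The three-entry codon table and the helper function are illustrative only (not part of AlphaMissense); they show how a single base change turns a glutamic acid codon into a valine codon, the missense variant behind sickle-cell anaemia.

```python
# A tiny excerpt of the standard genetic code, for illustration only.
CODON_TABLE = {
    "GAG": "Glu",  # glutamic acid
    "GAA": "Glu",  # glutamic acid (synonymous codon)
    "GTG": "Val",  # valine
}

def classify_substitution(codon, pos, new_base):
    """Substitute one base in a codon and report the kind of variant."""
    mutated = codon[:pos] + new_base + codon[pos + 1:]
    before, after = CODON_TABLE[codon], CODON_TABLE[mutated]
    kind = "synonymous" if before == after else "missense"
    return mutated, before, after, kind

# The sickle-cell variant in beta-globin: an A -> T change at the second
# codon position turns GAG (Glu) into GTG (Val).
print(classify_substitution("GAG", 1, "T"))  # ('GTG', 'Glu', 'Val', 'missense')
```

A change at the third position of the same codon (GAG to GAA) still encodes glutamic acid, which is why not every single-letter substitution is a missense variant.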

The average person carries more than 9,000 missense variants. Most are benign and have little to no effect, but others are pathogenic and can severely disrupt protein function. Missense variants can be used in the diagnosis of rare genetic diseases, where a few or even a single missense variant may directly cause disease. They are also important for studying complex diseases, like type 2 diabetes, which can be caused by a combination of many different types of genetic changes.

Classifying missense variants is an important step in understanding which of these protein changes could give rise to disease. Of the more than 4 million missense variants already observed in humans, only 2% have been annotated as pathogenic or benign by experts, roughly 0.1% of all 71 million possible missense variants. The rest are considered variants of unknown significance due to a lack of experimental or clinical data on their impact. With AlphaMissense we now have the clearest picture to date: it classifies 89% of variants using a threshold that yielded 90% precision on a database of known disease variants.

AlphaMissense is based on our breakthrough model AlphaFold, which predicted structures for nearly all proteins known to science from their amino acid sequences. Our adapted model can predict the pathogenicity of missense variants altering individual amino acids of proteins.

To train AlphaMissense, we fine-tuned AlphaFold on labels distinguishing variants seen in human and closely related primate populations. Variants commonly seen are treated as benign, and variants never seen are treated as pathogenic. AlphaMissense does not predict the change in protein structure upon mutation or other effects on protein stability. Instead, it leverages databases of related protein sequences and structural context of variants to produce a score between 0 and 1 approximately rating the likelihood of a variant being pathogenic. The continuous score allows users to choose a threshold for classifying variants as pathogenic or benign that matches their accuracy requirements.
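As a sketch of how a continuous pathogenicity score in [0, 1] might be consumed downstream, the snippet below applies two cutoffs to toy scores. The cutoff values and variant names are hypothetical placeholders, not the thresholds AlphaMissense actually ships with; in practice a user would pick cutoffs that hit a target precision on a database of known variants.

```python
# Hypothetical cutoffs for turning a continuous score into a three-way call.
BENIGN_CUTOFF = 0.34
PATHOGENIC_CUTOFF = 0.56

def classify(score):
    """Map a pathogenicity score in [0, 1] to a three-way classification."""
    if score <= BENIGN_CUTOFF:
        return "likely benign"
    if score >= PATHOGENIC_CUTOFF:
        return "likely pathogenic"
    return "ambiguous"

# Toy variant scores (names and values are made up for illustration).
scores = {"A123T": 0.05, "G56R": 0.97, "P77L": 0.45}
calls = {variant: classify(s) for variant, s in scores.items()}
print(calls)
# {'A123T': 'likely benign', 'G56R': 'likely pathogenic', 'P77L': 'ambiguous'}
```

Tightening the two cutoffs trades coverage for precision, which is exactly the choice the continuous score leaves to the user.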

AlphaMissense achieves state-of-the-art predictions across a wide range of genetic and experimental benchmarks, all without explicitly training on such data. Our tool outperformed other computational methods when used to classify variants from ClinVar, a public archive of data on the relationship between human variants and disease. Our model was also the most accurate method for predicting results from the lab, which shows it is consistent with different ways of measuring pathogenicity.

AlphaMissense builds on AlphaFold to further the world's understanding of proteins. One year ago, we released 200 million protein structures predicted using AlphaFold, which is helping millions of scientists around the world to accelerate research and pave the way toward new discoveries. We look forward to seeing how AlphaMissense can help solve open questions at the heart of genomics and across biological science.

We've made AlphaMissense's predictions freely available to the scientific community. Together with EMBL-EBI, we are also making them more usable for researchers through the Ensembl Variant Effect Predictor.

In addition to our look-up table of missense mutations, we've shared the expanded predictions of all 216 million possible single amino acid substitutions across more than 19,000 human proteins. We've also included the average prediction for each gene, which is similar to measuring a gene's evolutionary constraint: it indicates how essential the gene is for the organism's survival.

A key step in translating this research is collaborating with the scientific community. We have been working in partnership with Genomics England to explore how these predictions could help study the genetics of rare diseases. Genomics England cross-referenced AlphaMissense's findings with variant pathogenicity data previously aggregated from human participants. Their evaluation confirmed our predictions are accurate and consistent, providing another real-world benchmark for AlphaMissense.

While our predictions are not designed to be used in the clinic directly, and should be interpreted alongside other sources of evidence, this work has the potential to improve the diagnosis of rare genetic disorders and help discover new disease-causing genes.

Ultimately, we hope that AlphaMissense, together with other tools, will allow researchers to better understand diseases and develop new life-saving treatments.


Google DeepMind COO Urges Immediate Global Collaboration on … – Cryptopolitan


In a recent address at the CogX event in London, Lila Ibrahim, the Chief Operating Officer (COO) of Google DeepMind, emphasized the imperative for international cooperation in the field of artificial intelligence (AI). She called for global AI regulation to manage risks effectively while harnessing the technology's vast potential. Ibrahim's statements come in the wake of the UK government's push to position the country as a leader in AI safety and innovation. In contrast to this national focus, Ibrahim underscored that AI's impact and challenges transcend national boundaries, requiring a collaborative, worldwide approach.

The United Kingdom has been making strides in positioning itself as a hub for AI safety. Prime Minister Rishi Sunak announced in June a vision to make the UK the global center for AI safety. This aspiration aligns with the UK government's broader goal of becoming a true science and technology superpower by 2030, with a significant emphasis on safety and innovation.

Secretary of State for Science, Innovation and Technology Michelle Donelan echoed this vision during her address at the tech-focused CogX event. She asserted that safety would be the UK's unique selling point in the AI arms race. Donelan contended that safety considerations would be the determining factor in the global competition to lead in AI innovation.

Both Lila Ibrahim and Michelle Donelan concurred that the responsibility for ensuring AI safety rests with a collaborative effort involving organizations and governments. They stressed the importance of cooperation and coordination on a global scale to address the challenges posed by AI.

The UK government's AI Safety Summit, scheduled for November 1-2 at Bletchley Park, is a pivotal event in this endeavor. Donelan outlined the summit's objectives, which include identifying and agreeing upon AI risks, fostering collaborative research, and establishing regulatory measures to ensure AI serves as a force for good.

One of the key concepts introduced by Secretary Donelan is "responsible capability scaling." This approach encourages AI developers to be proactive in monitoring and managing risks associated with their AI systems. Developers are expected to outline how they plan to control risks and take necessary actions, which may include slowing down or pausing AI projects until improved safety mechanisms are in place.

Donelan emphasized the importance of making responsible capability scaling a standard practice in the AI industry. She likened it to having a smoke alarm in one's kitchen, suggesting that it should become an integral part of AI development to ensure the safety of AI technologies.

Lila Ibrahim's call for international cooperation in regulating AI underscores the global nature of AI's impact and potential risks. While individual countries can make significant strides in AI development and safety, the interconnectedness of the digital world demands a collaborative approach.

The rapid advancement of AI capabilities further amplifies the need for swift and effective international regulation. As AI technologies continue to evolve and proliferate, the risks associated with them also become more complex and widespread. International coordination can facilitate the sharing of knowledge, best practices, and regulatory frameworks, ensuring that AI benefits humanity while minimizing potential harm.

The United Kingdom's commitment to becoming a leader in AI safety and innovation is evident through its policies and initiatives. Prime Minister Rishi Sunak's vision of making the UK a global AI safety hub aligns with the government's broader ambition to excel in science and technology. By prioritizing safety, the UK seeks to differentiate itself in the global competition for AI leadership.

The call for international cooperation on AI regulation, as advocated by Google DeepMind's COO Lila Ibrahim, resonates with the urgency of addressing the challenges posed by artificial intelligence on a global scale. While the UK government's focus on AI safety is commendable, both Ibrahim and Secretary Michelle Donelan emphasize that the solutions to AI's complex issues require collaborative efforts beyond national borders. The upcoming AI Safety Summit in the UK serves as a crucial platform for fostering international cooperation, sharing expertise, and advancing responsible AI development practices. As AI continues to reshape industries and societies worldwide, the imperative for collective action in ensuring its safe and beneficial deployment becomes increasingly evident.


How Google’s Motto ‘Don’t Be Evil’ Disappeared With Its Shaping of … – DataEthics.eu

Guest Contributor Rene Ridgway

This open access article, published recently in Big Data & Society, draws on Brin and Page's original 1998 paper to explain how Google developed its hegemony on search and laid the groundwork for contemporary surveillance capitalism.

"Deleterious consequences" was coined by computer scientist and theorist Phil Agre, who in 1998 expressed concern about the harmful effects of AI if programmers did not keep "one foot planted in the craft work of design and the other foot planted in the reflexive work of critique."

In this article, I revisit Brin and Page's coeval, seminal and only extant text on their search engine and the PageRank algorithm, "The Anatomy of a Large-Scale Hypertextual Web Search Engine" (1998). I highlight and contextualise some of their original keywords (counting citations or backlinks, trusted user, advertising, personalization, usage data, smart algorithms) that already foreshadow what was yet to come at Google in spite of their "don't be evil" motto. Although Google's mission statement, "organising the world's information and making it accessible and useful," is well known, what isn't well known is that Google's intentions were not necessarily accidental, arbitrary nor (un)intentional. Through certain moments of contingency, their decisions led to corporate lock-ins, along with promoting their own services in search results and corporate acquisitions and takeovers that facilitated the "googlization of everything" (Google Ads, Google Maps, Gmail, Google Earth, Google Docs, Google DeepMind, Android, Waymo, et al.).

Over the past 25 years, Google came not only to shape the web through patents and the novel PageRank algorithm, which counted citations or backlinks to deliver search results, but also to reinvent digital advertising through secret auctions on keywords. Trusted users' search queries and clicking on links increased traffic and the flow of capital, as well as contributing to the world's largest database of intentions. As an omnipotent infrastructure that is intertwined with Big Data's platformization, the article also explains what usage data is accumulated (all) and how it is shared, borrowed and stored beyond just personalization. This extraction and refinement of usage data becomes what Shoshana Zuboff deems "behavioural surplus" and results in deleterious consequences: a habit of automaticity, which shapes the trusted user through ubiquitous googling and Google's smart algorithms, whilst simultaneously generating prediction products for surveillance capitalism. What would Google have become if Brin and Page in 1998 had applied a critical technical practice, combining reflexive critique and design decisions, instead of developing an advertisement company (87% of their revenue still comes from advertising as of writing) cum search engine, and not a search engine for research?
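The backlink-counting idea at the heart of PageRank can be sketched in a few lines. This toy power-iteration version (the three-page graph and the damping value are illustrative, not from the 1998 paper's implementation) shows how a page accumulates rank from the pages linking to it:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping page -> outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # A page's rank is a base share plus the rank flowing in from
            # every page that links to it, split across that page's outlinks.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# Toy web: "c" is cited (linked to) by both "a" and "b".
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # c
```

Because "c" collects backlinks from two pages while the others collect one each, it ends up with the highest rank, which is exactly the citation-counting intuition the article describes.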

This article is part of a special issue, State of Google Critique and Intervention, published by Big Data & Society as open access. Other articles can be found here.

More about Rene Ridgway

Photo: Adobe Firefly (supposedly trained only on consented data) with the prompt: user in front of a computer searching with google as it surveils them


Mind the tech gap: the AI divide in Europe – fDi Intelligence

Only a handful of European cities host a sizable population of highly-coveted artificial intelligence (AI) engineers, leaving most of the continent scrambling to catch up.

London, the birthplace of Alan Turing, who is considered one of the founding figures of modern computer science and AI, is home to about 24,600 AI engineers, according to figures from venture capital firm Sequoia Capital. The city's AI cluster features major employers in the AI space, such as Google DeepMind, and a flourishing community of AI start-ups that can find in the city both the capital and the talent they need to scale up.

"There are a number of great universities and apprenticeships here in the UK that allow us to bring some of the best talent into the company and into our partner ecosystem," Vishal Marria, CEO of Quantexa, a UK decision intelligence firm, told fDi after the unveiling of a $105m hub for research and development in AI solutions in the UK capital in July.

Beyond London, Paris has the second largest population of AI engineers with 7,624, followed by Zurich with 5,800.

But the European city with the highest concentration of AI engineers relative to the overall population of tech engineers is Dublin, where almost two in 10 (17%) software engineers have a specialisation in AI.

Similarly to London, one factor driving the outlier concentration in Dublin is that the city has proved a friendly base for tech giants, argues the Sequoia report, which was published in June. Meta, Google and Microsoft, among the top five companies hiring AI talent globally, have built a considerable presence there, taking advantage of Ireland's attractive tax regime for research and development.

While both the EU and other major European powerhouses like the UK have big AI ambitions, the overall level of AI talent available is still relatively low. Across the whole of Europe, only 1.4% of software engineers have a specialisation in AI, with that percentage growing to 7% for engineers with some AI experience, according to Sequoia figures. These figures are even lower in the US and China, where only 1.1% and 0.5%, respectively, of all software engineers have an AI specialisation, Sequoia figures show.

With its wealth of talent, Europe is positioning itself as a leader in the accelerating world of AI, reads the report. While talent is amassing at the tech giants, these talent pools become aircraft carriers as entrepreneurial employees inevitably depart to start their own companies, generating yet more demand for AI skills. With assertive policy incentives in the pipeline, anyone with a stake in AI is keeping their eyes on the region.


Deep Calls to Deep: Eight Years of Soul-Nurturing for Preachers – Virginia Theological Seminary

Deep Calls to Deep, a program to nourish working preachers, began in 2015, funded by a generous grant from the Lilly Endowment. Now, as the grant funding comes to an end this fall, it is a good opportunity to look back on what the program has accomplished.

The fundamental principle of Deep Calls to Deep is that preaching is soul-work, and that rejuvenating preachers depends on feeding the whole person: body, mind, and spirit. We offered this nourishment by focusing on four themes:

- the spirituality of preaching (the relationship with God which is the foundation of preaching)
- nurturing the preaching imagination (cultivating the ability to encounter texts and the world creatively)
- embodiment (bringing the preacher's whole self to proclamation)
- community (conveying the conviction that preaching is not a solitary activity but is nourished by relationships).

Since the program started there have been over 150 participants, who have come to VTS for on-campus residencies that offered time for sabbath, study, engagement with the four themes of the program, worship together, and fellowship. The participants also met for a year in small peer groups to preach for each other and receive feedback on their preaching. I have been so moved to see the diligence and passion that fuels these preachers in their vital work, and to see how the communities formed in Deep Calls to Deep have revitalized the participants.

Though the grant funding for this program comes to an end this fall, we are exploring ways for the work and principles of Deep Calls to Deep to continue. VTS is a seminary that values preaching and preachers (after all, the words on our chapel wall are "Go ye into all the world and preach the Gospel") and we will keep finding ways to nourish and support working preachers in the challenges and joys of proclamation. Stay tuned!

The Rev. Ruthanna Hooke, Ph.D., Director, Deep Calls to Deep


Multi-AI collaboration helps reasoning and factual accuracy in large … – MIT News

An age-old adage, often introduced to us during our formative years, is designed to nudge us beyond our self-centered, nascent minds: "Two heads are better than one." This proverb encourages collaborative thinking and highlights the potency of shared intellect.

Fast forward to 2023, and we find that this wisdom holds true even in the realm of artificial intelligence: Multiple language models, working in harmony, are better than one.

Recently, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) embodied this ancient wisdom within the frontier of modern technology. They introduced a strategy that leverages multiple AI systems to discuss and argue with each other to converge on a best-possible answer to a given question. This method empowers these expansive language models to heighten their adherence to factual data and refine their decision-making.

The crux of the problem with large language models (LLMs) lies in the inconsistency of their generated responses, leading to potential inaccuracies and flawed reasoning. This new approach lets each agent actively assess every other agent's responses, and uses this collective feedback to refine its own answer. In technical terms, the process consists of multiple rounds of response generation and critique. Each language model generates an answer to the given question, and then incorporates the feedback from all other agents to update its own response. This iterative cycle culminates in a final output from a majority vote across the models' solutions. It somewhat mirrors the dynamics of a group discussion where individuals contribute to reach a unified and well-reasoned conclusion.
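The round structure described above can be sketched as follows. The toy agent functions stand in for real LLM calls (the method itself would query actual models); the names and the revision rule here are assumptions for the sake of a runnable example.

```python
from collections import Counter

def debate(models, question, rounds=2):
    """Multi-round debate: each agent revises after seeing the others' answers."""
    answers = [m(question, []) for m in models]  # initial, independent answers
    for _ in range(rounds):
        # Each agent sees every other agent's current answer and updates.
        answers = [
            m(question, [a for j, a in enumerate(answers) if j != i])
            for i, m in enumerate(models)
        ]
    # Final output is a majority vote across the agents' solutions.
    return Counter(answers).most_common(1)[0][0]

# Toy agents: two answer correctly; one starts wrong but defers to the
# majority of the answers it is shown.
right = lambda q, others: "4"
persuadable = lambda q, others: Counter(others).most_common(1)[0][0] if others else "5"
print(debate([right, right, persuadable], "What is 2 + 2?"))  # 4
```

The persuadable agent converges to "4" after one round of seeing its peers' answers, which is the behavior the critique-and-revise loop is designed to encourage.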

One real strength of the approach lies in its seamless application to existing black-box models. As the methodology revolves around generating text, it can also be implemented across various LLMs without needing access to their internal workings. This simplicity, the team says, could help researchers and developers use the tool to improve the consistency and factual accuracy of language model outputs across the board.

"Employing a novel approach, we don't simply rely on a single AI model for answers. Instead, our process enlists a multitude of AI models, each bringing unique insights to tackle a question. Although their initial responses may seem truncated or may contain errors, these models can sharpen and improve their own answers by scrutinizing the responses offered by their counterparts," says Yilun Du, an MIT PhD student in electrical engineering and computer science, affiliate of MIT CSAIL, and lead author on a new paper about the work. "As these AI models engage in discourse and deliberation, they're better equipped to recognize and rectify issues, enhance their problem-solving abilities, and better verify the precision of their responses. Essentially, we're cultivating an environment that compels them to delve deeper into the crux of a problem. This stands in contrast to a single, solitary AI model, which often parrots content found on the internet. Our method, however, actively stimulates the AI models to craft more accurate and comprehensive solutions."

The research looked at mathematical problem-solving, including grade school and middle/high school math problems, and saw a significant boost in performance through the multi-agent debate process. Additionally, the language models showed enhanced abilities to generate accurate arithmetic evaluations, illustrating potential across different domains.

The method can also help address the issue of "hallucinations" that often plague language models. By designing an environment where agents critique each other's responses, they were more incentivized to avoid spitting out random information and prioritize factual accuracy.

Beyond its application to language models, the approach could also be used for integrating diverse models with specialized capabilities. By establishing a decentralized system where multiple agents interact and debate, they could potentially use these comprehensive and efficient problem-solving abilities across various modalities like speech, video, or text.

While the methodology yielded encouraging results, the researchers say that existing language models may face challenges with processing very long contexts, and the critique abilities may not be as refined as desired. Furthermore, the multi-agent debate format, inspired by human group interaction, has yet to incorporate the more complex forms of discussion that contribute to intelligent collective decision-making, a crucial area for future exploration, the team says. Advancing the technique could involve a deeper understanding of the computational foundations behind human debates and discussions, and using those models to enhance or complement existing LLMs.

"Not only does this approach offer a pathway to elevate the performance of existing language models, but it also presents an automatic means of self-improvement. By utilizing the debate process as supervised data, language models can enhance their factuality and reasoning autonomously, reducing reliance on human feedback and offering a scalable approach to self-improvement," says Du. "As researchers continue to refine and explore this approach, we can get closer to a future where language models not only mimic human-like language but also exhibit more systematic and reliable thinking, forging a new era of language understanding and application."

"It makes so much sense to use a deliberative process to improve the model's overall output, and it's a big step forward from chain-of-thought prompting," says Anca Dragan, associate professor at the University of California at Berkeley's Department of Electrical Engineering and Computer Sciences, who was not involved in the work. "I'm excited about where this can go next. Can people better judge the answers coming out of LLMs when they see the deliberation, whether or not it converges? Can people arrive at better answers by themselves deliberating with an LLM? Can a similar idea be used to help a user probe an LLM's answer in order to arrive at a better one?"

Du wrote the paper with three CSAIL affiliates: Shuang Li SM '20, PhD '23; MIT professor of electrical engineering and computer science Antonio Torralba; and MIT professor of computational cognitive science and Center for Brains, Minds, and Machines member Joshua Tenenbaum. Google DeepMind researcher Igor Mordatch was also a co-author.


You Don’t Have to Pick a Winner in Artificial Intelligence (AI). Here’s … – The Motley Fool

There's no shortage of hype over artificial intelligence (AI) this year.

The launch of OpenAI's ChatGPT in late 2022 made it clear to tech CEOs and individual investors alike how powerful and potentially transformative the new generative AI technology is.

Nearly every tech company seems to be talking up the potential of AI, and AI stocks have consequently skyrocketed this year. In some ways, the impact of the new wave of demand for AI is already being felt. Nvidia's revenue nearly doubled in its most recent quarter on soaring demand for AI chips, and the company posted another round of blowout guidance for its fiscal third quarter. Oracle, meanwhile, is seeing strong growth in its cloud infrastructure division after spending billions on chips to power its superclusters.

However, if you're thinking of dumping your cash into Nvidia or another AI stock, you should think again. This is still a brand-new, emerging industry, and most of the companies touting the potential of AI have yet to see a significant financial benefit.

As AI stocks soar, investors should also be mindful of the dot-com bubble when the introduction of the World Wide Web unleashed a similar transformative technology on the stock market. Many of those tech stocks ultimately went bust, while only a small number went on to be big winners.

Rather than trying to pick a winner in AI, there are better ways to approach the emerging technology.

One of the best ways to get diversification in a particular sector or a class of stocks is through a basket approach, which means buying several stocks so that you're not overly exposed to one particular company. If you'd like to invest a substantial percentage of your holdings in AI, this is a more balanced approach versus buying a single stock.
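As a toy illustration of the basket idea, the sketch below splits a fixed budget evenly across a few hypothetical tickers. The names and prices are made up for the example, and this is arithmetic, not investment advice.

```python
def equal_weight_basket(budget, prices):
    """Split a budget evenly across tickers; return shares per ticker."""
    per_stock = budget / len(prices)  # equal dollar allocation per holding
    return {ticker: round(per_stock / price, 2) for ticker, price in prices.items()}

# Hypothetical AI-exposed tickers and share prices.
prices = {"CHIP": 450.0, "CLOUD": 330.0, "SEARCH": 135.0}
shares = equal_weight_basket(9_000, prices)
print(shares)  # {'CHIP': 6.67, 'CLOUD': 9.09, 'SEARCH': 22.22}
```

Each holding gets $3,000 regardless of its price, so no single company's stumble dominates the basket, which is the point of the approach.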

AI is a broad category, so there are a lot of different ways you could put together a basket. One way to do it might be by taking a few stocks from each of the subsectors that are exposed to AI.

For example, you'll want to invest in semiconductor stocks. Nvidia is an easy one, and you could consider another one or two like Advanced Micro Devices, Broadcom, or even Taiwan Semiconductor to get exposure to semiconductor manufacturing.

Big tech is another subsector that's worth including. Here, Microsoft and Alphabet are obvious choices given Microsoft's partnership with OpenAI and Alphabet's launch of Bard and its other AI investments, including Google DeepMind, its AI research lab.

Finally, you may want to consider adding stocks that have put AI at the center of their business models, such as Upstart in consumer loans, Lemonade in insurance, or C3.ai in application software.

If you'd like to have the work of managing a basket of AI stocks done for you, the easiest way to do that is by buying an AI ETF. The largest AI ETF on the market is the Global X Robotics & Artificial Intelligence ETF (BOTZ), with net assets of $2.2 billion.

BOTZ's biggest holding is currently Nvidia with 14.1% of the ETF's assets. Other top holdings include medical device maker Intuitive Surgical (9.7%); ABB, a Swiss company that creates automation and robotics products used in utilities and infrastructure (8.2%); and Keyence, a Japanese company that builds factory automation products like sensors and scanners (6.9%).

It's easy to be blinded by the opportunity in AI, but you shouldn't invest in the space without considering the risks. Keep valuations and realistic prospects in mind as you decide which strategy best suits you in the sector, and remember the lessons of the dot-com bubble and other more recent bubbles in the stock market.

While some AI stocks could be big winners, others will almost certainly be busts.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jeremy Bowman has positions in Lemonade and Upstart. The Motley Fool has positions in and recommends ABB, Advanced Micro Devices, Alphabet, Intuitive Surgical, Lemonade, Microsoft, Nvidia, Oracle, Taiwan Semiconductor Manufacturing, and Upstart. The Motley Fool recommends Broadcom and C3.ai. The Motley Fool has a disclosure policy.


Why generative AI is ‘alchemy,’ not science – VentureBeat

A New York Times article this morning, titled "How to Tell if Your AI Is Conscious," says that in a new report, scientists offer a list of measurable qualities based on a brand-new science of consciousness.

The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called The Retort, along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today's AI as a truly scientific endeavor.

Gilbert maintains that much of today's AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy: the medieval forerunner of chemistry, which can also be defined as a seemingly magical process of transformation.

"Many critics of deep learning and of large language models, including those who built them, sometimes refer to AI as a form of alchemy," Gilbert told me on a video call. What they mean by that, he explained, is that it's not scientific, in the sense that it's not rigorous or experimental. But he added that he actually means something more literal when he says that AI is alchemy.

"The people building it actually think that what they're doing is magical," he said. And that's rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and superintelligence. The prevailing idea, he explained, is that intelligence itself is scalar, depending only on the amount of data thrown at a model and the computational limits of the model itself.

But, he emphasized, like alchemy, much of today's AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example. Much of today's closed AI research does not, either.

"It was very secretive, and frankly, that's how AI works right now," he said. "It's largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet, and then building computation and structuring it such that you can distill that web of knowledge that we've all been building for decades now, and then seeing what comes out."

I was particularly interested in Gilbert's thoughts on alchemy given the current AI discourse, which seems to me to include some doozies of cognitive dissonance: There was the Senate's closed-door AI Insight Forum, where Elon Musk called for AI regulators to serve as a "referee" to keep AI safe, while actively working on using AI to put microchips in human brains and making humans a multiplanetary species. There was the EU parliament saying that AI extinction risk should be a global priority, while at the same time, OpenAI CEO Sam Altman said hallucinations can be seen as a positive part of the "magic" of generative AI and that superintelligence is simply an engineering problem.

And there was DeepMind co-founder Mustafa Suleyman, who would not explain to MIT Technology Review how his company Inflection's Pi manages to refrain from toxic output ("I'm not going to go into too many details because it's sensitive," he said) while calling on governments to regulate AI and appoint cabinet-level tech ministers.

It's enough to make my head spin, but Gilbert's take on AI as alchemy put these seemingly opposing ideas into perspective.

Gilbert clarified that he isn't saying that the notion of AI as alchemy is wrong, but that its lack of scientific rigor needs to be called what it really is.

"They're building systems that are arbitrarily intelligent, not intelligent in the way that humans are, whatever that means, but just arbitrarily intelligent," he explained. "That's not a well-framed problem, because it's assuming something about intelligence that we have very little or no evidence of. That is an inherently mystical or supernatural claim."

AI builders, he continued, don't need to know what the mechanisms are that make the technology work, but they are interested enough and motivated enough, and frankly also have the resources, to just play with it.

The magic of generative AI, he added, doesn't come from the model. "The magic comes from the way the model is matched to the interface. The magic people like so much is that I feel like I'm talking to a machine when I play with ChatGPT. That's not a property of the model, that's a property of ChatGPT, of the interface."

In support of this idea, researchers at Alphabet's AI division DeepMind recently published work showing that AI can optimize its own prompts and performs better when prompted to "take a deep breath and work on this problem step-by-step," though the researchers are unclear exactly why this incantation works as well as it does (especially given the fact that an AI model does not actually breathe at all).

One of the major consequences of the alchemy of AI comes when it intersects with politics, as it does now in discussions around AI regulation in the US and the EU, said Gilbert.

"In politics, what we're trying to do is articulate a notion of what is good to do, to establish the grounds for consensus. That is fundamentally what's at stake in the hearings right now," he said. "We have a very rarefied world of AI builders and engineers, who are engaged in the stance of articulating what they're doing and why it matters to the people that we have elected to represent our political interests."

The problem is that we can only guess at the work of Big Tech AI builders, he said. We're living in a weird moment, he explained, where the metaphors that compare AI to human intelligence are still being used, but the mechanisms are not remotely well understood.

"In AI, we don't really know what the mechanisms are for these models, but we still talk about them like they're intelligent. We still talk about them like there's some kind of anthropological ground that is being uncovered, and there's truly no basis for that."

But while there is no rigorous scientific evidence backing many of the claims to existential risk from AI, that doesn't mean they aren't worthy of investigation, he cautioned. "In fact, I would argue that they're highly worthy of investigation scientifically, [but] when those things start to be framed as a political project or a political priority, that's a different realm of significance."

Meanwhile, the open source generative AI movement, led by the likes of Meta Platforms with its Llama models along with other smaller startups such as Anyscale and Deci, is offering researchers, technologists, policymakers, and prospective customers a clearer window onto the inner workings of the technology. But translating the research into non-technical terminology that laypeople, including lawmakers, can understand remains a significant challenge.

That, Gilbert explained, is the key problem with the fact that AI, as alchemy and not science, has become a political project.

"It's a laxity of public rigor, combined with a certain kind of willingness to keep your cards close to your chest, but then say whatever you want about your cards in public, with no robust interface for interrelating the two," he said.

Ultimately, he said, the current alchemy of AI can be seen as tragic.

"There is a kind of brilliance in the prognostication, but it's not clearly matched to a regime of accountability," he said. "And without accountability, you get neither good politics nor good science."


Read more here:
Why generative AI is 'alchemy,' not science - VentureBeat


What is a quantum particle really like? It’s not what you think – Big Think

Quantum mechanics is known for some very mind-bending claims, like cats being simultaneously dead and alive, and electrons, protons, and other denizens of the subatomic world being both particles and waves. It's quite confusing. But, using modern ideas of the quantum world, there are ways to envision exactly what is going on. In brief, particle interactions are a heady mix of vibrating and interacting fields.

The concept of a classical particle is familiar. A particle is an object with an identifiable location. The object could be big or small, or it could have a peculiar shape. For a subatomic particle like an electron, the usual mental image is something akin to a microscopic ball. When particles interact, they can bounce off one another, like two billiard balls, or can merge, like two lumps of clay hitting one another.

Classical waves are equally familiar. Think of the up and down wiggles on the surface of a lake as a series of objects are dropped into it. Mathematically, a one-dimensional wave is just a steadily oscillating sinusoidal curve. It extends infinitely in either direction with a fixed, repeating wavelength. Unlike particles, waves have no identifiable location. Furthermore, waves interact very differently than particles. As two waves interact, they pass through one another, with the crests and troughs of the two waves either enhancing each other into a bigger crest or cancelling each other entirely (known as constructive and destructive interference, respectively).
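The superposition behavior described above is easy to demonstrate numerically. Here is a minimal sketch using NumPy; the grid and waves are purely illustrative, not tied to any physical system:

```python
import numpy as np

# A one-dimensional sinusoidal wave sampled on a grid.
x = np.linspace(0, 4 * np.pi, 1000)
wave_a = np.sin(x)

# In phase: crests align with crests, so amplitudes add
# (constructive interference) and the peak doubles.
constructive = wave_a + np.sin(x)

# Half a wavelength out of phase: crests meet troughs and the
# waves cancel everywhere (destructive interference).
destructive = wave_a + np.sin(x + np.pi)

print(np.max(constructive))          # ~2.0
print(np.max(np.abs(destructive)))   # ~0.0 (floating-point noise)
```

The same grid of points carries both results: whether the waves reinforce or annihilate depends only on their relative phase.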

A classical wave has an infinite length and no unique location. (Credit: Don Lincoln)

Given that traditional particles and waves seem to have such very different properties, it is easy to understand how early 20th century physicists were so confused as they tried to reconcile claims that things like photons and electrons were both particles and waves. However, scientists have come to understand that subatomic objects have both wave and particle properties, rather than existing as one or the other.

For example, an object like an electron has a wavelength, but it doesn't extend off to infinity. Instead, the amplitude (or height) of the wave has a location where it is maximized, and it decreases at distances farther from the maximum. The result is what is called a wave packet. In the context of early 20th century quantum mechanics, the term "wavicle" was briefly in vogue, although it is now rarely used.
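One way to picture this concretely is to take an ordinary oscillation and modulate it with a Gaussian envelope, so the amplitude peaks at one location and dies away on either side. A sketch with NumPy follows; the center, width, and wavenumber values are arbitrary illustrative choices:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)      # grid includes x = 0 exactly
x0, sigma, k = 0.0, 1.5, 5.0        # center, envelope width, wavenumber

# Gaussian envelope: maximal at x0, decaying away from it.
envelope = np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

# The wave packet: a sinusoid confined by the envelope.
packet = envelope * np.cos(k * x)

print(abs(packet[1000]))   # 1.0 at the center (x = 0)
print(abs(packet[-1]))     # essentially zero far away: the packet is localized
```

Unlike the infinite sine wave of the earlier figure, this object has both a wavelength (set by k) and an identifiable location (set by x0), which is exactly the particle-wave hybrid the text describes.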

A wave packet is an accurate depiction of what a quantum particle is. It contains elements of both a wave and a particle. (Credit: Don Lincoln)

It is completely reasonable to think of subatomic particles like electrons and photons as wave packets, but given that waves are vibrations, one quickly asks, "What exactly is it that is vibrating?" or, equally confounding, "What is the meaning of the wave packet?" This is where things get a bit confusing.

In traditional quantum mechanics, this wave packet is called a wave function, and it is simply a method to calculate probabilities. If you square the wave function, the result is a function that tells you the likely locations where the particle will interact with other particles. This wave packet is merely a mathematical construct and nothing else.
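In code, the "square the wave function" step looks like this. The sketch assumes a Gaussian wave packet with arbitrary illustrative width and momentum; squaring its magnitude yields a normal-distribution-shaped probability density:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# An illustrative complex Gaussian wave packet with momentum k.
sigma, k = 1.0, 3.0
psi = np.exp(-x ** 2 / (4 * sigma ** 2)) * np.exp(1j * k * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize total probability to 1

# Squaring the wave function: |psi|^2 is the probability density
# for where the particle will be found (the momentum phase drops out).
density = np.abs(psi) ** 2

print(np.sum(density * dx))                          # 1.0: probabilities sum to one
print(np.sum(density[np.abs(x) < 2 * sigma] * dx))   # ~0.95: most mass near the peak
```

The density itself is the "function that tells you the likely locations" mentioned above: the particle is overwhelmingly likely to be found within a couple of envelope widths of the packet's center.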

When the wave function (representing the wave packet) is squared, the result is a probability function that shows where the particle can and cannot be found. (Credit: Don Lincoln)

However, the situation becomes somewhat more physical when more modern ideas of quantum mechanics are used. The name for the modern theory describing particles is quantum field theory. Modern quantum field theory postulates that space is full of a series of fields. There is a field for each kind of known subatomic particle. For example, there is an electron field, a photon field, and so on. There are even quark fields.

According to this theory, an electron is nothing more than a wave packet in the electron field. The meaning of the wave packet is the same as in traditional quantum mechanics: if you square the wave function (representing the wave packet), the outcome is the probability of detecting an electron at that location.

The really neat thing about this understanding of particles is that it gives us a very different mental picture of how particles are emitted and absorbed at the quantum level. For example, it is common for one subatomic particle to emit another, say, an electron emitting a photon. If subatomic particles are wave packets (localized vibrations of specific fields), then when an electron emits a photon, vibrations in the electron field are transferred to the photon field.

In a way, its like putting two identical tuning forks near one another and hitting one of them. The vibrations from that fork will transfer to the other, and soon both will be vibrating. In the quantum world, some of the vibrations of the electron field will transfer to the photon field, effectively creating a photon.

There is no question that modern physics theories can be difficult to envision. However, once you have embraced the idea that particles are little more than localized vibrations in several interacting fields, you have a reasonably accurate vision of how the quantum world works.

Visit link:

What is a quantum particle really like? It's not what you think - Big Think


Solana Eclipsing Ethereum? Wall Street Veterans Top Altcoin Pick Emerges – NewsBTC

Solana (SOL) has captured the attention of former Goldman Sachs executive Raoul Pal, who recently declared it one of his standout altcoin choices and a potential contender to Ethereum's dominance.

In a new interview, Pal highlighted Solana's unique strengths, emphasizing its unrivaled speed and performance, qualities that have even prompted payments giant Visa to choose it as its platform of choice.

Pal unveiled his enthusiasm for SOL, dubbing it one of his foremost altcoin choices and a contender for the most promising project outside of Ethereum. The crypto luminary also shed light on the growing influence of Solana's co-founder, Anatoly Yakovenko, popularly known as Toly, in the cryptocurrency landscape.

Pal's bullish stance on Solana stems from its speed and exceptional performance, characteristics that have garnered attention even from giants like Visa.

Pal said:

This sentiment represents the distinctiveness of Solana in comparison to Ethereum and hints at the potential for interconnectivity between these blockchain ecosystems. Visas recent announcement of its foray into stablecoin settlement capabilities on Solana further bolsters its credibility in the industry.

Despite Pal's optimistic outlook, the charts tell a different story for Solana. On August 5th, SOL shifted into a bearish market structure as its price plummeted below the crucial $22.73 support level. Subsequently, in mid-August, the cryptocurrency endured a sharp decline, witnessing a staggering 23.8% drop from its recent peak.

As of now, Solana remains in the grip of this downtrend, with its market structure yet to show signs of a reversal. The Relative Strength Index (RSI) lingers below the critical 50-point mark, underscoring the prevailing bearish momentum. A pivotal line of defense for the bulls is the 78.6% retracement level at $16.94.

According to analysts, a test of this support level could offer a viable buying opportunity, potentially targeting local highs at $25.5, $29, and an ambitious $32. However, a dip below the $15.5 mark would invalidate any hopes of a bullish reversal at the 78.6% retracement level.
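For readers unfamiliar with retracement levels, the arithmetic is straightforward: each level is measured from the swing high back toward the swing low. The sketch below uses hypothetical swing prices (the article does not state them), chosen so the 78.6% level lands near the $16.94 figure quoted above:

```python
# Hypothetical swing points; the article quotes only the resulting levels.
swing_low, swing_high = 15.20, 23.31

def retracement(low: float, high: float, ratio: float) -> float:
    """Price after retracing `ratio` of the move from high back toward low."""
    return high - ratio * (high - low)

for ratio in (0.382, 0.5, 0.618, 0.786):
    level = retracement(swing_low, swing_high, ratio)
    print(f"{ratio:.1%} retracement: ${level:.2f}")
```

With these assumed swing points, the 78.6% level works out to roughly $16.94, matching the support level the analysts cite.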

At the time of reporting, SOL was trading at $19.84 on CoinGecko, reflecting a 3.7% gain in the past 24 hours. Over the previous seven days, SOL had exhibited a commendable rally of 12.5%.

In contrast, Ether (ETH) was priced at $1,637.25, marking a modest 0.3% increase within the past 24 hours. Over the same seven-day period, ETH had recorded gains of 5.5%.

(This site's content should not be construed as investment advice. Investing involves risk. When you invest, your capital is subject to risk.)

Featured image from Britannica

See the original post here:
Solana Eclipsing Ethereum? Wall Street Veterans Top Altcoin Pick Emerges - NewsBTC
