
First Business PCs to Protect Firmware Against Quantum Computer Hacks – Manufacturing.net

The potential introduction of quantum computers capable of breaking asymmetric cryptography could place the entire digital world at risk, and this risk is becoming increasingly plausible every day. This has led HP to unveil PCs designed to protect firmware against quantum computer attacks.

Research shows that 27 percent of experts think there is a 50 percent likelihood of a cryptographically relevant quantum computer (CRQC) by 2033. When that day comes, the security of existing digital signatures on firmware and software will be in question. (Read more on Anticipating the Quantum Threat to Cryptography here.)

Migrating the entire digital world to a new cryptographic standard is a huge undertaking, and while software can be updated, hardware can't. That includes some of the cryptography that protects PC firmware. With no cryptographic protections in place, no device would be safe: attackers could access and modify the underlying firmware and gain total control.

HP also recommends that customers start to assess how and when to begin migrating all other aspects of their information systems to quantum-resistant cryptography, and suggests three steps to begin planning.

For further information on our fifth-generation ESC chip availability, click here.


Alice & Bob and partners awarded €16.5M to slash quantum computing costs – Tech.eu

Quantum computing hardware developer Alice & Bob and academic partners ENS de Lyon and Mines Paris-PSL today announced the receipt of a €16.5 million innovation grant, a France 2030 initiative operated by Bpifrance, France's public investment bank.

Alice & Bob is based in Paris and Boston and is working to create the first universal, fault-tolerant quantum computer. Founded in 2020, the company has raised €30 million in funding, hired over 95 employees and demonstrated experimental results surpassing those of technology giants such as Google and IBM.

Alice & Bob specialises in cat qubits, a pioneering technology developed by the company's founders and later adopted by Amazon.

The funded project, called "Cat Factory," brings industry and government partners to tackle quantum computing's critical issues across various enabling technologies, including nanofabrication, chip design and validation, digital tools and electronic control.

The research from the three partners aims to accelerate quantum computing by enhancing the efficiency of the entire stack, reducing costs, and accelerating market readiness.

The goal is to develop a new optimised architecture for fault-tolerant quantum computing by 2027 that will allow the following:

- Reduction of the number of control lines per cat qubit from 4.5 to 2
- Reduction of the readout lines per cat qubit from 1 to 0.2

To achieve this degree of optimisation, the infrastructure of enabling technologies surrounding the Quantum Processing Unit (QPU) will be updated to:

- Increase the number of analogue ports per rack from 60 to 180, dividing the footprint of the control electronics by three
- Increase the number of control lines per cryostat from 200 to 2,000 using next-generation cabling technology

According to Theau Peronnin, CEO of Alice & Bob:

"We are honoured to be entrusted with making quantum computing useful earlier. Our plan, centred around cat qubits, addresses the real challenges of quantum computing headfirst, enabling massive savings in energy and end-user costs."

The projected cost of quantum computation due to cryogenics and the control of large sets of qubits presents a barrier to widespread adoption. Alice & Bob will use the funding to optimise quantum computation, from design to manufacturing and infrastructure, to make quantum computers ten times cheaper to build and ready for market three years earlier.

"Quantum computing algorithms require hundreds of logical qubits, which translates to thousands to millions of physical qubits," said Florent Di Meglio, the project's lead at Mines Paris-PSL.

"Cat Factory aims to reach 100 logical qubits with only three cryostats, a dramatic reduction in the hardware needed for running a useful quantum computer."

To achieve this goal, the partners will work on the whole quantum computer architecture and the infrastructure of enabling technologies surrounding it. The project's cornerstone, the cat qubit, already reduces the number of physical qubits required to build a logical one by a factor of 60.
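As a back-of-envelope check, the wiring targets quoted above are consistent with the 100-logical-qubit goal. The sketch below uses only the figures from the article and assumes the per-qubit line counts are the binding constraint on cryostat capacity; the per-logical-qubit figure is an inference, not a number the article states.

```python
# Back-of-envelope check of the Cat Factory targets quoted in the article.
lines_per_cat_qubit = 2.0 + 0.2      # control (2) + readout (0.2) lines, 2027 target
lines_per_cryostat = 2000            # next-generation cabling target
cryostats = 3
logical_qubits = 100                 # stated goal

cat_qubits_per_cryostat = lines_per_cryostat / lines_per_cat_qubit
total_cat_qubits = cryostats * cat_qubits_per_cryostat
cat_qubits_per_logical = total_cat_qubits / logical_qubits

print(round(cat_qubits_per_cryostat))   # -> 909 cat qubits per cryostat
print(round(cat_qubits_per_logical))    # -> 27 cat qubits per logical qubit
```

Roughly 27 cat qubits per logical qubit is a plausible overhead for the repetition-code approach cat qubits enable, which is what makes the three-cryostat target credible.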

Paul-François Fournier, Executive Director of Innovation at Bpifrance, shared:

"We are delighted to support Alice & Bob in its development, which aims to accelerate quantum computing's progress. This support reflects Bpifrance's strong ambitions in terms of disruptive innovation."

Lead image: Alice & Bob.


Quantum startup Alice & Bob receives innovation grant to make quantum computing cheaper – DatacenterDynamics

Quantum startup Alice & Bob has received a €16.5 million ($17.8 million) innovation grant from France's public investment bank, Bpifrance.

The French startup is the first company to receive all four innovation grants from Bpifrance, and the funds will be used to finance a 36-month project that will see the company enhance the efficiency of its quantum stack, reducing manufacturing costs and time to market.

The deal has been endorsed by the office of Prime Minister Gabriel Attal.

Dubbed Cat Factory after cat qubit technology, the project will bring together industry, government, and academic partners from ENS de Lyon and Mines Paris (PSL) to address a number of critical issues related to quantum computing, including nanofabrication, chip design and validation, digital tools, and electronic control.

The team is aiming to develop a new optimized architecture for fault-tolerant quantum computing by 2027 that will allow a reduction in the number of control lines per cat qubit from 4.5 to 2, and a reduction in the readout lines per cat qubit from 1 to 0.2.

In order to achieve this, the Quantum Processing Unit (QPU) architecture will be updated to increase the number of analog ports per rack from 60 to 180, and increase the number of control lines per cryostat from 200 to 2,000.

"Quantum computing algorithms require hundreds of logical qubits, which translates to thousands to millions of physical qubits," said Florent Di Meglio, lead on the project at Mines Paris-PSL. "Cat Factory aims to reach 100 logical qubits with only 3 cryostats, a dramatic reduction in the hardware needs for running a useful quantum computer."

Theau Peronnin, CEO of Alice & Bob, added: "We are honored to be entrusted with the task of making quantum computing useful earlier. Our plan, centered around cat qubits, addresses the real challenges of quantum computing headfirst, enabling massive savings in energy and end-user costs."

Founded in 2020, Alice & Bob has already raised €30 million across six funding rounds to develop a fault-tolerant quantum processor.

Earlier this month, the company was given access to €500 million in funding after being invited by the French Ministry for Armed Forces to participate in the country's PROQCIMA initiative.

Those project participants will work to deliver a universal fault-tolerant quantum computer demonstrator with 128 logical qubits by 2030, and its industrialization into a 2,048-logical-qubit computer by 2035.


3 Quantum Computing Stocks to Buy for the Next Bull Run: March 2024 – InvestorPlace

There are some quantum computing stocks to buy for March that I think could lift off to new heights.

Quantum computing is an emerging and potentially revolutionary technology that could have a profound impact on various industries and fields. The market potential for quantum computing is immense. It is widely regarded as one of the most promising technological advancements of the 21st century.

The great thing about these companies is that many of them are speculative investments and therefore trade at attractive valuations. I think these companies are primed for the next bull run: as the Nasdaq moves higher, so too could these picks.

So, here are three quantum computing stocks for investors to consider for March this year.

Source: Amin Van / Shutterstock.com

IonQ (NYSE:IONQ) distinguishes itself as a pure-play quantum computing company. They have a strong focus on developing trapped ion quantum computers.

For 2024, IonQ projects full-year revenue of between $37 million and $41 million, with bookings expected to come in between $70 million and $90 million. For the first quarter of 2024, revenue is forecast at between $6.5 million and $7.5 million. Despite these projections, IONQ anticipates an adjusted EBITDA loss of $110.5 million for 2024.

The company's performance in 2023 set a strong foundation for these forecasts, with significant achievements including $65.1 million in bookings for the year, exceeding the upper end of its guidance and representing 166% growth over the previous year. Revenue for 2023 was reported at $22.042 million, a substantial increase from $11.131 million in 2022.
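The growth figures hold up against the reported numbers. A quick sanity check (dollar amounts as quoted above; the 2022 bookings figure is implied, not reported here):

```python
# Sanity-check the IonQ growth figures quoted in the article (amounts in $M).
rev_2023, rev_2022 = 22.042, 11.131
revenue_growth = (rev_2023 / rev_2022 - 1) * 100
print(f"revenue growth: {revenue_growth:.0f}%")         # -> 98%

bookings_2023, bookings_growth_pct = 65.1, 166          # $65.1M, +166% YoY
implied_bookings_2022 = bookings_2023 / (1 + bookings_growth_pct / 100)
print(f"implied 2022 bookings: ${implied_bookings_2022:.1f}M")  # -> $24.5M
```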

I see the projected loss as potentially being a good thing for IONQ investors. This could keep its valuation down to acceptable levels. Due to its small market cap of $1.9 billion, it could rise significantly along with the broader market amid a bull run.

Source: Bartlomiej K. Wroblewski / Shutterstock.com

Rigetti Computing (NASDAQ:RGTI) is known for developing quantum integrated circuits. They also offer a cloud platform that supports quantum algorithm development.

In my opinion, RGTI is one of the more underestimated companies on this list. This is because it offers more of the picks and shovels to the quantum industry rather than being a pure-play option like IONQ. Investing in RGTI therefore gives more indirect than direct exposure to the industry, which could make it a strong diversifier.

In terms of outlook and developments, RGTI made significant progress in 2023, including the deployment of the 84-qubit Ankaa-2 quantum computer, which achieved a 98% median 2-qubit fidelity and a 2.5x improvement in error performance compared to its previous quantum processing units (QPUs).

Underscoring why I believe that it could be a strong contender, analysts have given RGTI a Moderate Buy rating, with a consensus price target of $2.75, indicating a potential upside of 71.34% to be reached within the next twelve months.
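For context, a 71.34% upside to a $2.75 target implies a share price of roughly $1.60 at the time of the rating; the arithmetic below is a quick check, not a price quoted in the article:

```python
# Implied current share price from the analyst target and quoted upside.
price_target = 2.75
upside_pct = 71.34
implied_price = price_target / (1 + upside_pct / 100)
print(f"${implied_price:.2f}")   # -> $1.60
```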

Source: JHVEPhoto / Shutterstock.com

IBM (NYSE:IBM) extends its influence in quantum computing beyond hardware.

I chose IBM for investors who want a well-diversified blue-chip investment rather than the more speculative companies on this list. Although its potential for capital growth may be lower, its dividend yield of 3.52% at the time of writing makes it a solid and safer choice.

IBM is also expanding its global footprint in quantum computing with the establishment of its first European quantum data center in Germany, set to open in 2024. This facility will enable users in Europe to access IBM's quantum computing systems and services.

Hardware-wise, IBM has introduced advanced processors like the 133-qubit Heron and the 433-qubit Osprey. Meanwhile, on the software front, IBM is evolving its Qiskit platform with updates that promise to increase the ease of quantum software programming.

IBM thus has many irons in the fire to take advantage of the rise of quantum computing, which, along with its stability and dividend yield, makes it one of those stocks that could rise in a bull run. If you are looking for quantum computing stocks to buy, you can't go wrong with these.

On the date of publication, Matthew Farley did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Matthew started writing coverage of the financial markets during the crypto boom of 2017 and was also a team member of several fintech startups. He then started writing about Australian and U.S. equities for various publications. His work has appeared in MarketBeat, FXStreet, Cryptoslate, Seeking Alpha, and the New Scientist magazine, among others.


Chattanooga State Takes A Quantum Leap With Expert Insights Into The Future Of Computing – Chattanooga Pulse

Chattanooga State Community College will take a deep dive into the world of quantum computing alongside physics and computer science expert Dr. Shohini Ghose.

On April 3, Dr. Ghose will join ChattState students, faculty, and members of the Chattanooga community virtually to discuss the emerging field of quantum computing. The event will kick off at 9:30 a.m. with coffee and pastries, followed by Dr. Ghose's presentation at 10 a.m. The lecture is titled "Preparing for Quantum 2.0."

The reception and lecture will be held in BRANCH 30 on the ChattState main campus. A live stream of the lecture will also be accessible on YouTube.

Dr. Ghose is a professor of Physics and Computer Science at Wilfrid Laurier University. Her areas of expertise include quantum computing, quantum communication, and classical and quantum chaos.

The event is hosted by ChattState's Honors Program, said organizer Dr. Angie Wood, professor of social and behavioral sciences and director of ChattState Honors.

Dr. Wood said she challenged honors students last year to research the field of quantum computing and find an expert to speak on campus. They ultimately chose to invite Dr. Ghose after viewing her TED Talk titled "A Beginner's Guide to Quantum Computing."

Dr. Wood said offering educational opportunities outside of the classroom is one way ChattState gives students opportunities to further their futures and broaden their horizons.

"College is about more than just going to class. It's also about networking and the contacts that you gain outside of the classroom," she said. "You never know when you will walk into an event like this and realize it's what you're passionate about."

The event is presented alongside the Chattanooga Quantum Collaborative, a local organization that aims to use quantum technology to improve the local workforce, economy, and infrastructure.

The organization is supported by several founding members: EPB, the City of Chattanooga, TVA, UTC, Oak Ridge National Lab, Hamilton County, ChattState, Chattanooga Area Chamber of Commerce, Company Lab, Hamilton County Schools, and Qubitekk.


UNI’s Begeman Lecture to explore how quantum computing is revolutionizing our world – Cedar Valley Daily Times

Quantum computing, and how it's revolutionizing our world, is the focus of this year's Begeman Lecture in Physics at the University of Northern Iowa.

The lecture, titled "Building a Quantum Computer, One Atom at a Time," will be presented by UNI Department of Physics alum Justin Bohnet on Wednesday, April 3 at 7 p.m. in the Lang Hall Auditorium. The event is free and open to the public.

"Justin is in the vanguard of efforts to develop quantum computers for widespread use," said Paul Shand, head of the UNI Department of Physics. "We're excited for him to share more about quantum computers and how they will turbocharge computing in the future."

Bohnet is the research and development manager at Quantinuum, a quantum computing company whose mission is to accelerate quantum computing and use its power to achieve unprecedented breakthroughs in drug discovery, health care, materials science, cybersecurity, energy transformation and climate change.

In this lecture, Bohnet will share his personal journey from a student at UNI to building the world's most powerful quantum computers, powered by control over single atoms. Along the way, you'll get a crash course on quantum computers: what they are, how they work and why we're standing on the brink of a technological revolution that will let us explore uncharted territories of science and technology.

If you need a reasonable accommodation in order to participate in this event, please contact the UNI Department of Physics by calling 319-273-2420 or by emailing physics@uni.edu prior to the event.


Exploring the potential of quantum reservoir computing in forecasting the intensity of tropical cyclones – Moody’s Analytics

What is the problem?

Accurately predicting the intensity of tropical cyclones, defined as the maximum sustained windspeed over a period of time, is a critical yet challenging task. Rapid intensification (RI) events are still a daunting problem for operational intensity forecasting.

Better forecasts and simulation of tropical cyclone (TC) intensities and tracks can significantly improve the quality of Moody's RMS tropical cyclone modeling suite. RMS has helped clients manage their risk during TC events in the North Atlantic for almost 20 years. Real-time TCs can significantly impact a company's financial, operational, and overall solvency state. Moody's RMS HWind product helps (re)insurers, brokers, and capital markets understand the range of potential losses across multiple forecast scenarios, capturing the uncertainty in how track and intensity will evolve.

With the advances in Numerical Weather Prediction (NWP) and new meteorological observations, forecasts of TC movement have progressively improved in global and regional models. However, accurately forecasting TC intensities remains challenging for operational weather forecasting, and consequently for assessing weather impacts such as high winds, storm surges, and heavy rainfall.

Since the current spatial resolution of the NWP model is insufficient for resolving convective-scale processes and the inner-core dynamics of the cyclone, forecast intensities of TCs from operational models are mostly underestimated, or low-biased. Yet accurate TC intensity guidance is crucial not only for assessing the impact of the TC, but also for generating realistic projections of storms and their associated hazards. This is essential for effective risk evaluation. Conventional TC intensity forecasting mainly relies on three approaches: statistical, dynamical, and statistical-dynamical methods.

Dynamical models, also known as numerical models, are the most complex and use high performance computing (HPC) to solve the physical equations of motion governing the atmosphere. While statistical models do not explicitly consider the physics of the atmosphere, they are based on historical relationships between storm behavior and storm-specific details such as location and intensity.

The rise of Machine Learning (ML) and Deep Learning (DL) has led to attempts to create breakthroughs in climate modeling and weather forecasting. Recent advances in computational capabilities and the availability of extensive reanalysis of observational or numerical datasets have reignited interest in developing various ML methods for predicting and understanding the dynamics of complex systems.

One of our key objectives is to build a quantum reservoir computing-based model, capable of processing climate model outputs and storm environment parameters, to provide more accurate forecasts, which will improve short-term and real-time TC risk analysis.

Official modeling centers use consensus or ensemble-based dynamical models and represent the state of the art in tropical cyclone forecasting. However, these physics-based models may be subject to bias derived from high wind shear, low sea surface temperatures, or the storms location in the basin. By learning from past forecasting errors, we may be able to identify and correct past model biases, thereby greatly enhancing the quality of future forecasting and risk modeling products. The long-term aim is to integrate ML-based elements into coarse global climate models to improve their resolution and include natural dynamical processes currently absent in these models.

Reservoir Computing (RC) is a novel machine-learning algorithm particularly suited to quantum computers and has shown promising results in early non-linear time series prediction tests. In a classical setting, RC is stable and computationally simple. It works by mapping input time series signals into a higher dimensional computational space through the dynamics of a fixed, non-linear system known as a reservoir. This method is efficient, trainable, and has a low computational cost, making it a valuable tool for large-scale climate modeling.
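To make the classical RC idea concrete, here is a minimal pure-Python sketch (reservoir size, weight scales, the sine-wave task and the ridge parameter are all illustrative choices, not from the article): a fixed random recurrent network maps a scalar time series into a higher-dimensional state, and only a linear readout is trained.

```python
import math
import random

random.seed(0)

N = 50        # reservoir size (illustrative)
T = 300       # length of the driving time series

# Fixed random weights: in reservoir computing these are *never* trained.
w_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
w_res = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def update(state, u):
    """One reservoir step: tanh nonlinearity over recurrent + input drive."""
    return [math.tanh(sum(w_res[i][j] * state[j] for j in range(N)) + w_in[i] * u)
            for i in range(N)]

# Drive the reservoir with a sine wave; the task is one-step-ahead prediction.
series = [math.sin(0.2 * t) for t in range(T + 1)]
state, states, targets = [0.0] * N, [], []
for t in range(T):
    state = update(state, series[t])
    if t >= 50:                      # discard the initial transient
        states.append(state[:])
        targets.append(series[t + 1])

# Train only the linear readout, by ridge regression via the normal
# equations (S^T S + lam*I) w = S^T y, solved with Gaussian elimination.
lam = 1e-4
A = [[sum(s[i] * s[j] for s in states) + (lam if i == j else 0.0)
      for j in range(N)] for i in range(N)]
b = [sum(s[i] * y for s, y in zip(states, targets)) for i in range(N)]
for col in range(N):                              # forward elimination
    piv = max(range(col, N), key=lambda r: abs(A[r][col]))
    A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
    for r in range(col + 1, N):
        f = A[r][col] / A[col][col]
        for c in range(col, N):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
w_out = [0.0] * N
for r in range(N - 1, -1, -1):                    # back substitution
    w_out[r] = (b[r] - sum(A[r][c] * w_out[c] for c in range(r + 1, N))) / A[r][r]

pred = sum(w_out[i] * states[-1][i] for i in range(N))
print(f"prediction {pred:+.3f}  target {targets[-1]:+.3f}")
```

The quantum variant described below replaces the fixed random network with the dynamics of a quantum system, while keeping the same cheap, linear training step.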

While quantum machine learning has been considered a promising application for near-term quantum computers, current quantum machine learning methods require large quantum resources and suffer from gradient vanishing issues. Quantum Reservoir Computing (QRC) has the potential to combine the efficient machine learning of classical RC with the computing power of complex and high-dimensional quantum dynamics. QRC takes RC a step further by leveraging the unique capabilities of quantum processing units (QPUs) and their exponentially large state space, resulting in rich dynamics that cannot be simulated on a conventional computer. In particular, the flexible atom arrangements and tunability of optical controls within QuEra's neutral-atom QPU enable the realization of a rich class of Hamiltonians acting as the reservoir.

Recent studies on quantum computing simulators and hardware suggest that certain quantum model architectures used for learning on classical data can achieve results similar to that of classical machine learning models while using significantly fewer parameters. Overall, QRC offers a promising approach to resource-efficient, noise-resilient, and scalable quantum machine learning.

In this project, we are collaborating with QuEra Computing, the leading provider of quantum computers based on neutral atoms, to explore the benefits of using quantum reservoir computing in climate science and to investigate the potential advantages that the quantum layer from QuEra can bring. QuEra's neutral-atom QPU and the types of quantum simulations it can perform give rise to different quantum reservoirs. This unique capability can potentially enhance the modeling of tropical cyclone intensity forecasts and data.

This collaboration involves multiple stakeholders and partners, including QuEra Computing Inc., Moody's RMS technical team, and Moody's Quantum Taskforce. The work is supported by a DARPA grant award, underscoring its significance and potential impact in tropical cyclone modeling and forecasting.

In summary, combining quantum machine learning methods, reservoir computing, and the quantum capabilities of QuEra's technology offers a promising approach to addressing the challenges in predicting tropical cyclone intensity. This collaboration aims to enhance the quality and efficiency of tropical cyclone modeling, ultimately aiding in better risk assessment and decision making in the face of these natural disasters.


Creating ‘good’ AGI that won’t kill us all: Crypto’s Artificial Superintelligence Alliance – Cointelegraph

After a year of increasingly dire warnings about the imminent demise of humanity at the hands of superintelligent artificial intelligence (AI), Magazine is in Panama at the Beneficial AGI Conference to hear the other side of the story. Attendees include an eclectic mix of transhumanists, crypto folk, sci-fi authors including David Brin, futurists and academics.

The conference is run by SingularityNET, a key member of the proposed new Artificial Superintelligence Alliance, to find out what happens if everything goes right with creating artificial general intelligence (AGI): human-level machine intelligence.

But how do we bring about that future, rather than the scenario in which Skynet goes rogue and kills us all?

One of the best insights into why those questions are so important comes from futurist Jose Luis Cordeiro, author of The Death of Death, who believes humanity will cure all diseases and aging thanks to AGI.

He tells Magazine of some sage wisdom that Arthur C. Clarke, the author of 2001: A Space Odyssey, once told him.

He said: "We have to be positive about the future, because the images of the future of what's possible begin with our minds. If we think we will self-destroy, most likely we will. But if we think that we will survive, [that] we will move into a better world, [then we] will work toward that and we will achieve it. So it begins in our minds."

Humans are hardwired to focus more on the existential threats from AGI than on the benefits.

Evolutionarily speaking, it's better that our species worries nine times too often that the wind rustling in the bushes could be a tiger than it is to be blithely unconcerned about the rustling and get eaten by a tiger on the 10th occurrence.

Even the doomers don't put a high percentage chance on AGI killing us all: a survey of almost 3,000 AI researchers suggests the chance of an extremely bad outcome ranges from around 5% to 10%. So while that's worryingly high, the odds are still in our favor.

Opening the conference, SingularityNET founder and "Father of AGI" Dr. Ben Goertzel paid tribute to Ethereum founder Vitalik Buterin's concept of defensive accelerationism. That's the midpoint between the effective accelerationism techno-optimists, with their "move fast and break things" ethos, and the decelerationists, who want to slow down or halt the galloping pace of AI development.

Goertzel believes that deceleration is impossible but concedes there's a small chance things could go horribly wrong with AGI. So he's in favor of pursuing AGI while being mindful of the potential dangers. Like many in the AI/crypto field, he believes the solution is open-sourcing the technology and decentralizing the hardware and governance.

This week SingularityNET announced it has teamed up with the decentralized multi-agent platform FetchAI, founded by DeepMind veteran Humayun Sheikh, and the data exchange platform Ocean Protocol to form the Artificial Superintelligence Alliance (ASI).

It will be the largest open-sourced independent player in AI research and development and has proposed merging SingularityNET's, FetchAI's and Ocean Protocol's existing tokens into a new one called ASI. It would have a fully diluted market cap of around $7.5 billion, subject to approval votes over the next two weeks. The three platforms would continue to operate as separate entities under the guidance of Goertzel, with Sheikh as chair.

According to the Alliance, the aim is to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization by creating decentralized AI infrastructure at scale and accelerating investment into blockchain-based AGI.

Probably the most obvious beneficial impact is AGI's potential to analyze huge swathes of data to help solve many of our most difficult scientific, environmental, social and medical issues.

We've already seen some amazing medical breakthroughs, with MIT researchers using AI models to evaluate tens of thousands of potential chemical compounds and discovering the first new class of antibiotics in 60 years, one that's effective against the hitherto drug-resistant MRSA bacteria. It's the sort of scaling up of research that's almost impossible for humans to achieve.


And that's all before we get to the immortality and mind-uploading stuff that the transhumanists get very excited about but which weirds most people out.

This ability to analyze great swathes of data also suggests the technology will be able to give early warnings of pandemics, natural disasters and environmental issues. AI and AGI also have the potential to free humans from drudgery and repetitive work, from coding to customer service help desks.

While this will cause a massive upheaval in the workforce, so did the invention of washing machines and Amazon's online businesses, which had big impacts on particular occupations. The hope is that a bunch of new jobs will be created instead.

Economics professor Robin Hanson says this has happened over the past two decades, even though people were very concerned at the turn of the century that automation would replace workers.

Hanson's study of the data on how automation impacted wages and employment across various industries between 1999 and 2019 found that, despite big changes, most people still had jobs and were paid pretty much the same.

"On average, there wasn't a net effect on wages or jobs from automation of U.S. jobs from 1999 to 2018," he says.

Janet Adams, the optimistic COO of SingularityNET, explains that AGI has the potential to be extraordinarily positive for all humanity.

"I see a future in which our future AGIs are making decisions which are more ethical than the decisions which humans make. And they can do that because they don't have emotions or jealousy or greed or hidden agendas," she says.

Adams points out that 25,000 people die every day from hunger, even as people in rich countries throw away mountains of food. It's a problem that could be solved by intelligent allocation of resources across the planet, she says.

But Adams warns AGI needs to be trained on data sets reflecting the entire world's population, and not just the top 1%, so that "when they make decisions, they won't make them just for the benefit of the powerful few, they will make them for the benefit of the broader civilization, broader humanity."

Anyone who watched the early utopian dreams of a decentralized internet crumble into a corporate ad-filled landscape of addictive design and engagement farming may have doubts this rosy future is possible.

Building high-end AI requires a mountain of computing and other resources that are currently out of reach of all but a handful of the usual suspects: Nvidia, Google, Meta and Microsoft. So the default assumption is that one of these tech giants will end up controlling AGI.

Goertzel, a long-haired hippy who plays in a surprisingly good band fronted by a robot, wants to challenge that assumption.

Goertzel points out that the default assumption used to be that companies like IBM would win the computing industry and Yahoo would win search.

"The reason these things change is because people were concretely fighting to change it in each instance," he says. Instead, Bill Gates, Steve Jobs and the Google guys came along.

The founder of SingularityNET, he's been thinking about the Singularity (a theoretical moment when technological development increases exponentially) since the early 1970s, when he read an early book on the subject called The Prometheus Project.

He's been working on AGI for much of the time since then, popularizing the term AGI and launching the OpenCog AI framework in 2008.

Adams says Goertzel is a key reason SingularityNET has a credible shot.

"We are the biggest not-for-profit, crypto-funded AI science and research team on the planet," Adams says, noting their competitors have been focused on narrow AIs like ChatGPT and are only now shifting their strategy to AGI.

"They're years behind us," she says. "We have three decades of research with Dr. Ben Goertzel in neural symbolic methods."

But she adds that opening up the platform to any and all developers around the world and rewarding them for their contribution will give it the edge even over the mega-corporations who currently dominate the space.

"Because we have a powerful vision and a powerful commitment to building the most advanced, most intelligent AGI in a democratic way, it's hard to imagine that Big Tech or any other player could come in and compete, particularly when you're up against open source."

"[We will] see a potentially huge influx of people developing on the SingularityNET marketplace and the continued escalation of pace toward AGI. There's a good chance it will be us."

The Prometheus Project proposed that AI was such an earth-shattering development that everyone in the world should get a democratic vote on its development.

So when blockchain emerged, it seemed like implementing decentralized infrastructure and token-based governance for AI was the next most practical alternative.

HyperCycle CEO Toufi Saliba tells Magazine this mitigates the threat of a centralized company or authoritarian country gaining immense power from developing AGI first, which would be "the worst thing that ever happened to humanity."


It's not the only potential solution to the problem. Meta chief AI scientist Yann LeCun is a big proponent of open-sourcing AI models and letting a thousand flowers bloom, while X owner Elon Musk recently open-sourced the model for Grok.

But blockchain is arguably a big step up. SingularityNET aims to network the technology around the world, with different components controlled by different communities, thereby spreading the risk of any single company, group or government controlling the AGI.

"So you could use these infrastructures to implement decentralized deep neural networks, you could use them to implement a huge logic engine, you can use them to implement an artificial life approach where you have a simulated ecosystem and a bunch of little artificial animals interacting and trying to evolve toward intelligence," explains Goertzel.

"I want to foster creative contributions from everywhere, and it may be some, you know, 12-year-old genius from Tajikistan comes up with a new artificial life innovation that provides a breakthrough to AGI."

HyperCycle is a ledgerless blockchain that's fast enough to allow AI components to communicate, coordinate and transact to finality in under 300 milliseconds. The idea is to give AIs a way to call on the resources of other AIs, paid for via microtransactions.

For now, the fledgling network is being used for small-scale applications, like an AI app calling on another AI service to help complete a task. But in time, as the network scales, it's theoretically possible that AGI might be an emergent property of the various AI components working together in a sort of distributed brain.
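The pay-per-call pattern described here can be sketched in a few lines of Python. To be clear, everything below is a hypothetical illustration: the class names, fees, and in-memory ledger are invented for the example and are not HyperCycle's actual API or payment mechanics.

```python
# Hypothetical sketch of one AI service paying another per call,
# in the spirit of a microtransaction network. All names and fees
# are illustrative assumptions, not a real HyperCycle interface.

class Ledger:
    """Toy balance tracker standing in for an on-network payment layer."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, payer, payee, amount):
        if self.balances.get(payer, 0) < amount:
            raise ValueError(f"{payer} cannot cover {amount}")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount

class AIService:
    """An AI component that charges a micro-fee per request."""
    def __init__(self, name, fee, handler, ledger):
        self.name, self.fee, self.handler, self.ledger = name, fee, handler, ledger

    def call(self, caller, payload):
        # Payment settles before the work is done, mimicking
        # transact-to-finality ahead of computation.
        self.ledger.transfer(caller, self.name, self.fee)
        return self.handler(payload)

ledger = Ledger({"planner": 100})
summarizer = AIService("summarizer", fee=2,
                       handler=lambda text: text[:20], ledger=ledger)

result = summarizer.call("planner", "A very long document about AGI governance")
print(result)                      # the truncated "summary"
print(ledger.balances["planner"])  # 98 after paying the 2-unit fee
```

In a real deployment the ledger would be the network itself and the handler a remote AI model; the point of the sketch is only the shape of the interaction, where each service call carries its own tiny settled payment.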

"So, in that approach, the entire world has a much higher chance to get to AGI as a single entity," Saliba says.

Goertzel didn't develop HyperCycle for that reason; he just needed something miles faster than existing blockchains to enable AIs to work together.

The project he's most excited about is OpenCog Hyperon, which launches in alpha this month. It combines deep neural nets, logic engines, evolutionary learning and other AI paradigms in the same software framework, all updating the same extremely decentralized Knowledge Graph.

The idea is to throw open the doors to anyone who wants to work on it, in the hope they can improve the METTA AGI programming language so it can scale up massively. "We will have the complete toolset for building the baby AGI," he says. "To get something I would want to call a baby AGI, we will need that million-times speedup of the METTA interpreter."

"My own best guess is that OpenCog Hyperon may be the system to make the [AGI] breakthrough."

Of course, decentralization does not ensure things will go right with AGI. As Goertzel points out, the government of Somalia was decentralized very widely in the 1990s under a bunch of warlords and militias, but it would have been preferable at the time to live under the centralized government of Finland.

Furthermore, token-based governance is a long way from being fit for prime time. In projects like Uniswap and Maker, large holders like a16z and the core team have so many tokens its almost not worth anyone else voting. Many other decentralized autonomous organizations are wracked by politics and infighting.

The surging price of crypto/AI projects has attracted a bunch of token speculators. Are these really the people we want to put in control of AGI?

Goertzel argues that while blockchain projects are currently primarily attractive to people interested in making money, that will change as the use case evolves.

"If we roll out the world's smartest AI on decentralized networks, you will get a lot of other people involved who are not primarily oriented toward financial speculation. And then it'll be a different culture."

But if the Artificial Superintelligence Alliance does achieve AGI, wouldn't its tokens be ludicrously expensive and out of reach of those primarily interested in beneficial AGI?

Goertzel suggests that perhaps a weighted voting system that prioritizes those who have contributed to the project may be required:

"I think for guiding the mind of the AGI, we want to roll out a fairly sophisticated, decentralized reputation system and have something closer to one person, one vote, but where people who have some track record of contributing to the AI network and making some sense, get a higher weighting."
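As a rough illustration of what such a reputation-weighted vote could look like, here is a minimal sketch. The logarithmic weighting formula and the cap are assumptions invented for the example, not SingularityNET's actual design; the only idea taken from the quote is "roughly one person, one vote, with a bounded bonus for contributors."

```python
# Hypothetical sketch of reputation-weighted voting: every voter gets a
# base weight of 1.0, plus a diminishing, capped bonus per contribution.
# The log1p formula and the cap of 3.0 are illustrative assumptions.
import math

def vote_weight(contributions, cap=3.0):
    """Base weight 1.0 per person, plus a diminishing bonus for track record."""
    return min(1.0 + math.log1p(contributions), cap)

def tally(votes):
    """votes: list of (contributions, choice). Returns weighted totals per choice."""
    totals = {}
    for contributions, choice in votes:
        totals[choice] = totals.get(choice, 0.0) + vote_weight(contributions)
    return totals

# Two newcomers vs. one heavy contributor: the contributor's weight is
# capped at 3.0, so a small group of newcomers stays competitive.
votes = [(0, "A"), (0, "A"), (50, "B")]
print(tally(votes))
```

The design choice the sketch is meant to surface: a logarithm with a cap keeps the system closer to one-person-one-vote than token-weighted governance, since no amount of contribution history can buy unbounded influence.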


Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.

Read the rest here:

Creating 'good' AGI that won't kill us all: Crypto's Artificial Superintelligence Alliance - Cointelegraph

Read More..

Whoever develops artificial general intelligence first wins the whole game – ForexLive


Follow this link:

Whoever develops artificial general intelligence first wins the whole game - ForexLive

Read More..

Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI | TechPolicy.Press – Tech Policy Press

Based on the number of new bills across the states and in Congress, the number of working groups and reports commissioned by city, state, and local governments, and the drumbeat of activity from the White House, it would appear that it is an agenda-setting moment for policy regarding artificial intelligence (AI) in the United States. But the language describing AI research and applications continues to generate confusion and seed the ground for potentially harmful missteps.

Stakeholders agree that AI warrants thoughtful legislation, but struggle for consensus around problems and corresponding solutions. An aspect of this confusion is embodied by words we use. It is imperative that we not only know what we are talking about regarding AI, but agree on how we talk about it.

Last fall, the US Senate convened a series of closed-door meetings to inform US AI strategy. It brought together academics and civil society leaders, but was disproportionately headlined by prominent industry voices who have an interest in defining the terms of the discussion. From the expanding functionality of ever-larger AI models to the seemingly far-off threat to human existence, lawmakers and the public are immersed in AI branding and storytelling. Loaded terminology can mislead policymakers and stakeholders, ultimately causing friction between competing aspects of an effective AI agenda. While speculative and imprecise language has always permeated AI, we must emphasize nomenclature leaning more towards objectivity than sensationalism. Otherwise, US AI strategy could be misplaced or unbalanced.

Intelligence represents the promise of AI, yet it's a construct that's difficult to measure. The very notion is multifaceted and characterized by a fraught history. The intelligence quotient (IQ), the supposed numerical representation of cognitive ability, remains misused and misinterpreted to this day. Corresponding research has led to contentious debates regarding purported fundamental differences between IQ scores of Black, White, and Hispanic people in the US. There's a long record of dubious attempts to quantify intelligence in ways that cause a lot of harm, and it poses a real danger that language about AI might do the same.

Modern discussions in the public sphere give full credence to AI imbued with human-like attributes. Yet this idea serves as a shaky foundation for debate about the technology. Evaluating the power of current AI models relies on how they're tested, but the alignment between test results and our understanding of what they can do is often not clear. AI taxonomy today is predominantly defined by commercial institutions. Artificial general intelligence (AGI), for example, is a phrase intended to illustrate the point at which AI matches or surpasses humans on a variety of tasks. It suggests a future where computers serve as equally competent partners. One by one, industry leaders have now made AGI a business milestone. But it's uncertain how to know once we've crossed that threshold, and so the mystique seeps into the ethos.

Other examples illustrate this sentiment as well. The idea of a model's emergent capabilities nods to AI's inherent capacity to develop and even seem to learn in unexpected ways. Similar developments have convinced some users of a large language model's (LLM) sentience.

However, while these concepts remain disputed, other scientists contend that even though bigger LLMs typically yield better performance, the appearance of these phenomena ultimately depends on a practitioner's test metrics.

The language and research of the private sector disproportionately influences society on AI. Perhaps it's their prerogative; entrepreneurs and industry experts aren't wrong to characterize their vision in their own way, and aspirational vocabulary helps aim higher and broader. But it may not always be in the public interest.

These terms aren't technical jargon buried deep in a peer-reviewed article. They are tossed around every day in print, on television, and in congressional hearings. There's an ever-present tinge of not-quite-proven positive valence. On one hand, it's propped up with bold attributes full of potential, but on the other, often dismissed and reduced to a mechanical implement when things go wrong.

The potential societal impact is inevitable when unproven themes are parroted by policymakers who may not always have time to do their homework.

Politicians are not immune to the hype. Examples abound in the speeches of world leaders like UK Prime Minister Rishi Sunak and in the statements of President Joe Biden. Congressional hearings and global meetings of the United Nations have adopted language from the loudest, most visible voices providing a wholesale dressing for the entire sector.

What's missing here is the acknowledgement of how much language sets the conditions for our reality, and how these conversations play out in front of the media and public. We lack common, empirical, and objective terminology. Modern AI descriptors mean one thing to researchers, but may express something entirely different to the public.

We must call for intentional efforts to define and interrogate the words we use to describe AI products and their potential functionality. Exhaustive and appropriate test metrics must also justify claims. Ultimately, hypothetical metaphors can be distorting to the public and lawmakers, and this can influence the suitability of laws or inspire emerging AI institutions with ill-defined missions.

We can't press reset, but we can provide more thoughtful framing.

The effects of AI language are incredibly broad and indirect but, in total, can be enormously impactful. Steady and small-scale steps may deliver us to a place where our understanding of AI has been shaped, gradually modifying behavior by reinforcing small and successive approximations bringing us ever closer to a desired belief.

By the time we ask, "How did we get here?", the ground may have shifted underneath our feet.

Follow this link:

Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI | TechPolicy.Press - Tech Policy Press

Read More..