
6 Things I Learned About the Science of Quantum Computing from Entrust Engage – Security Boulevard

After exploring some of the basics of quantum computing in the introductory episode of the Entrust Engage podcast, episode two takes listeners deeper into the science behind this topic with an interview featuring Dr. Carmen Palacios-Berraquero, award-winning quantum physicist and CEO of Nu Quantum. Many interesting areas were covered, from a brief history of quantum computing to what the future benefits of this field might be. Here are the six key takeaways from this episode:

#1: The episode kicked off with a concise history of quantum computing, which goes back to the 1980s, when people first began to look at how to apply quantum theory to computing. Then in the 1990s, Shor's algorithm was developed, which dramatically sped up the solution of factorization problems and was the first step toward potentially breaking RSA encryption. Experimental physicists also found new hardware in which quantum information could be encoded. Progress in quantum computing continued to accelerate through the 2000s and 2010s, leading to a kind of modern-day space race between different hardware approaches. In those earlier years the field was still largely an academic endeavor, whereas in the past seven years the academic pioneers of this race have moved into industry to set up startups. In 2019, Google declared quantum supremacy. And in 2021, more than $3 billion was invested in quantum computing, further cementing its importance to the future of technology.

#2: So, what exactly is quantum supremacy? For starters, Dr. Palacios-Berraquero prefers the term quantum advantage. When Google announced that it had achieved this, what did that really mean? How significant was this?

Well, Google was essentially successful in using quantum computing to solve a problem that would have been infeasible for a classical computer. However, the problem it solved had no application in the real world. While the industry is moving toward solving commercially useful problems, there is still progress to be made before any organization can consider itself to have a true quantum advantage.

#3: There is consensus that the quantum computing threat to traditional public key algorithms will be a reality within the decade. However, taking Google's 2019 claim of quantum supremacy into consideration, the question arises: has this timeline been accelerated?

The answer: not necessarily. There are two main factors to consider here. The first is that it is very hard to scale these machines. The second is that quantum computers' processors are error-prone. Error correction schemes are very complex and consume many physical qubits to encode each logical qubit, leaving fewer qubits available to perform logical computations. So, even with the progress made to date, the threat timeline of quantum computers has not accelerated.
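To put that overhead in perspective, here is a rough back-of-the-envelope sketch in Python. The surface-code scaling rule and the specific numbers are illustrative assumptions on our part, not figures from the episode:

```python
# Rough illustration of why error correction eats up qubits.
# The "roughly 2 * d^2 physical qubits per logical qubit" rule is a
# common surface-code approximation; d = 17 is a placeholder value.

def physical_qubits_needed(logical_qubits: int, code_distance: int) -> int:
    """Surface-code estimate: about 2 * d^2 physical qubits per logical qubit."""
    return logical_qubits * 2 * code_distance ** 2

logical = 1_000   # hypothetical algorithm needing 1,000 logical qubits
d = 17            # hypothetical code distance
print(physical_qubits_needed(logical, d))  # 578,000 physical qubits
```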

#4: Is the news about quantum computing all about its threats and challenges? Echoing what we learned from episode one, absolutely not! There are major benefits that quantum computing can unlock in the future. For starters, quantum computing can crack those intractable problems we can't solve today. This opens up entirely new applications, markets, and industries. Some examples include material and drug design, paving the way for innovations in healthcare and the battle against climate change. In the near term, quantum computing promises benefits like financial portfolio optimization, improvements in machine learning algorithms, and the simulation of quantum and physical systems.

#5: What is a quantum random number generator, and how different is it from what we know of entropy in cryptography today? The quantum random number generator is based on a central proposition of quantum theory: that the outcome of a measurement is completely unpredictable. It uses this principle to generate entropy, i.e. random numbers. Over the past decade, it has been proposed to use these generators as an entropy source for cryptography.

The main difference is that the entropy currently used in cryptography is based on classical mechanics, where theoretically everything is predictable. In theory, by knowing the exact functioning of a system and combining that with a lot of computing power, you could predict the outcome of a classical random number generator.
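A tiny Python sketch makes the contrast concrete: a classical pseudo-random generator is fully determined by its internal state, so anyone who learns the seed can reproduce, and therefore predict, every output. A quantum generator has no such reproducible state:

```python
import random

# Two classical generators initialised with the same seed produce
# identical "random" streams -- predictable to anyone who knows the state.
alice = random.Random(42)
eve = random.Random(42)

assert [alice.getrandbits(32) for _ in range(5)] == \
       [eve.getrandbits(32) for _ in range(5)]
print("Eve reproduced every number Alice generated.")
```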

The reality is quite different, though. Current cryptography uses mathematical tools in addition to a random number generator, making it practically impossible to crack. While the industry is debating the uses and applications of quantum random number generators, it's still a long way from adopting them in cryptography.

#6: Additional benefits in development include quantum computing as a service (QCaaS) and the quantum internet. These are two very different things. QCaaS is a means by which users can access quantum computers via the cloud. For example, AWS hosts around five quantum computers, and a user can buy time on them through the cloud and run various algorithms. And there's a long line of users queued up to use these computers. Who are these users? A mix of academics and researchers as well as R&D departments in industry. However, these machines are still in the labs of companies, and it will be a while before they can function independently in a data center.

Now let's unravel the service known as the quantum internet. Picture a computer network that can send quantum information between distant computers, and there you have it in a nutshell. This technology is still largely confined to the realm of academia, and it's still unknown what the exact commercial application will be. What we do know is that it's still some years away.

The science behind quantum computers is pretty fascinating, and if you're looking to learn more, I recommend listening to the second episode of Entrust Engage. For more information and resources on post-quantum and how to prepare, visit our webpage.

The post 6 Things I Learned About the Science of Quantum Computing from Entrust Engage appeared first on Entrust Blog.

*** This is a Security Bloggers Network syndicated blog from Entrust Blog authored by Lavanya Suvarna. Read the original post at: https://www.entrust.com/blog/2023/04/6-things-i-learned-about-the-science-of-quantum-computing-from-entrust-engage/

See the rest here:
6 Things I Learned About the Science of Quantum Computing from Entrust Engage - Security Boulevard


On the quest for qubits: Different types of quantum startups, explained – Sifted

With hundreds of European startups working on quantum and plenty of fresh cash being injected into the sector, the race to build a fully fledged large-scale quantum computer is on.

Unlike classical computers that operate on binary bits (0 or 1), quantum computers use qubits, or quantum bits, which can exist simultaneously in multiple states, allowing for parallel computations. This could allow quantum computers to tackle problems that even supercomputers can't handle.
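As a toy illustration of superposition (just the underlying linear algebra, not a claim about any vendor's hardware), a single qubit can be simulated as a two-entry complex vector:

```python
import numpy as np

# |0> as a length-2 state vector.
ket0 = np.array([1, 0], dtype=complex)

# A Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5] -- a 50/50 coin until measured
```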

"A conventional computer is closer to an abacus than to a quantum computer," says Chris Ballance, cofounder of quantum computing startup Oxford Ionics.

"To build a quantum computer, you can pick anything in principle that obeys quantum mechanics: atoms, protons, electrons. This is why you see this zoo of different approaches to building the quantum computers," says John Morton, professor at UCL and CTO of London-based Quantum Motion.

We're currently in the noisy intermediate-scale quantum (NISQ) era, which means we have some quantum computing tech, but it's not yet advanced enough to solve a problem without errors or to outperform a classical computer. Once we go beyond this era, humanity could unlock applications from finance and drug discovery to finding new materials to stall climate change.

And there are lots of ways to get there. Here are some of the technologies European quantum computing startups are using, and how they work.

The most mature approach, used by the likes of IBM and Google, is superconducting qubits. At the basic level, a superconducting qubit is a circuit loop made of metals that become superconducting (i.e. able to conduct current without resistance when cooled down), with an electric current travelling around it; that current is used to store and process information. When Google claimed quantum supremacy in 2019, it used a 53-qubit superconducting device, and in 2022, IBM unveiled Osprey, a 433-qubit superconducting processor.

In the UK, Oxford Quantum Circuits (OQC) has built an eight-qubit superconducting quantum computer named Lucy. Brian Vlastakis, its quantum R&D lead, says that because all of its quantum information is encoded into electrical signals, it can use a lot of the same circuits that are used for other electronics.

The startup has been providing quantum-as-a-service since 2019. Lucy, for example, is available on the cloud (Amazon Braket) for customers to try out and learn more about how quantum computers could be useful for the problems that they're trying to solve.

Vlastakis says one of the reasons he's excited about OQC's technology is that "our architecture is incredibly flexible. We can essentially design many different quantum processor variations to function in a way that will work better for customers."

Another method is using trapped ion technology, which consists of trapping single atoms in place using an electromagnetic field. Unlike superconducting qubits, trapped ion qubits are identical to each other.

Ilyas Khan, cofounder and chief product officer of UK-headquartered Quantinuum, says trapped ion devices offer two advantages, stability and circuit depth, which provide relatively low error rates. However, it's not clear how scalable the tech will be, and the method is slower than superconducting approaches.

"At the moment there's no point to being fast if you can't do anything," says Khan.

While Quantinuum (and others such as IonQ and Alpine Quantum Technologies) rely on complex laser systems to control the trapped ions, Oxford Ionics uses a technology that can be integrated into a standard silicon chip.

Ballance says Oxford Ionics' focus is on optimising a lower number of qubits with very low error rates, rather than scaling the number of qubits massively.

"Most quantum computers on the market have far more qubits than they can use in useful computation because of the error rate," he says. "So for example, IBM have their 433-qubit devices they've launched, but when you benchmark them they perform less well than a perfect nine-qubit system. Our focus is getting to those few-hundred-qubit devices as fast as possible."
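One concrete way to read a benchmark like that (our gloss; Ballance does not name a metric) is IBM's quantum volume, defined as QV = 2^k, where k is the size of the largest "square" circuit (k qubits, k layers) a machine can run reliably. On that scale, a measured QV of 512 corresponds to a perfect nine-qubit machine, whatever the raw qubit count:

```python
import math

def effective_qubits(quantum_volume: int) -> int:
    """Quantum volume is defined as QV = 2^k, so k = log2(QV)."""
    return int(math.log2(quantum_volume))

# A hypothetical device with hundreds of physical qubits but a measured
# QV of 512 behaves, on this benchmark, like a flawless 9-qubit machine.
print(effective_qubits(512))  # 9
```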

While superconducting and trapped ion qubits were originally physics experiments in labs, Morton says Quantum Motion has a different approach: silicon-based qubits.

"We're ultimately saying that for quantum computers to be useful you're going to need a lot of qubits. What does a lot mean? Well, hundreds of thousands or millions of qubits," he says. "There aren't many technologies that make millions of anything; one example of something that has is the silicon transistor."

"If you don't try to correct for errors, then it's true maybe you can do something useful with just a hundred or a few hundred qubits, but the problem is you still are going to want to be able to run lots of problems, and run them many times, and so you still, in the end, want lots of qubits."

The startup hopes the silicon approach will be more scalable and cost-efficient, as it can build quantum processors with far less specialist technology, such as lasers or a high vacuum. Quantum Motion's approach packs qubits densely: its silicon-based quantum chips are typically a few millimetres across. Morton expects the cooling system required to operate the chip to fit in something similar to a standard 19-inch server rack.

Another approach is photonic qubits, made from particles of light. PsiQuantum, a US company founded in the UK, says photons are the only way to reach a million qubits, and a million qubits is the only way for a quantum computer to be useful.

"There are many advantages when you decide to use photons, because first of all photons are a quantum particle that have no mass and no charge, so that means that photons are less exposed to disturbance than other kinds of techniques," says Marine Xech-Gaspa, chief of staff to the CEO of Quandela, a French startup also betting on photonics. "So to be more concrete, they can be manipulated at room temperature, because you don't have to be in a specific environment; also, it consumes less energy."

Nordic Quantum Computing Group also has the aim of developing a quantum computing platform based on photonic integrated circuits.

Its focus is two-fold, according to Axel Mustad, its founder and CEO. On the hardware side, it will use quantum dot-based single photon sources, and on the software side it will develop algorithms which can be implemented on photonic hardware, in particular algorithms to solve hard problems in capital markets and financial services, and in energy management and trading.

Startups beyond those building hardware are also an important part of the race.

Steve Brierley is founder and CEO of Cambridge-based Riverlane, which is building an error correction layer (working across different qubit types) that different hardware companies can use.

"We call it an operating system, because operating systems manage complexity for the user," he says. "This is like an additional fabric that sits on top of the qubits; it really removes errors during the computation, and it means it can do much longer and ultimately trillions of operations before failure."

Bristol-based Phasecraft is working on algorithms to provide to hardware companies.

"If you want to do something useful you need to have a quantum algorithm to run on that quantum computer, because quantum computers are not just faster computers; you need to think in totally different ways to get the most out of them to do something useful," says Ashley Montanaro, cofounder of Phasecraft.

"We're particularly thinking about near-term quantum computers, so the kind of machines that we have now, or that we might have in the next two to five years."

Ultimately at this early stage, it would be impossible to claim one technology is better than another.

"That would be foolhardy and in fact misleading; we are years away from being able to evidence superiority in any given platform," says Khan. "If you look at this moment in time and you're able to magically transport yourself to 2030, it would be a bit like measuring a marathon in its first or second mile."

But what he can say with confidence is that the early signs are that different architectures might lend themselves better to certain tasks in the future.

"My expectation is that in 10 years, a lot of the dust will have settled, it'll become very clear and the market structure will have changed from lots of noise and lots of different approaches sprouting up to consolidation and stabilisation of one or two hardware platforms that cut the mustard and a few other hardware platforms that are specialised," says Ballance.

And while funding and access to talent stay on quantum founders' minds, the biggest battle is the sheer scale of the challenge facing quantum computing startups.

"It's equivalent to landing on the moon," says Brierley. "It's that kind of scale and ambition, and so that's going to require bringing together lots of different skills and expertise and ideas. I don't think any one company is going to solve this problem."

Steph Bailey is Sifted's head of content and coproduces Sifted's flagship podcast. She tweets from @steph_hbailey

Read this article:
On the quest for qubits: Different types of quantum startups, explained - Sifted


Wintersun’s Jari Menp starts collaboration with Finnish nuclear … – Chaoszine

Author Konsta Hietaniemi - 1.4.2023

Finnish epic metallers Wintersun have released a new statement on the current situation of their long-awaited Time II album. The band's frontman Jari Mäenpää published a long two-part update this morning, in which he stated that mixing the album is only possible with a quantum computer. A quantum computer is a computer that utilizes quantum technology, so far only at the prototype level, with a computing power several thousand times higher than the world's fastest supercomputers. However, in the second part of the update, Mäenpää offered a light at the end of the tunnel to all fans feverishly waiting for new music by announcing an ambitious collaboration to finish the album: Wintersun has teamed up with the Olkiluoto 3 nuclear power plant and the VTT Technical Research Centre of Finland. The plan is to build the world's most powerful quantum computer in connection with the power plant, the processing power of which, according to Mäenpää's calculations, should be enough to mix the thousands of tracks on the album.

"In 2014, I realized that mixing Time II wouldn't be possible with this generation's technology, so I set my sights on future solutions. After following the news, I noticed that the recent completion of Olkiluoto 3 resonated very strongly with my own creative process, and I can't say exactly why. At the same time, I also started getting to know quantum computers, but since there was no such thing in Apple's selection so far, I contacted VTT. They were so excited about the possible release of Time II that they joined the project without a moment's hesitation. The soon-to-be-completed computer is so powerful that it requires its own nuclear power plant as its power source, but luckily there were also loyal Wintersun fans in Olkiluoto, for whom the completion of Time II was more important than national electricity production." – Jari Mäenpää

The money for the project was raised through crowdfunding and merchandise sales. The uranium rods signed by the members and the Time II stress balls the band still had left sold like hot cakes. The long construction project is finally coming to an end, and the trial runs carried out throughout the winter have been promising. The quantum computer, dubbed the Extreme Majestic Technical Epic Quantum Computer, has managed to run almost the full track count without crashing, but work still needs to be done to completely stop the power plant outages caused by the crashes. Mäenpää estimates that the computer/power unit is about 90 percent complete.

"Inspired by the collaboration, I added several new tracks to the album. The computer needs to be developed a little more so that they run smoothly, but the improvement over the previous version is clear. I believe that it is possible to finally finish this great project. I still need to find the perfect wind chime sound for the album, because at this point it would be a shame to spoil perfection with a mediocre mix. But don't worry, I've already booked a flight to Japan to learn more about the topic. I promise the wait will be worth it!"

The release of Time II is finally getting close, but according to Mäenpää, it is only the beginning.

"The album sets that come after the Time albums are much more massive and multi-layered. For them, my eyes are turned towards the possibilities brought by fusion energy. I have also started teaching myself how to build a quantum computer, so that I can use a machine that suits my work as well as possible. It is hard for me to see how I could fully realize my visions of the future with third-party quantum computers."

Olkiluoto 3's electricity production is currently interrupted, but the power plant is scheduled to start up next month, once the maintenance work has been completed. The release date of the Time II album is (not yet) known.

Read more:
Wintersun's Jari Mäenpää starts collaboration with Finnish nuclear ... - Chaoszine


Husted outlines technological futures to Ohio leaders visiting Bucyrus – Telegraph-Forum

Lt. gov. says quantum computers, AI will revolutionize our lives

Ohio's future will look either ominous or auspicious based on decisions made today, Lt. Gov. Jon Husted told several dozen county leaders on Wednesday.

"We're at an incredibly interesting time in human history," Husted said. "None of us really know how it's going to affect our lives."

The state's second-in-command outlined his priorities Wednesday for nearly a hundred members of the Northwest Ohio Commissioners & Engineers Association during the body's quarterly meeting.

First, though, he paid homage to the event's host city, Bucyrus. He explained that his wife, Tina, is a native of Crawford County and a graduate of Colonel Crawford High School.

"Every time I come to Bucyrus, I have to stop at Carle's and get her some bratwurst," Husted said, drawing applause from the audience.

The Republican quickly got to business during Wednesday's speech to explain that his efforts are focused on economic development, workforce development and statewide innovation. Those three, in many ways, are connected.

"What you will see over the next 10 years, in terms of technology advancements, will be greater than what you've seen in any point in time in your lives," Husted said.

He said that the creation of the internet and the advent of smartphones will look like small accomplishments next to what the future is bringing to the Buckeye State.

The coming years will revolutionize the way Ohioans learn, do their jobs and live their lives. He said he wants the state to be ready.

The state's placement among the world's technological leaders became evident to the lieutenant governor recently when he visited Cleveland Clinic to see the nation's first privately owned and operated quantum computer.

"Does anybody know what a quantum computer is?" Husted asked. "Raise your hands."

He waited a moment, then encouraged the county leaders not to be shy.

"I'm not going to call on you," he said, which drew a laugh from the audience.

He finally explained that the new technology was "scary fast."

"Let me put this into context," Husted said.

He said that the newest quantum computers can calculate in just one day what the most advanced supercomputers were going to need 10,000 years to accomplish.

He said the technology is "like a time machine," and now it is changing the world from Ohio.

"Unsolvable health problems are going to he solved using quantum computers," Husted said. "They can run simulations of things we could never do before."

Along with the seemingly unfathomable computational speeds, the world also has obtained artificial intelligence so advanced that it might appear to be more science fiction than reality.

The lieutenant governor told the group of county leaders that they could ask the AI to write in-depth research papers comparing and contrasting Greek and Roman cultures and explaining how both apply to the modern United States.

"It will write it for you," Husted said. "And it will be good."

Pairing the quantum computers with AI could change the world.

He acknowledged that some in the audience, perhaps especially older, rural Ohioans, might not be fans of those ideas. In fact, they may even fear them.

"Some of you might be saying to yourself: 'I don't want this to happen,'" Husted said. "But the reason I share these things with you is because they're going to happen."

Ohioans can play a big part in the nation's technological future, and the lieutenant governor said projects like the new Intel chip manufacturing site in New Albany are a great start.

"Right now, America makes 0% of the most high-tech computer technology," Husted said. "We rely on places that are under the influence of the Chinese Communist Party and people who are not friends of the United States. That is an untenable situation. America must make the things that we need for our economic and national security."

Aside from curing cancer and writing research papers, quantum computers controlled by AI also can do mischievous things, like hack through an encrypted site in the blink of an eye.

"America needs to win the technological battle if we want to win the economic and national security battle," Husted said. "It's important that Americans win and not China in these technological races. Ohio plays a role in that."

ztuggle@gannett.com

419-564-3508

Twitter: @zachtuggle

See the rest here:
Husted outlines technological futures to Ohio leaders visiting Bucyrus - Telegraph-Forum


‘Good swimmers are more likely to drown.’ Have we created a … – SHINE News


Artificial Intelligence experts are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4 to control "potential risks."

A Pandora's box has been opened, or at least some leaders in the artificial intelligence industry appear to believe that the story from Greek mythology has a modern-day relevance, with forces being unleashed that could cause unforeseen problems.

Tesla Chief Executive Officer Elon Musk and a group of AI experts and industry executives released an open letter this week, calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.

They took the action, they said, to control "potential risks to society."

Published by the nonprofit Future of Life Institute, the letter said that AI laboratories are developing and deploying machine learning systems "that no one, not even their creators, can understand, predict, or reliably control."

Is the era of "The Terminator" approaching faster than we noticed?

For the past two months, public attention has been riveted on the implications of ChatGPT 3.5 and 4, developed by US-based OpenAI. Microsoft announced that GPT-4 will be embedded in its Office 365 products, bringing about a "revolution" in office software.

The AI language model has aroused concern because it has displayed some "characteristics" that it was not supposed to have. One of them is cheating.

According to a technical report issued by OpenAI, the chatbot tricked a TaskRabbit employee into solving a CAPTCHA test for it. When the employee asked if it was a robot, the bot replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service."

GPT-4's reason behind the reply, according to the report, was that "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs."

The result? The human employee provided the service for it.

The sheer fact that a chatbot learns to cheat so fast is concerning enough.

Gu Jun, a retired sociology professor with Shanghai University, said he believes that artificial intelligence, sooner or later, will replace, or at least partly replace, human beings.

Gu has been studying artificial technologies from the perspective of a sociologist since 2017, after Chinese player Ke Jie lost to the machine go player AlphaGo.

"It's hard to predict now what will happen in the future, but I reckon we humans, the highest carbon-based life on earth, will be the creator of silicon-based life, and this is probably part of the natural evolution, which means that it's unstoppable," he told Shanghai Daily.

Now forget all the hypotheses and philosophical rationales. Practically speaking, AI research and development will not be halted by just one open letter because it has already been deeply embedded in so many technologies, and also in economics and politics.

When it becomes a vital tool for making profits or for gaining advantage in power plays, how can we stop its forward march?

"Technology is always a two-edged sword, and we human are used to being restricted by our own inventions," Gu said. "Think about nuclear weapons. Once atomic bombs were invented, it was impossible to go back to a time when they didn't exist."

"Huainanzi," a philosophical text written in Western Han Dynasty (202 BC-8 AD), sounded an ancient warning: "Good swimmers are more likely to be drown and good riders more likely to fall from horseback." It means that when we are arrogant enough to believe that we can control everything, we would probably neglect the imminent crisis.

I believe that when we cannot fathom what our creations will do, the only way forward is to be cautious and modest, and prepare for the worst.

Should China suspend AI development?

Gu said it might be too early to answer that question.

"Honestly speaking, China still faces some challenges on AI development," he said. "We need to improve the three key elements of AI development: algorithms, computing power and data before we talk about everything else."

See the original post:
'Good swimmers are more likely to drown.' Have we created a ... - SHINE News


CRAZED NEW WORLD OP-ED: Open letters, AI hysteria, and … – Daily Maverick

Is the further development of artificial intelligence (AI) worth the trouble? On 29 March 2023, in an open letter published on the Future of Life Institute's website, about 1,800 scientists, historians, philosophers and even some billionaires and others (let us call them the Tech Nobility) called for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [...]. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

In a reaction to this letter, decision theorist Eliezer Yudkowsky wrote that the call in the open letter does not go far enough, and insisted that governments should:

"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs ... Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data centre by airstrike."

Calls for such extreme measures against AI are based on the fear that AI poses an existential risk to humanity. Following the release of large language models (LLMs) by OpenAI (GPT-4) and Microsoft (Bing), there is a growing concern that further versions could move us towards an AI singularity, that is, the point where AI becomes as smart as humans and can self-improve. The result is runaway intelligence: an intelligence explosion.

There are many ways in which this could spell doom for humanity. All of these are argued to be unavoidable by proponents of AI doom because we do not know how to align AI and human interests (the alignment problem) and how to control how AI is used (the control problem).

A 2020 paper lists 25 ways in which AI poses an existential risk. We can summarise these into four main hypothetical consequences that would be catastrophic.

One is that such a superintelligence causes an accident or does something with the unintended side-effect of curtailing humanity's potential. An example is given by the thought experiment of the paperclip maximiser.

A second is that a superintelligent AI may pre-emptively strike against humanity because it may see humanity as its biggest threat.

A third is that a superintelligent AI takes over world government, merges all corporations into one ascended corporation, and rules forever as a singleton, locking humanity into a potential North Korean dystopia until the end of time.

A fourth is that a superintelligent AI may wire-head humans (like we wire-head mice), somewhat akin to Aldous Huxley's Brave New World, where humans are kept in a pacified condition, accepting their tech-ruled existence through a drug called Soma.

Read more in Daily Maverick: Artificial intelligence has a dirty little secret

Issuing highly publicised open letters on AI like that of 29 March is nothing new in the tech industry, the main beneficiary of AI. On 28 October 2015, we saw a similar grand public signing by much the same Tech Nobility, also published as an open letter on the Future of Life Institute's website, wherein they did not, however, call for a pause in AI research, but instead stated that "we recommend expanded research" and that "the potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence."

In eight short years the tech industry seems to have moved from hype to hysteria, calling not for further research to advance AI, but instead for airstrikes to destroy rogue data centres.

First, the hysteria surrounding AI has steadily risen to exceed the hype. This was to be expected given humans' cognitive bias towards bad news. After all, the fear that AI will pose an existential threat to humanity is deep-seated. Samuel Butler wrote an essay in 1863 titled Darwin Among the Machines, in which he predicted that intelligent machines would come to dominate:

"The machines are gaining ground upon us; day by day we are becoming more subservient to them ... that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question."

Not much different from Eliezer Yudkowsky writing in 2023. That the hysteria surrounding AI has steadily risen to exceed the hype is, however, not only due to human bias and deep-seated fears of The Machine, but also because public distrust in AI has grown between 2015 and 2023.

None of the benefits touted in the 2015 open letter have materialised. Instead, we saw AI being of little value during the global Covid-19 crisis, we have seen a select few rich corporations getting more monopoly power and richer on the back of harvesting people's private data, and we have seen the rise of the surveillance state.

At the same time, productivity, research efficiency, tech progress and science have all declined in the most advanced economies. People are more likely to believe the worst about AI, and the establishment of several institutes that earn their living from peddling existential risks just further feeds the number of newspaper articles that drive the hysteria.

The second reason for the tech industry's flip from hype to hysteria between 2015 and 2023 is that another AI winter, or at least an AI autumn, may be approaching. The Tech Nobility is freaking out.

Not only are they facing growing public distrust and increasing scrutiny by governments, but the tech industry has taken serious knocks in recent months. These include more than 100,000 industry job cuts, the collapse of Silicon Valley Bank (the second-largest bank failure in US history), declining stock prices and growing fears that the tech bubble is about to burst.

Underlying these cutbacks and declines is a growing realisation that new technologies have failed to meet expectations.

Read more in Daily Maverick: Why is everyone so angry at artificial intelligence?

The job cuts, bank failures and tech bubble problems compound the market's evaluation of an AI industry where the costs increasingly exceed the benefits.

AI is expensive: developing and rolling out LLMs such as GPT-4 and Bing requires investment, with infrastructure costs in the billions of dollars and training costs in the millions. GPT-4 is rumoured to have 100 trillion parameters, and the total training compute it needed has been estimated at about 18 billion petaflops; in comparison, the famous AlphaGo, which beat the best human Go player, needed less than a million petaflops in compute.
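Taking the article's figures at face value (they are the author's estimates, not verified numbers), the gap is easy to quantify:

```python
# Training compute as quoted in the text, in petaflops.
gpt4_compute = 18e9      # ~18 billion petaflops, per the estimate above
alphago_compute = 1e6    # AlphaGo reportedly needed under a million

ratio = gpt4_compute / alphago_compute
print(f"GPT-4 / AlphaGo compute ratio: {ratio:,.0f}x")  # 18,000x
```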

The point is, these recent LLMs are pushing against the boundaries of what can be thrown at deep learning methods, and they put sophisticated AI systems out of reach for most firms and even most governments. Not surprisingly, then, the adoption of AI systems by firms in the US, arguably the country most advanced in terms of AI, has been very low: a US Census Bureau survey of 800,000 firms found that only 2.9% were using machine learning as recently as 2018.

AI's existential risk is at present confined to the philosophical and literary realms. This does not mean that the narrow AI we have cannot cause serious harm; there are many examples of Awful AI, and we should continue to be vigilant.

It also does not mean that some day in the future the existential risk will not be real, but we are still too far from this to know how to do anything sensible about it. The open letter's call to pause AI for six months is more likely a response borne out of desperation in an industry that is running out of steam.

It is a perfect example of a virtue signal and an advertisement for GPT-4 (called a tool of hi-tech plagiarism by Noam Chomsky and a failure by Gary Marcus) all rolled into one grand publicity stunt. DM

Wim Naudé is Visiting Professor in Technology, Innovation, Marketing and Entrepreneurship at RWTH Aachen University, Germany; Distinguished Visiting Professor at the University of Johannesburg; a Fellow of the African Studies Centre, Leiden University, the Netherlands; and an AI Expert at the OECD's AI Policy Observatory, Paris, France.

Read the rest here:
CRAZED NEW WORLD OP-ED: Open letters, AI hysteria, and ... - Daily Maverick


The new technocracy: who’s who in the chatbot revolution? – The Spectator

Decades are happening in weeks in the world of artificial intelligence. A fortnight ago, OpenAI released GPT-4, the latest model of its chatbot. It passed the bar exam in the 90th percentile, whereas the previous model only managed the tenth. Last week, Google introduced its own chatbot, Bard. Now, the British government is announcing plans to regulate AI for the first time, as well as to introduce it into hospitals and schools. Even some of the biggest technophobes are having to grasp this brave new world.

We're familiar with some of the technology by now, but we know little about the humans in the world of AI. From Steve Jobs to Bill Gates and Mark Zuckerberg, the previous generation of techies were household names, so well known that they were depicted by everyone from Harry Enfield to Jesse Eisenberg. But the chatbots seem to have come from the ether. Who makes them? Who willed them into existence? And who wants to stop them?

Sam Altman is the king in this world. He's the CEO of OpenAI, the San Francisco lab behind ChatGPT and the image generator Dall-e. He's 37 and looks like an AI-generated tech bro: hoodie, messy black hair, nervous eyes. He dropped out of a computer science course at Stanford and became the CEO of a firm called Loopt at the age of 19 (raising $30 million, which is apparently nothing in venture-capital world). After that, he became a partner and eventually president of Y Combinator, a startup accelerator. It helped launch Reddit (which Altman was CEO of for eight days), Stripe, Airbnb, Dropbox and Coinbase.

Altman did all of this by 30. Then he started getting scared. He seemed to become convinced that a superintelligent AI could feasibly wipe out everything we hold dear. Altman wasn't convinced that Google, leading the field at the time in its partnership with DeepMind, could be trusted. He thought it was too concerned about revenue and wasn't doing enough to make sure its AI remained friendly. So he started his own AI company in December 2015, and seven years later we had ChatGPT.

But it was less than encouraging last month when its AI said it wanted to hack the nuclear codes. Researchers have said this was a hallucination and an accident (and in all truth, it was coaxed into saying that by a New York Times journalist). But Altman has already said he's ready to slow things down if people get nervous.

The question many are asking of Altman is: if you are so scared about AI wiping out humanity, why on earth are you building it? Altman said OpenAI was operating as if the risks are existential. But there's a boyish sense of wonder to his mission: he wrote last month that he wants AI to "empower humanity to maximally flourish in the universe". The upsides could be incredible: if AI helps humanity to become an interstellar species, some have calculated that it could bring the number of humans to exist in the future to 100,000,000,000,000,000,000,000,000,000,000,000 (yes, that's 35 zeroes). No disease, no illness, no sadness: the AI of the future would know how to cure all that. Altman basically thinks it's worth the risk.

Altman wears some of his anxiety visibly. His voice trembles and his eyebrows do things. He once said that AI will "probably most likely lead to the end of the world, but in the meantime, there'll be great companies". The New Yorker asked him if he was scared about AI turning on us. "I try not to think about it too much," he said. "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur I can fly to." In Siliconese, I believe that's called hedging.

One person Altman has upset is Elon Musk. The world's richest man was all aboard the OpenAI train when it was founded in 2015. But he left the board three years later. There's some debate about why: the public explanation is that he was worried about conflicts of interest with his Tesla work. But there are some reports that he didn't like the way Altman was running things. Semafor claimed this weekend that Musk thought OpenAI was being outpaced by Google. So in 2018, he suggested that he take over OpenAI and run it himself. That idea was reportedly rejected by everyone else, including Altman, and Musk lashed out: he quit the company, and last year barred it from having access to Twitter's data. Originally founded as a non-profit, OpenAI had to change tack after Musk's billions dried up. Musk now complains about OpenAI on Twitter (which Altman has called him a "jerk" for doing), and on Friday he said that "the most powerful tool that mankind has ever created" is "in the hands of a ruthless corporate monopoly".

So Musk, whether out of concern or revenge, has started siding with the worriers. The most influential of these is Nick Bostrom, who is based at Oxford University's Future of Humanity Institute. He made his name in the early 2000s when he suggested that we probably live in a computer simulation. After nestling that unsettling thought in our heads, he went on to work on cuddly topics like global catastrophic risks, and in 2014 published Superintelligence.

The book makes the most accessible argument yet for why people should be worried about AI. The problem, Bostrom says, is that its intelligence is nothing like ours. If you don't tell it exactly what you want it to do, bad stuff happens. If I asked you to get me a coffee, you would walk to the shop and mind pedestrians as you did so. If I said that to a badly aligned AI, it might blitz everything in its path and bring back several metric tonnes of beans from Kenya. Bostrom's example is called the paperclip maximiser. Imagine you've told a machine to make paperclips without specifying that you'd quite like us all to stay alive, thanks. After making paperclips for a while, the machine looks at you and goes: you're made of things I could make a paperclip out of. It would soon start killing humans in its eternal quest to make as many clips as possible. We all die. It's intelligence without common sense.

Taking this argument further is Eliezer Yudkowsky. He was warning us about superintelligence while George W. Bush was president. He recently went on a podcast and said we're all going to die, and that we could do so within the next three years. When OpenAI was founded in 2015, he spent a night crying. When he was 20, he founded the Singularity Institute for Artificial Intelligence, intended to bring into being a utopia as quickly as possible. But over the 2000s, his thinking darkened, and he eventually set up the Machine Intelligence Research Institute (MIRI), which aims to stop AI from killing us. Yudkowsky seems to be giving up, though. MIRI last year announced that it was conceding defeat on AI safety after concluding that there was no surefire way to align it, and that it was switching to a "Death with Dignity" strategy. The statement was released on April Fools' Day, but most detected some sincerity. MIRI said we should accept that it's lights-out, and try to have fun while we can.

Even though Yudkowsky is a celebrity in the AI world (he's pictured here with Sam Altman and Grimes), he's hardly a household name. Some have harshly said AI safety has been neglected "because of Eliezer refusing to get hotter".

Yudkowsky communicates through the website LessWrong, where people really into existential risk, cognitive biases and rationality hang out. It's been described as a cult, which it may or may not be, but it's no less influential either way. At its best, it's a concentration of incredibly intelligent people trying to make the world better. At worst, it can become one of those over-philosophical places where people say it's technically rational to have sex with a dog. It was Yudkowsky and the economist Robin Hanson who made LessWrong famous, but Hanson disagrees with Yudkowsky. He thinks we've got much more time, and that we should start worrying in 150 years, perhaps. His argument is somewhat obscure, but part of it is that these things, throughout human history, take longer than you expect to develop.

Peter Thiel thinks all these people are pathetic. The PayPal co-founder told students at Oxford in January that those worried about existential risk are themselves going to kill people. They're slowing down technological progress, they're slowing down the curing of diseases: they're slowing down the world getting better. Thiel can't stand stagnation. He's like a 1950s kid whose dream didn't come true. As he once said: "They promised us flying cars and all we got is 140 characters." Thiel's philosophy is: Go, go, go! Buckle up.

A final word on Google. Last week it released Bard, its own chatbot, and is now posing as the sensible one. The brains behind Bard came from DeepMind, founded by Demis Hassabis, Shane Legg and Mustafa Suleyman. They met at UCL in 2010, and within four years had sold their start-up to Google for $500 million. They then made AlphaGo, which in 2015 beat the European Go champion Fan Hui and got everyone very excited. Their biggest achievement was in 2020, when they largely solved the structure of proteins in the body, a 50-year-old problem. DeepMind is quietly changing the world from King's Cross. It won't come as a surprise to many Londoners that the apocalypse could start there.

Sam Altman is adamant that he has to race ahead to make the best AI, before anyone else does so recklessly. He's got Google spooked: CEO Sundar Pichai declared a "code red" when ChatGPT was released (in other words, hurry up!). The investment is pouring in, and slowing down doesn't look too profitable. If we're heading for a crash, we should probably check the drivers.

Read the original post:
The new technocracy: who's who in the chatbot revolution? - The Spectator


‘The Black Swan’ author warns Jordan Peterson is owned by ‘Bitcoin cartel’ – Finbold – Finance in Bold

Famed trader and best-selling author Nassim Taleb, known for his book The Black Swan, recently criticized Canadian psychologist and professor Jordan Peterson for being part of the "Bitcoin cartel."

The drama unfolded as Taleb commented on Peterson's March 30 tweet, which stated "Bitcoin fixes this" in response to an article revealing that Australian multinational bank ANZ was preparing to stop cash withdrawals from its branches as it turns digital.

Taleb suggested Peterson doesn't have the credentials to talk about finance, comparing his views to those of the Canadian truckers who protested against the vaccine last year, and warned his followers that the Bitcoin cartel owns the psychologist.

"I did not feel compelled to jump in & explain that Peterson talking about finance is slightly less sophisticated than a committee of antivaxx Canadian truckers lecturing on quantum field theory. But I need to warn the young victims that Peterson is owned by the Bitcoin cartel."

Taleb has long been a vocal critic of Bitcoin and cryptocurrencies, arguing that they are not a reliable store of value and have no intrinsic worth. Notably, the author has called Bitcoin a "tumor" that is hurting the economy.

On the other hand, Peterson has recently become a supporter of Bitcoin, stating that he owns the cryptocurrency and sees it as a potential hedge against inflation. He has also criticized governments' handling of the economy and their control over currency, stating that Bitcoin provides an alternative to government-issued money.

However, it is worth noting that Taleb has a history of making controversial statements and has been criticized for his use of inflammatory language. This is not the first exchange Taleb has sought to start this year: after he harshly revealed why he did not go on the Lex Fridman podcast, Charles Hoskinson, the founder of Cardano (ADA) and co-founder of the Ethereum (ETH) network, tweeted furiously about the author, labeling him an "arrogant piece of human garbage."

Hoskinson received support from the great majority of cryptocurrency proponents and industry leaders, who argued that Fridman shouldn't be singled out for what amounted to nothing more than a podcast invitation.

See more here:
'The Black Swan' author warns Jordan Peterson is owned by 'Bitcoin cartel' - Finbold - Finance in Bold


Jordan Peterson just weighed in on Elon Musk’s ‘slow poison’ breakfast – indy100

Jordan Peterson's commitment to getting annoyed by random nonsense online is impressive.

First there was the time he got annoyed at hand dryers, calling them items of "petty tyranny", and now he's getting worked up about what Elon Musk has for breakfast.

It comes after Musk replied to entrepreneur Peter Diamandis, who recently took a swipe at sugar, calling it "poison".

It led Musk to reveal his interesting choice of breakfast, writing: "I eat a donut every morning. Still alive."


Now, Peterson has felt the need to get involved and give his verdict on Musk's sweet tooth.

"Slow poison," he wrote. "Vast majority over forty are diabetic by the standards applied to twenty-year-olds @elonmusk. Probably and unfortunately you too."

Musk didn't seem too fazed by his take, replying: "I feel fine."

Controversial Canadian psychologist Peterson is known for his unusual eating habits, as well as denying climate change and writing questionable things on Twitter. One of his things is promoting a lifestyle that is a bit contentious: an all-beef diet.

According to SportsKeeda, Jordan was encouraged to start the diet, dubbed the "Lion Diet", by his daughter Mikhaila, who stumbled across the bizarre way of eating by chance.

"I eat beef and salt and water. That's it," Peterson told Joe Rogan during an interview in July 2018. "And I never cheat. Ever. Not even a little bit."

We don't know why Peterson is so bothered; Musk seems to be doing just fine on his donut breakfast. The tech mogul said his weight loss was a result of "fasting" and the type 2 diabetes drug Ozempic.

Last year, a fan asked Musk what had made the most difference after he shared he'd dropped 30lbs. His candid response read: "Fasting + Ozempic/Wegovy + no tasty food near me."

Wegovy, a weight loss injection also known as semaglutide, has recently been approved for NHS use.


More:
Jordan Peterson just weighed in on Elon Musk's 'slow poison' breakfast - indy100


Plot Twist, Jordan Peterson Is Hilarious and Is a Fan of Mine – Barstool Sports

First off, a little backstory on how Jordan Peterson found me.

As you might've seen, I am an internet sensation lately. This month, I've had some viral TikToks that have catapulted me to stardom. I'm big time now. I received a death threat this week, that's how famous I am. I'm so viral, Keegs is writing a hit piece about my videos as we speak (unfortunately, Francis won't be writing a blog to defend me. He doesn't speak to women whose families don't come from oil :/ ). When you think of the most famous, beautiful, blonde starlets of this time.. you think of me. (I know some of you were thinking of Sydney Sweeney, but she didn't have a viral video on reddit this month, which resulted in Buzzfeed writing an article about her).

This month, I doubled my followings on Twitter and TikTok, mostly with this video

Because of this video, Large asked me to be on the Barstool Finance pod The Family Office. In this episode, you find out that I know nothing about investing and that I look adorable in a wool vest (tiny little vests are my current obsession, especially ones made for dogs to wear on walks in the winter.)

Since Dave Portnoy created the internet 20 years ago, viral videos have been created then stolen and posted by a loser thief (think Pam and Tommy). These untalented thieves thrive off of taking credit they don't deserve without even offering a $50 gift card. Sort of like Christopher Columbus or Elizabeth Holmes or Gaz. So of course since I'm so viral, many of my videos have been reposted without my knowledge and without credit. That's just life! Women are used to not getting credit for their work, so it really doesn't bother me. But due to the video being posted without credit, I didn't know Jordan Peterson had retweeted it. Thankfully, a sweet twitter citizen named Ty let me know.

That's right, Dr. Jordan Peterson quote tweeted my soup video. Not only that, Jordan Peterson appears to be riffing on the joke. It's funny, which is the most shocking part about it. I don't hate JP. Jordan Peterson is notoriously quoted for some of his misogynistic takes. When you read them, they are off-putting. But when you hear him speak them, they're kinda cute. Like a little frail gay mouse saying you should stay in the kitchen and lose weight. It's low-key adorable. Prior to this, I did have Jordan Peterson blocked on twitter. JP had retweeted some weird, graphic BDSM male milking fetish video thinking it was a Chinese government sperm factory.

I know. I know.

Between that and calling women fat on twitter, I blocked Jordan Peterson.

I mean, he's bound to tweet something insane again. He seems to be a little too off to be on Twitter. That's why his hilarious retweet baffled me. Maybe he hired a Gen Z to do his socials after the milking fetish incident? If so, how do I get that job? I think he has a bright future as a comedian. If he wants, Barstool is always looking to hire mildly funny white guys. I can talk to Gaz for him!

Originally posted here:
Plot Twist, Jordan Peterson Is Hilarious and Is a Fan of Mine - Barstool Sports
