The Internet of Money review: 7 years later – CoinGeek

Backstory

As a brand new bitcoiner, circa 2012-2015, I recall hearing the mellifluous cadence of Andreas Antonopoulos speaking in what I presumed were distinguished halls of higher learning. His words hopped and flowed with the timbre of an academic technologist but with an undercurrent of fire in his Bitcoin advocacy. His speeches effortlessly captivated me with his seemingly profound understanding of this complicated new technology.

Years later, the book The Internet of Money collectivized and canonized these speeches to the written word. I picked it up in 2016 and gave away a few copies as curios and gifts to my closest friends and my monetary activist allies of the era.

As I navigated the labyrinth of essays and talks, the depth of Antonopoulos' insights into the world of Bitcoin was nothing short of breathtaking.

From the very first page, The Internet of Money enveloped me in an intoxicating blend of revolutionary ideas and audacious predictions, challenging long-held beliefs about the nature of money and its place in our interconnected world. As I progressed through the book, the tension between established financial systems and the untamed frontier of bitcoin became palpable. I was excited that finally there was more than criticism of the fiat system, but also a real, workable solution to the problems that activists such as myself could get behind, build upon and create a true competitive marketplace for money!

Armed with incisive wit and unapologetic candor, he endeavored to inspire readers to ponder the significance of this burgeoning technology, ultimately leaving them with a sense of wonder for the transformative power of bitcoin and the boundless possibilities it was believed to unlock in an increasingly interconnected world. Regrettably, time has demonstrated the fallibility of his once-illustrious predictions and cast a somber shadow over the once-promising future of bitcoin.

Cracks forming

Despite the initial promise and enthusiasm, it is with a heavy heart that one must acknowledge the author's inadvertent role in undermining his own advocacy. In his fervor to champion the cause of bitcoin, Antonopoulos may have contributed to the very pitfalls that have plagued its progress. By presenting a perhaps overly optimistic and idealistic view of Bitcoin as an asset (rather than as a network or a technology), he may have inadvertently fueled a feverish speculative bubble and fostered an environment ripe for opportunistic scammers and malicious actors, at the expense of any epistemologically self-conscious discussion of Bitcoin.

As the once-luminous star of BTC dims, the challenges it faces become more apparent, and the schisms in Bitcoin culture deepen, it's hard not to look back with disappointment and irritation at how things played out under the narrative stewardship of folks like Antonopoulos.

The shatter

While Antonopoulos feigned support for sound money and the unstoppable nature of Bitcoin to fulfill the promise of a frictionless digital gold fundamentally intertwined with the internet, he reneged heartily when pressures mounted in 2017 to take a stand for Bitcoin's fundamentals.

He, instead, chose to support a progressive view that Bitcoin was a soft protocol, open to change according to whatever opinions become popular. Popularity, in this case, is determined by low-cost node endpoints that vote anonymously for changes that cannot be resisted once popularity reaches a threshold. In short, he believes in a sort of democratically governed technocracy for Bitcoin rather than an unchanging, sound money system.

Re-reading chapters of the book is tragic; heartbreaking. We really had a chance to change everything with Bitcoin, and Antonopoulos' lack of fortitude contributed to making it culturally acceptable to treat bitcoin as a Hegelian popularity contest rather than a bullet-proof internet of money.

Is the book all bad? No. Much of it is good, especially in context of the era from which the content was created, but so much has been tainted by opportunism and schisms, and in hindsight, it is hard not to treat the work as one of a few bricks in a foundation made of sand.

The Internet of Money: 5/10. Would not recommend.

Today, you can find him writing about Ethereum and Lightning Network and testifying as an expert witness against Satoshi Nakamoto while principled bitcoiners work tirelessly and steadfastly in the BSV ecosystem.

George Gilder: Internet security and the scandal of money

New to Bitcoin? Check out CoinGeek's Bitcoin for Beginners section, the ultimate resource guide to learn more about Bitcoin, as originally envisioned by Satoshi Nakamoto, and blockchain.

See the rest here:

The Internet of Money review: 7 years later - CoinGeek

Read More..

The Countdown Begins: Public Nominations for the 14 Categories of the Inaugural NFTY Awards Close on April – EIN News

Official Poster The NFTYS

Public Nominations for the Inaugural NFTY Awards, Executive Produced by Web3 Entrepreneur Chris J. Snook and E! Co-founder Larry Namer, Close on April 15, 2023

Chris J. Snook

To be eligible for NFTY Award consideration, nominated candidates must demonstrate one or more of the following: inspire, enable, validate, empower, or unlock the potential, use, or education of people on the vast array of web3 solutions, and the necessity for web3 adoption through their work and project.

The nominees who make the final shortlist will be announced on May 15, 2023. Winners will be unveiled during the live broadcast to be held at The NIKO Theatre at Worre Studios in Las Vegas, Nevada at 8 p.m. EST on June 14, 2023. The NFTYS are executive produced by web3 entrepreneur Chris J. Snook and E! Entertainment Television co-founder Larry Namer.

"What's unique about The NFTYS is that qualifying nominees can compete alongside some of their favorite celebrities and their web3 projects," comments Snook. "Some of our famous nominees who have received public nominations for certain categories in the early returns include Beeple, Steve Aoki, World of Women, Project 17, Punk6529, Gary Vaynerchuk, Michael Saylor, Raoul Pal, Caitlin Long, and Bored & Hungry, to name a few."

The NFTYS' 14 categories are:

The Nakamoto NFTY - Innovator of the Year: Honoring exceptional innovators driving ground-breaking advancements in technology in the name of Satoshi Nakamoto, the creator of Bitcoin.

The Unsung Trailblazer NFTY: Celebrating hidden pioneers, past and present, shaping technology's future with impactful contributions.

The Cultural Tastemaker of the Year NFTY: Recognizing influential web3 trendsetters shaping the cultural narrative through creative excellence that inspires mass adoption.

Web3 Fashion Innovator of the Year NFTY: Celebrating visionary designers revolutionizing fashion in the decentralized web3 space.

Bridge Builder NFTY: An empowering leader whose work is connecting web3 innovations to tangible, global change and movements.

Most Influential Celebrity NFTY: Acknowledging prominent celebrities who have their own projects and are champions of web3, driving adoption and mainstream awareness.

Protocol of the Year NFTY: Celebrating transformative blockchain protocols driving innovation and reshaping industries with their network effects and technology.

Visionary Immersive Reality-Artist of the Year NFTY ("aka The VIRA"): Recognizing trailblazing visual and mixed-reality artists merging digital and physical realms, captivating audiences.

s3rv3nt Leader of the Year NFTY: Honoring influential policy shapers and legislators driving progressive change for a better global society through web3 technologies, blockchain, and DeFi.

Open Source App of the Year NFTY: Highlighting exceptional web3 applications that are transforming industries, redefining user experiences, and creating orders-of-magnitude gains in productivity or data security.

Thought Leader of the Year NFTY: Celebrating visionary thinkers, researchers, and communicators inspiring change and shaping and advancing web3 industry discourse with insight.

Native Web3 IP with IRL Biz Collaboration of the Year NFTY: Celebrating the most innovative blend of new (native) web3 or NFT intellectual property licensed to create a new in-real-life business or product for customers to love.

NFT of the Year in Sports, Entertainment, or Music NFTY: Celebrating visionary use cases of non-fungible tokens in the worlds of sports, media, and music for unlocking new business models for fan engagement and community commerce.

The People's Project NFTY: People's choice for the most enthusiastic web3 project community by sheer number of unique nominations received.

People and projects qualifying for more than one of these categories can be nominated by the public, their following, or themselves as many times as they wish for any of the categories, and in particular The People's Project NFTY, with only unique votes counted.

"The NFTY is an iconic reflection of the historical role that culture has always played in moving society into its next revolution or evolution," comments Snook about the award design. "The award is a representation of the bold and bright future that lies ahead because of the creatives and innovators that earn her recognition."

"Our goal with the presentation of the NFTY Award is for it to become a true career highlight for its winners, and a genuine keepsake befitting the honor it bestows on its recipients," adds Namer.

Each NFTY will be simultaneously minted as a digital twin on the XRP Ledger when awarded to the recipient, as part of the support received from the Ripple Creator Fund, with an enhanced and vibrantly animated 3D model of itself and each winner's metadata. This will be stored on the Arweave PermaWeb to ensure an immutable historical record of the awards.

Brand partners seeking to be part of The NFTYS can request the sponsorship deck by emailing: sponsorships@nftys.org

ABOUT THE ACADEMY OF DIGITAL ART, SCIENCES, & CULTURE (ADASC) - ADASC is a social benefit diversified DAO advancing the benevolent use of breakthrough technology innovation across Art, Sciences, and Culture. Its membership includes a variety of invite-only and public tiers that form the decentralized and diverse nomination committee of awardees and grants at its annual public celebration and broadcast of The Annual NFTY Awards Gala. For more information about membership to the Academy and nominations for The NFTYS please visit https://nftys.org

Nicole Goesseringer, The NFTYS, nicolekultura@gmail.com. Visit us on social media: Twitter, LinkedIn

View post:

The Countdown Begins: Public Nominations for the 14 Categories of the Inaugural NFTY Awards Close on April - EIN News

Read More..

KPMG and Classiq join forces to offer quantum computing capabilities to enterprise customers – CTech

KPMG's Global Quantum Hub announced on Tuesday a collaboration with Classiq, the Israeli quantum software company, to bring innovative quantum solutions to clients.

Classiq and KPMG have extensive experience of supporting and enabling quantum newcomers and quantum experts. The collaboration will target a range of industry verticals including financial services, automotive, pharma, energy, telco and logistics. The companies' efforts will focus on quantum use-case exploration and quantum capability development.

"By bringing together our expertise in quantum strategy, technology and client processes with Classiq's cutting-edge quantum software platform, we will provide clients with innovative solutions that will help them drive business value through quantum computing," said Troels Steenstrup, Head of KPMG's Global Quantum Hub.

"Classiq is committed to making quantum computing a scalable, accessible and powerful technology for enterprises," said Nir Minerbi, CEO of Classiq. "We are excited to work with KPMG to help organizations adopt quantum technologies and drive real-world impact through the use of quantum computing."

Classiq, which has raised $63 million since its 2020 inception, provides an end-to-end platform for designing, executing, and analyzing quantum software. Built for organizations that want to accelerate their quantum computing programs, Classiq's patented software automatically converts high-level functional models into optimized quantum circuits for most quantum computers and cloud providers.

See the article here:
KPMG and Classiq join forces to offer quantum computing capabilities to enterprise customers - CTech

Read More..

Interpreting the impact of AI large language models on chemistry – Chemistry World

Is AI on the brink of something massive? That's been the buzz over the past several months, thanks to the release of improved large language models (LLMs) such as OpenAI's GPT-4, the successor to ChatGPT. Developed as tools for language processing, these algorithms respond so fluently and naturally that some users become convinced they are conversing with a genuine intelligence. Some researchers have suggested that LLMs go beyond traditional deep-learning AI methods by displaying emergent features of the human mind, such as a theory of mind that attributes other agents with autonomy and motives. Others argue that, for all their impressive capabilities, LLMs remain exercises in finding correlations and are devoid not just of sentience but also of any kind of semantic understanding of the world they purport to be talking about, as revealed, for example, in the way LLMs can still make absurd or illogical mistakes or invent false facts. The dangers were illustrated when Bing's search chatbot Sydney, which incorporated ChatGPT, threatened to kill an Australian researcher and tried to break up the marriage of a New York-based journalist after professing its love.

AI and complexity experts Melanie Mitchell and David Krakauer of the Santa Fe Institute, US, meanwhile, suggest a third possibility: that LLMs do possess a genuine kind of understanding, but one that we don't yet understand ourselves and which is quite distinct from that of the human mind.1

Despite their name, LLMs are not only useful for language. Like other types of deep-learning methods, such as those behind DeepMind's protein-structure algorithm AlphaFold, they mine vast data sets for correlations between variables that, after a period of training, enable them to provide reliable responses to new input prompts. The difference is that LLMs use a neural-network architecture called a transformer, in which the neurons attend more to some of their connections than to others. This feature enhances the ability of LLMs to generate naturalistic text, but it also makes them potentially better able to cope with inputs outside the training set because, some claim, the algorithms deduce some of the underlying conceptual principles and so don't need to be told as much in training.
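For readers who want a concrete sense of what "attending more to some connections than others" means, here is a minimal, self-contained sketch of the scaled dot-product self-attention at the heart of a transformer layer. It is a toy illustration only, with the learned query, key and value projections omitted; it is not code from any of the models discussed in this article.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (sequence_length, model_dim) token embeddings; returns the same shape."""
    d = x.shape[-1]
    # In a real transformer, q, k and v come from learned projection matrices;
    # they are left out here to keep the sketch short.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                    # pairwise attention scores between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ v                               # each output is a weighted mix of the value vectors

tokens = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)                     # -> (4, 8)
```

The attention weights are what let the model emphasize some token-to-token connections over others; stacking many such layers, each with its own learned projections, gives the architecture described above.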

This suggests that LLMs might also do better than conventional deep learning when applied to scientific problems. That's the implication of a recent paper that applied an LLM to the AlphaFold problem of deducing protein structure purely from sequence.2 (I'm reluctant to call it the protein-folding problem, because that's a little different.) AlphaFold's capabilities have been rightly lauded, and there's even some reason to think it can infer some of the features of the underlying energy landscape. But Alexander Rives at Meta AI in New York and his colleagues say that their family of transformer protein language models, collectively called ESM-2, and a model called ESMFold derived from it, do even better. The language models are faster by up to two orders of magnitude, need less training data, and don't rely on collections of so-called multiple sequence alignments: sequences closely related to the target structure. The researchers ran the model on around 617 million protein sequences in the MGnify90 database curated by the European Bioinformatics Institute. More than a third of these yield high-confidence predictions, including some that have no precedent in experimentally determined structures.

The authors claim that these improvements in performance are indeed because such LLMs have better conceptual understanding of the problem. As they put it, the language model "internalises evolutionary patterns linked to structure", which means that it potentially opens up a deep view into the natural diversity of proteins. With around 15 billion parameters in the model, it is not yet easy to extract with any certainty what the internal representations are that feed the improvements in performance: "The inner workings of these networks are largely opaque," say Mitchell and Krakauer. But such a claim, if well supported, makes LLMs much more exciting for doing science, because they might work with or even help reveal the underlying physical principles involved.

There may yet be a way to go, however. When chemists Cayque Monteiro Castro Nascimento and André Silva Pimentel of the Pontifícia Universidade Católica do Rio de Janeiro in Brazil set ChatGPT some basic chemical challenges, such as converting compound names into Smiles chemical representations, the outcomes were mixed. The algorithm correctly identified the symmetry point groups of six out of ten simple molecules and did a fair job of predicting the water solubility of 11 different polymers. But it did not seem to know the difference between alkanes and alkenes, or benzene and cyclohexene. As with language applications, getting good results here might depend partly on posing the right questions: there is now an emerging field of prompt engineering to do this. Then again, asking the right question is surely one of the most important tasks for doing any kind of science.
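For readers unfamiliar with the notation, here is a short illustration of what such a name-to-structure conversion involves. The Smiles strings below encode the molecule pairs ChatGPT reportedly confused; the use of the RDKit library to parse and canonicalise them is purely an assumption for illustration and is not part of the study.

```python
from rdkit import Chem  # assumes RDKit is installed; not used in the study itself

examples = {
    "benzene": "c1ccccc1",        # aromatic six-membered ring
    "cyclohexene": "C1=CCCCC1",   # one double bond in an otherwise saturated ring
    "ethane (an alkane)": "CC",
    "ethene (an alkene)": "C=C",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)      # returns None if the string is not valid Smiles
    canonical = Chem.MolToSmiles(mol)     # canonical form, handy for checking a model's answer
    print(f"{name:22s} {smiles:12s} -> {canonical}")
```

A tool like this makes it easy to verify whether a chatbot's proposed string corresponds to the intended molecule, which is exactly the kind of sanity check the Brazilian study performed by hand.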

Read more here:
Interpreting the impact of AI large language models on chemistry - Chemistry World

Read More..

What CEOs talked about in Q1/2023: Economic uncertainty, layoffs … – IoT Analytics

In short

In Q1/2023, economic uncertainty was at the forefront of CEOs' minds globally and across the board: 61% of all earnings calls discussed inflation, 23% talked about recession, and 38% mentioned interest rates. Even though inflation (-6% in mentions compared to Q4/2022) and recession (-25%) were less prevalent than in the last quarter of 2022, economic uncertainty was still the elephant in the room.

AI was discussed by 17% of CEOs (+41%). The interest in AI and machine learning was sparked by the release of ChatGPT and the discussions around potential use cases. ChatGPT was mentioned by 2.7% of companies in earnings calls in Q1/2023 (compared to no mentions in the previous quarter).

The labor market continues to rise in importance on CEOs' lists of topics. Layoffs were discussed in 6% of all earnings calls (+84% compared to Q4/2022), and 18% discussed wages (+8% from Q4/2022).

Another key emerging theme in Q1/2023 is Industry 4.0 and related topics. Industrial automation was discussed in 0.6% of all earnings calls (+57% compared to Q4/2022). However, the strongest increase in this group was registered for the keyword predictive maintenance, which increased by +136%.

With supply chains slowly improving and supply shortages easing, discussions around shortages in general (-21%) and chip shortages (-35%) more specifically decreased strongly in Q1/2023.

Another theme that reduced in importance in Q1/2023 was the metaverse. The metaverse as a keyword declined by -64% in Q1/2023. Related technologies, such as virtual reality (-24%) and augmented reality (-12%), also declined.

The analysis highlighted in this article presents the results of IoT Analytics' research involving the Q1/2023 earnings calls of ~3,000 US-listed companies. The resulting visualization is an indication of the digital and related topics that CEOs prioritized in Q1/2023. The chart visualizes keyword importance and growth.

X-axis: Keyword importance (i.e., how many companies mentioned the keyword in earnings calls in Q1/2023); the further out the keyword falls on the x-axis, the more often the topic was mentioned.

Y-axis: Keyword growth (i.e., the increase or decrease in mentions from Q4/2022 to Q1/2023); a higher number on the y-axis indicates that the topic gained importance, while a negative number indicates decreased importance.
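To make the two chart axes concrete, here is a minimal sketch of how such metrics can be computed from transcript text. The mini-corpora and keyword list are invented for illustration; IoT Analytics' actual methodology and data are not public in this article.

```python
import re
from typing import List

def mention_share(transcripts: List[str], keyword: str) -> float:
    """Fraction of transcripts mentioning the keyword at least once (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    hits = sum(1 for text in transcripts if pattern.search(text))
    return hits / len(transcripts)

def growth(share_now: float, share_prev: float) -> float:
    """Quarter-over-quarter change in mention share, expressed as a percentage."""
    return (share_now - share_prev) / share_prev * 100

# Invented mini-corpora standing in for Q4/2022 and Q1/2023 earnings-call transcripts.
q4_calls = ["...inflation and interest rates...", "...our AI investments...", "...wages and hiring..."]
q1_calls = ["...AI and ChatGPT...", "...inflation easing...", "...our AI roadmap and predictive maintenance..."]

for kw in ["inflation", "AI"]:
    now, prev = mention_share(q1_calls, kw), mention_share(q4_calls, kw)
    print(f"{kw}: importance {now:.0%}, growth {growth(now, prev):+.0f}%")
```

The importance value corresponds to the x-axis position and the growth value to the y-axis position of a keyword in the chart described above.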

Read our Q4/2022 analysis here.

Three technological themes of interest in Q1/2023 are highlighted in this article in greater depth: AI & ChatGPT, Industry 4.0, and the metaverse.

ChatGPT was released on 30 November 2022. The chatbot, which is built on a large language model by OpenAI, went viral and set records as the fastest-growing consumer application in history, reaching 100 million active users within two months of its launch. ChatGPT sparked discussions around potential use cases of AI in general. 17% of all earnings calls mentioned AI, which constitutes a strong increase of +41% in Q1/2023. More specifically, generative AI was discussed in 2.7% of all earnings calls (an increase of nearly 1,600%) and conversational AI was mentioned in 0.5% of earnings calls.

ChatGPT was mentioned in 2.7% of all earnings calls, up from zero mentions in the last quarter. The CEO of IBM highlights the release of ChatGPT as one of the three key moments for AI in the last decade (after IBM Watson winning Jeopardy and DeepMind winning Go competitions). So far, most discussions that mention ChatGPT occur within earnings calls of technology companies that talk about how they want to market the new technology. Most applications have not reached large-scale adoption by enterprise end-users. But the potential is tremendous, including use cases of generative AI for IoT.

We're excited about ChatGPT being built on Azure and having the traction it has. So, we look to both. There is an investment part to it and there is a commercial partnership. But fundamentally, it's going to be something that's going to drive, I think, innovation and competitive differentiation in every one of the Microsoft solutions by leading in AI.

AI has become a big topic of conversation this year [...] If I think about it, over the last decade, I think there were three moments you can talk about, [...] One, when IBM won Jeopardy with Watson, [...] Second, when DeepMind from Google or Alphabet started winning competitions around, for example, Go [...] and now with OpenAI and ChatGPT.

Generative AI is an extremely exciting new area with so many applications, and one of my goals for Meta is to build on our research to become a leader in generative AI in addition to our leading work in recommendation AI.

If you think about it, early days of AI or training models and things like that, that needed access to big data sets. But I think as time goes on, big data sets have to be very real time to make decisions that are relevant in the moment.

And sometimes, they need to be kept at the edge because you have a lot of video data, for example, at the edge, to make good decisions on consumer behavior or inventory, [...] all these new applications that are coming.

As this analysis clearly shows, many companies discuss how to fight rising inflation, labor costs, and general cost pressure. Many business leaders come to the conclusion that more, not less, investment in digitization is the solution. Therefore, even during times of economic uncertainty, spending on enterprise IoT is expected to stay strong through 2027.

This expectation is also reflected in boardroom discussions: predictive maintenance (+136%), remote monitoring (+62%), and industrial automation (+57%) were discussed much more than in Q4 2022. Vendors and end users emphasize cost cutting through IoT-connected solutions. For example, machine vision enables many Industry 4.0-related use cases, such as flaw detection and operation optimization.

Note: The IoT Analytics team will be at the world's largest industrial fair, Hannover Messe 2023, in mid-April 2023 to discuss industrial transformation and Industry 4.0. Make sure to reach out!

The market is still in the early stages of adopting machine vision. Most companies are still highly reliant on labor, and very few warehouses globally are realizing the full potential of automation.

[...] We saw that in our cost of maintenance last year, our ability to run our equipment at, well, really flat maintenance costs in the face of very, very strong inflationary pressure. So that's a credit to that team and the work they've done from a predictive maintenance standpoint.

I've talked about how inflation has an impact on our cost, but conversely, it can also have a benefit in terms of our business and our value proposition. This is because our monitoring solutions reduce the personnel costs, travel time, emissions, and overall environmental impact required to maintain industrial equipment and critical systems. Therefore, as our customers' costs increase, the return on investment of our services to them also improves. [...] Remote monitoring will always be a significantly less expensive alternative than physical inspection, particularly with higher personnel and fuel costs.

Mentions of the metaverse decreased by 64%. The keyword was mentioned in 0.4% of all earnings calls in Q1 2023. That constitutes a steep decline since its peak in Q1/2022 when a couple of companies jumped on the hype train and announced their own (industrial) metaverse projects, including Microsoft, Siemens, Disney, Nvidia, and Meta. In Q1/2022, about 2% of all earnings calls discussed the metaverse. However, in the last quarter, a lot of related layoffs and announcements show that the trend might be over for now. Microsoft laid off its Metaverse core team of roughly 100 employees in February 2023, and Google announced the end of its Google Glass Enterprise Edition. Some consumer-focused companies, such as Disney, have ended their metaverse projects for now.

While the keyword metaverse might lose steam, related technologies are likely to stick around. The CEO of T-Mobile, Mike Sievert, said the following during an earnings call in October 2022: "No matter what you believe about how the metaverse might or might not unfold, clearly more immersive 3D experiences are on their way."

Expected key product announcements from some leading tech companies are likely to set the tone for the market in the coming years and will play a role in whether the industrial metaverse becomes a reality. For example, in late March 2023, Nvidia and Microsoft announced a partnership to bring industrial metaverse applications to the cloud.

And as we continue to lead in gaming and the metaverse, we launched an innovative collaboration with Fortnite, targeted to next-gen consumers with additional exciting partnerships to come for spring and fall 23.

Metaverse-related developments are early in the lifecycle but overall remain an attractive opportunity for us potentially.

IoT Analytics is a leading global provider of market insights and strategic business intelligence for the Internet of Things (IoT), AI, Cloud, Edge, and Industry 4.0.

Read more:
What CEOs talked about in Q1/2023: Economic uncertainty, layoffs ... - IoT Analytics

Read More..

Air Fryer vs Deep Fat Fryer: fried-and-tested by experts – Homes & Gardens

Whether you're looking to make fast, fluffy fries or quick crispy bacon, both air fryers and deep fat fryers are great options. The deep fat fryer is a classic, delivering on familiar taste and texture. Air fryers are becoming increasingly popular, establishing themselves as a kitchen staple.

After extensive research and testing, our expert team has the professional advice to guide you to an informed decision. We've tested the best air fryers on the market. After comparing these products to a classic deep fat fryer, we can give a fair verdict on which you'll want to have in your kitchen.

When it comes down to it, an air fryer is better than a deep fat fryer. However, there's a lot to consider before you choose. We've compared both appliances on price, space, and taste to tell you what you need to know before you buy.

Today's best air fryer and deep fat fryer deals

(Image credit: GettyImages)

Deep fat fryers heat oil to high temperatures. Once the oil is hot, you plunge your food into the oil, turning it to get an even fry. The cooking itself is quick, but make sure to account for time to heat and cool the oil before and after. Bear in mind that you'll need to stay by the fryer the whole time that your food is cooking, too.

Air fryers work with little to no oil. They are smaller machines which rapidly circulate hot air around a basket container to cook your food. The cooking takes a little longer, but it can produce results with a comparable taste and a similar texture. You won't need to stay near the appliance, because they often have paddles to keep food moving while it cooks. If they don't, the most it'll need is a shake or mix halfway through.

(Image credit: Future / Alex David)

Results

WINNER: It's a tie

To start, air fryers will only cook battered foods if they're frozen, like breaded chicken or fish. If you want to make food with wet batter, like churros, you'll need a deep fat fryer. Having tried frozen food, vegetables, and the benchmark for all frying (fries), we were pleased with the results of both appliances.

Our team felt that deep frying gave the perfect results, as expected. However, Millie, our air fryer expert, preferred the taste of her air fryer's food. She told us that "the air fryer and deep fryer produced food which was shockingly similar in taste. The main difference was, when deep-frying, I wasn't able to season my fries until after I had cooked them, which meant that the air-fried plate was more flavorful." You could say that the flavors were baked right in during the cooking process.

If there isn't much difference in the way of taste, air fryers might win overall, since the food they produce has a lower fat content. However, if you want to make churros, you'll need a deep fat fryer.

(Image credit: GettyImages)

Cleaning up

WINNER: Air fryer

A common grievance with deep fryers is the clean-up process. Oil is tough to clean and, when hot, the fryer will likely spit oil onto your surfaces. Your food will have oil sitting on it after cooking, so you'll want some kitchen roll to soak that up.

Once finished with frying, you'll need to wait for the oil in your deep fat fryer to cool before either disposing of it or storing it somewhere. The most common solution is to let your oil cool, pour it into a nonrecyclable container and either keep it or put it in the garbage. Oil also has a lingering smell, so make sure you ventilate your kitchen.

On the whole, air fryers are easy to clean. They come with removable baskets which are often dishwasher safe. There isn't much oil involved in the process, so it doesn't get as messy as deep frying.

(Image credit: GettyImages)

Cost

WINNER: Deep fat fryer

Air fryers tend to have a higher upfront cost than deep fat fryers. You can buy ovens with integrated air fryers if you are looking for value. We love the Instant Pot Duo Crisp with Ultimate Lid for covering multiple functions in one. Deep fryers tend to be less expensive; however, you'll need to replace the oil in the deep fryer regularly. An example of a comparable deep fat fryer is the Progress EK2969P Compact Deep Fat Fryer. It's small and easy to store.

Instant Pot Duo Crisp with Ultimate Lid

We love this because it's so much more than an air fryer. It performed exceptionally, was easy to clean, and had capacity for everything from roast chickens to mashed potatoes. We loved that this has 11 different functions, so it is an appliance that can do more than air fry.

Progress EK2969P Compact Deep Fat Fryer

Millie, our expert, liked it because it's a competitive size in comparison to air fryers. It's easy to store and doesn't need a huge amount of oil. However, because it is small, the capacity isn't particularly large, so it is really a single-person appliance.

(Image credit: Amazon)

(Image credit: Beautiful Kitchenware)

Size and look

WINNER: Air fryer

As air fryers continue to improve, they are getting smaller, more storable, and much slicker. If you want to pack it into a drawer, the Ninja Max XL Air Fryer is a brilliant option. Equally, our team loved the look of the Beautiful by Drew Barrymore Touchscreen Air Fryer to leave on your countertop. Deep fryers have less of an aesthetic appeal, but you can buy small ones and stow them away in a cupboard.

The Ninja Max XL Air Fryer can crisp up fries in minutes and is perfectly sized for small households, but its plastic finish lacks refinement

Beautiful 6-Quart Digital Air Fryer

The Beautiful 6-Quart Digital Air Fryer stands out thanks to its attractive design, which will look right at home in any contemporary kitchen.

(Image credit: Cosori)

Our verdict

For me, the air fryer is the clear winner. Even though the upfront cost can be a little more, it's easier to store, clean, and use. The taste test really helps the air fryer sit in the top spot for me; it's a healthier option, without compromising on flavor or texture. However, if you are looking to make churros and battered food, you'll need to buy a deep fryer.

(Image credit: Getty Images)

How we test

We like all of our products to have been fried-and-tested, so we make sure that we have personally used an appliance before reviewing it. Where we haven't tried it, we research and read reviews thoroughly.

We were unable to try a deep fryer, but, luckily, Millie had already tested the T-Fal Actifry Genius + (alongside many others) against a Progress EK2969P Compact Deep Fat Fryer.

When testing, Millie was assessing each appliance on a number of factors:

Noise: Lots of noise doesn't always equate to lots of power and can make it hard to do other things around the house.

Speed: Deep fryers are quicker in cooking time, so it was important to look at how long these appliances would take exactly. Fries would take around 25 minutes in the air fryer, but some on our best air fryer list took 12 minutes.

Looks: These are often on your countertops, so we wanted to make sure that we accounted for how these look. In our roundup, we highlighted the less attractive features, if there were any.

Cleaning: Cleaning an air fryer is advertised as easy. Most baskets can go in the dishwasher. This was a key factor for choosing the air fryer over the deep fryer, so we scrutinised cleaning.

For more insight, our review guidelines explain more about our product review process.

For the most part, yes. Our expert tester, Millie Fender, told us that her partner couldn't tell the difference between most of the foods which she tested in the air fryer and deep fryer. However, if you're being picky, and looking for that guilty-pleasure grease, you'll need a deep fat fryer.

That depends on what health means to you. Air fryers are praised for using less oil to cook your food. For example, rather than plunging fries into a deep fat fryer, you will use a tablespoon, at most, of oil in an air fryer. This means that the fat content of your food will be reduced. This is considered to be generally healthier, but that doesn't apply to all people.

Yes, but not homemade batter. You can make bacon, fries, and vegetables, and heat up frozen battered food like chicken or fish. However, the air fryer cannot crisp up a wet batter like a deep fryer can.

In many instances, yes. As above, you can do most of the jobs of a deep fryer with an air fryer, including making competitively crispy and fluffy fries.

Air fryers take longer to cook your food. They can take up to twenty minutes where the deep fryer might only take two. That being said, the clean-up process is much faster with an air fryer.

Yes. If you have a deep pot or pan, some oil, and a slotted spoon you can use your home equipment as a fryer. This is a good option for saving on space too.

Vegetable oil, canola oil, and peanut oil are the most popular options. They have a higher smoke point, so are the best oils to use.

People tend to recommend that you change the oil after eight to ten uses. The color and quality of the oil will affect the taste, so it depends how sensitive you are to flavor.

There are lots of benefits to both appliances and you can use them to make some great meals and snacks. Deep fryers are classic and, in many ways, offer you more versatility in what you can fry. However, lots of air fryers are becoming integrated into other multi-cookers, which offer fantastic value for money. Think about space, taste, and price and you won't go wrong.

Millie Fender is Head of Reviews. She specializes in cooking appliances and also reviews outdoor grills and pizza ovens. She was tasked with reviewing the market leading air fryers, so is our expert on the topic. When she's not putting air fryers, and other appliances, through their paces in our testing kitchen, she'll be using the products at home in her day-to-day life.

More here:
Air Fryer vs Deep Fat Fryer: fried-and-tested by experts - Homes & Gardens

Read More..

Ontario poll shows deep dissatisfaction with Ford government despite high party support – Global News

Premier Doug Ford's government is receiving poor marks for its handling of nearly all of the issues that are top of mind for Ontarians, according to a new public opinion poll.

The Angus Reid survey of 881 Ontario residents found that while a plurality would still vote for the Progressive Conservatives if an election were to be held today, there's an underlying dissatisfaction with how the government is performing.

If an election was held today, 38 per cent of respondents said they would vote for Doug Ford's PC Party and 30 per cent said they would support the Ontario NDP.

Support for the leaderless Ontario Liberals dropped to 20 per cent and the Ontario Greens remained steady at six per cent of the total projected vote.

The poll, however, is less encouraging when it comes to key issues such as cost of living, housing affordability and health care.

A total of 83 per cent of those polled felt the government was doing a poor or very poor job on the issue of housing affordability, with 81 per cent critical of Ontario's record on the cost of living and inflation.

Health care, an area where the Ford government focused a flurry of announcements and new legislation at the start of the year, did not fare much better.

A total of 78 per cent of those polled felt the province had done a poor or very poor job on the health care file, compared to 19 per cent who felt it was good or very good.

Of all the issues polled, Ontarians appear to have the best impression of the Ford government's relationship with Ottawa, with just 47 per cent responding with poor or very poor.

Angus Reid suggested that even the province's reported victories may not be registering much public support.

Its poll found just 37 per cent felt the government was doing a good job on the economy and job creation. This, after the province announced a Volkswagen gigafactory would open in St. Thomas, Ont. in 2027, the polling group said.

© 2023 Global News, a division of Corus Entertainment Inc.

Read the rest here:
Ontario poll shows deep dissatisfaction with Ford government despite high party support - Global News

Read More..

For ChatGPT creator OpenAI, Italy's ban may just be the start of trouble in Europe – Fortune

OpenAI CEO Sam Altman loves Italy, but the affection may not be mutual, at least not when it comes to OpenAI's flagship product, ChatGPT.

Italy temporarily banned ChatGPT last week on the grounds that it violates Europe's strict data privacy law, GDPR. OpenAI immediately complied with the ban, saying it would work with Italian regulators to educate them on how OpenAI's A.I. software is trained and operates.

"We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws)," Altman tweeted, adding that Italy is "one of my favorite countries and I look forward to visiting again soon!"

The comments drew plenty of snark from other Twitter users for their slightly tone-deaf, ugly-American vibes. Meanwhile, Italy's deputy prime minister took the country's data regulator to task, saying the ban seemed excessive. But Rome's decision may be just the start of generative A.I.'s problems in Europe. As this newsletter was preparing to go to press, there were reports Germany was also considering a ban.

Meanwhile, here in the U.K., where I'm based, the data protection regulator followed Italy's ban with a warning that companies could very well fall afoul of Britain's data protection laws too if they weren't careful in how they developed and used generative A.I. The office issued a checklist for companies to use to help ensure they are in compliance with existing laws.

Complying with that checklist may be easier said than done. A number of European legal experts are actively debating whether any of the large foundation models at the core of today's generative A.I. boom, all of which are trained on vast amounts of data scraped from the internet, including in some cases personal information, comply with GDPR.

Elizabeth Renieris, a senior researcher at the Institute for Ethics in AI at the University of Oxford who has written extensively about the challenges of applying existing laws to newly emerging technology such as A.I. and blockchain, wrote on Twitter that she suspected GDPR actions against companies making generative A.I. will be impossible to enforce because data supply chains are now so complex and disjointed that it's "hard to maintain neat delineations between a data subject, controller, and processor (@OpenAI might try to leverage this)." Under GDPR, the privacy and data protection obligations differ significantly based on whether an organization is considered a controller of certain data, or merely a processor of it.

Lilian Edwards, chair of technology law at the University of Newcastle, wrote in reply to Renieris: "These distinctions chafed when the cloud arrived, frayed at the edges with machine learning and have now ripped apart with large models. No-one wants to reopen GDPR fundamentals but I am not clear [the Court of Justice of the European Union] can finesse it this time."

Edwards is right that there's no appetite among EU lawmakers to revisit GDPR's basic definitions. What's more, the bloc is struggling to figure out what to do about large general-purpose models in the Artificial Intelligence Act it is currently trying to finalize, with the hope of having key EU Parliamentary committees vote on a consensus version on April 26. (Even then, the act won't really be finalized. The whole Parliament will get to make amendments and vote in early May, and there will be further negotiation between the Parliament, the EU Commission, which is the bloc's executive arm, and the European Council, which represents the bloc's various national governments.) Taken together, there could be real problems for generative A.I. based on large foundation models in Europe.

At an extreme, many companies may have to follow OpenAI's lead and simply discontinue offering these services to EU citizens. It is doubtful European politicians and regulators would want that outcome, and if it starts to happen, they will probably seek some sort of compromise on enforcement. That alone may not be enough. As has been the case with GDPR and trans-Atlantic data sharing, European courts have been quite open to citizens' groups going to court and obtaining judgements based on strict interpretations of the law that force national data privacy regulators to act.

At a minimum, uncertainty over the legal status of large foundation models may make companies, especially in Europe, much more hesitant to deploy them, particularly in cases where they have not trained the model from scratch themselves. And this might be the case for U.S. companies that have international operations too; GDPR applies not just to customer data, but also employee data, after all.

With that, here's the rest of this week's news in A.I.

Jeremy Kahn, @jeremyakahn, jeremy.kahn@fortune.com

U.K. government releases A.I. policy white paper. The British government's Department for Science, Innovation and Technology published a white paper on how it wants to see A.I. governed. It urges a sector- and industry-specific approach, saying regulators should establish tailored, context-specific approaches that suit the way A.I. is actually being used in their sectors, and argues for applying existing laws rather than creating new ones. The recommendations also lay out high-level principles in five main areas: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. While some A.I. and legal experts praised the sector-specific approach the white paper advocates, arguing it will make the rules more flexible than a one-size-fits-all approach and promote innovation, others worried that different regulators might diverge in their approach to identical issues, creating a confusing and messy regulatory patchwork that will actually inhibit innovation, CNBC reported.

Bloomberg creates its own LLM, BloombergGPT, for finance. Bloomberg, where I worked before coming to Fortune, is not new to machine learning. (I've periodically highlighted some of the ways Bloomberg has been using large language models and machine learning in this newsletter.) The company has access to vast amounts of data, much of it proprietary. This past week, Bloomberg unveiled BloombergGPT, a 50 billion parameter LLM, and the first ultra-large GPT-based language model the financial news company has ever trained. This puts it pretty far up there in the rankings of large models, although still far smaller than the largest models OpenAI, Google Brain, DeepMind, Nvidia, Baidu and some other Chinese researchers have built. The interesting thing is that 51% of the data Bloomberg used was financial data, some of it its own proprietary data, that it curated specifically to train the model. The company reported that BloombergGPT outperformed general-purpose LLMs on tasks relevant to Bloomberg's own use cases, such as recognizing named entities in data, performing sentiment analysis on news and earnings reports, and answering questions about financial data and topics. Many think this is a path many large companies with access to lots of data will choose to take going forward: training their own proprietary LLM on their own data and tailored to their own use cases, rather than relying on more general foundation models built by the big tech companies.

Research collective creates open-source version of DeepMind visual language model as step towards an open-source GPT-4 competitor. The nonprofit A.I. research group LAION released a free, open-source version of Flamingo, a powerful visual language model created by DeepMind a year ago. Flamingo is a fully multi-modal model, meaning it can take in images, videos, and text as inputs and output in all those modes too. That enables it to describe images and also answer questions about them, as well as generating images (or possibly video) from text, similar to the way Stable Diffusion, Midjourney and DALL-E can. Flamingo had some interesting twists in its architecture that enable it to do this, including a module called a Perceiver Resampler that reduces complex visual data to a much lower number of tokens to be used in training, the use of a frozen language model, and other clever innovations you can read about in DeepMind's research paper.

Anyway, LAION decided to copy this architecture and apply it to its own open-source, multi-modal training data, and the result is OpenFlamingo.

Why should you care? Because LAION explicitly says it is doing this in the hopes that someone will be able to use OpenFlamingo to train a model that essentially replicates the capabilities of GPT-4 in its ability to ingest both text and images. This means everyone and anyone might soon have access to a model as powerful as OpenAI's currently most powerful A.I., GPT-4, at essentially no cost. That could either be a great thing or a terribly dangerous thing, depending on your perspective.

And another subtle dynamic here that doesn't often get discussed: One of the things that is continuing to drive OpenAI to release new, more powerful models and model enhancements (such as the ChatGPT plugins) so quickly is the competition it is facing not just from other tech players, such as Google, but the increasingly stiff competition it faces from open-source alternatives. These open-source competitors could easily erode the market share OpenAI (and its partner Microsoft) was hoping to control.

In order to maintain a reason for customers to pay for its APIs, OpenAI is probably going to have to keep pushing to release bigger, more powerful, more capable models. If you believe these models can be dangerous (either because they are good for producing misinformation at scale, or because of cybersecurity risks, or because you think they just might hasten human extinction), then anything that incentivizes companies to put them out in the world with less time for testing and for installing guardrails is probably not a good thing.

ChatGPT gave advice on breast cancer screenings in a new study. Here's how well it did, by Alexa Mikhail

Former Google CEO Eric Schmidt says the tech sector faces a reckoning: What happens when people fall in love with their A.I. tutor?, by Prarthana Prakash

Nobel laureate Paul Krugman dampens expectations over A.I. like ChatGPT: History suggests large economic effects will take longer than many people seem to expect, by Chloe Taylor

Google CEO won't commit to pausing A.I. development after experts warn about profound risks to society, by Steve Mollman

How should we think about the division over last week's open letter calling for a six-month pause in the development of any A.I. system more powerful than GPT-4? I covered some of this in Friday's special edition of Eye on A.I. But there's a very nice essay on how politicized discourse over A.I. risks is becoming, from VentureBeat's A.I. reporter, Sharon Goldman. It's worth a read. Check it out here.

Also, how should we feel about Sam Altman, the OpenAI CEO, who claims to be both a little bit frightened about advanced A.I. and, simultaneously, hellbent on creating it? Well, dueling profiles of Altman, one in the New York Times and one in the Wall Street Journal, try to sort this out. Both are worth a read.

The cynical take on Altman was put forth by Brian Merchant in an op-ed in the Los Angeles Times: namely, that fear-mongering about A.I., particularly about its ability to replace lots of people's jobs, only serves to hype the power of existing technologies and OpenAI's brand, boosting its sales.

I agree with some of Merchant's take. I do think OpenAI has very much become a commercially motivated enterprise, and that this explains a lot about why it is releasing powerful A.I. models so quickly and why it has done things like create the ChatGPT plugins. But I'm not sure about Merchant's take on Altman himself: that Altman's conflicted-genius schtick is simply that, schtick. Altman's concern with A.I. safety is not some newfound preoccupation that came about only once he had something to sell. It's clear from those Altman profiles that AGI and its potential for good and ill have been preoccupations of Altman's for a long time. It's what led him to cofound OpenAI with Elon Musk in the first place. And remember, when it started, OpenAI was just a nonprofit research lab, dedicated to open sourcing everything it did. Altman didn't set out to run a commercial venture. (He may have thought there would be money to be made down the line, but making money doesn't seem to have been his real rationale. He was already enormously wealthy at the time.) So I think Altman's simultaneous expressions of longing for AGI and fear of it are not just about hyping A.I. I'm not saying the rationale is noble. I just don't think commercial motives explain Altman's strange stance on advanced A.I. I think it has a lot more to do with ego and with a kind of messiah complex, or at the very least, a kind of messianic thinking.

In fact, a lot of stuff people who believe in AGI say only makes sense if viewed in religious terms. AGI believers are a lot like evangelicals waiting for the rapture. They both want the second coming and wish to hasten its arrival, and yet on some level they fear it. And while some of these folks are cynical in their beliefs (they only talk about Armageddon because they have Bibles to sell; that would be Merchant's take), others are sincere believers who really do want to save souls. That doesn't mean you have to agree with these folks. But intentions do make a difference. Which do you think Altman is: Bible salesman or modern-day prophet?

See more here:
For ChatGPT creator OpenAI, Italy's ban may just be the start of trouble in Europe - Fortune

Read More..

The future, one year later – POLITICO

In this Oct. 30, 2008, photo, Electric Time Company employee Dan Lamoore adjusts the color on a 67-inch square LED color-changing clock at the plant in Medfield, Mass. | Elise Amendola/AP photo

When this newsletter launched exactly one year ago today, we promised to bring you a unique and uniquely useful look at questions that are addressed elsewhere as primarily business opportunities or technological challenges.

We had a few driving questions: What do policymakers need to know about world-changing technologies? What do tech leaders need to know about policy? Could we even get them talking to each other?

We're still working on that last one. But what we have brought you is a matter of public record: Scoops on potentially revolutionary technologies like Web3, a blow-by-blow account of the nascent governing structure of the metaverse and a procession of thinkers on the transformation AI is already causing, and how we might guide it.

Yeah, about that. In just a year, AI has gone from a powerful, exciting new technology still somewhat on the horizon to a culture-and-news-dominating, potentially even apocalyptic force. Change is always happening in the tech world, but sometimes it happens fast. And as the late Intel chief Gordon Moore might have said, that speed begets more speed, with seemingly no end in sight.

The future already looks a lot different than it looked in April 2022. And we don't expect it to look the same next year, or next month, or even next week. There's a lot of anxiety that AI in particular could change the future much, much faster than we're ready to address.

With that in mind I spoke yesterday with Peter Leyden, founder of the strategic foresight firm Reinvent Futures and author of The Great Progression: 2025 to 2050, a firmly optimistic reading of how technology will change society in radical ways, about how the rise of generative AI has shaken up the landscape, and what he sees on the horizon from here.

"This is the kind of explosive moment that a lot of us were waiting for, but it wasn't quite clear when it was going to happen," Leyden said. "I've been through many, many different tech cycles around, say, crypto, that haven't gone down this path; this is the first one that is really on the scale of the introduction of the internet."

Tech giants have been spending big on AI for more than a decade, with Google's acquisition of DeepMind as a signal moment. Devoted sports viewers might remember one particularly inescapable 2010s-era commercial featuring the rapper Common proselytizing about AI on Microsoft's behalf. And there is, of course, a long cultural history of AI speculation, dating back to James Cameron's Terminator and beyond.

"There is a kind of parallel to the mid-'90s, where people had a very hard time understanding both the digitization of the world and the globalization of the world that were happening," Leyden said. We're seeing a similar tipping point with generative AI.

From that perspective, the current generative AI boom begs for a historical analogue. How about America Online? It might seem hopelessly dated now, but like ChatGPT it was a ubiquitous product that brought a revolutionary technology into millions of homes. From the perspective of 20 years from now, a semi-sophisticated chatbot might seem like the "You've got mail" of its time.

AI might seem a chiefly digital disruptor right now, but Leyden, who has a pretty good track record as a prognosticator, believes it could revolutionize real-world sectors from education to manufacturing to even housing.

"We've always thought those things are too expensive and can't be solved by technology, and we've finally now crossed the threshold to say, 'Oh wait, now we could apply technology to it,'" Leyden said. "The next five to 10 years are going to be amazing as this superpower starts to make its way through all these fields."

AI is also already powering innovation in other fields like energy, biotech, and media. That's where it's an especially salient comparison with the internet as a whole, not just a platform like social media. It's an engine, not the vehicle itself, and there are millions of designs yet to be built around it.

Largely for that reason, it's nearly impossible to predict what's going to happen next with AI. Maybe artificial general intelligence really will arise, posing an entirely different set of problems than the current policy concerns of regulating bias and accountability in decision-making algorithms. Or maybe it will start solving problems, wickedly difficult ones, like nuclear fusion and mortality and space survival.

To get back to our mission here: We can't know. What we can do is continue to cover the bleeding edge of these technologies as they exist now, and where the people in charge of building and governing them aim to steer their development and, by proxy, ours.


A pair of George Mason University technologists are recommending the government take a novel, deliberate approach to AI regulation.

In an essay for GMU's Mercatus Center publication Discourse, Matthew Mittelsteadt and Brent Skorup propose "AI Progress," a framework they describe as a novel way to guide AI progress and AI policy decisions. Their big ideas, among a handful of others:

"People will need time to understand the limitations of this technology, when not to use it and when to trust it (or not)," they write near their conclusion. "These norms cannot be developed without giving people the leeway needed to learn and apply these innovations."


Health and tech heavy hitters are teaming up to make their own recommendations about how AI should be used specifically in the world of health care.

As POLITICO's Ben Leonard reported today for Pro subscribers, the Coalition for Health AI, which includes Google, Microsoft, Stanford and Johns Hopkins, released a "Blueprint for Trustworthy AI" that calls for high transparency and safety standards for the tech's use in medicine.

"We have a Wild West of algorithms," Michael Pencina, coalition co-founder and director of Duke AI Health, told Ben. "There's so much focus on development and technological progress and not enough attention to its value, quality, ethical principles or health equity implications."

The report also recommends heavy human monitoring of AI systems as they operate, and a high bar for data privacy and security. The coalition is holding a webinar this Wednesday to discuss its findings.

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); and Benton Ives ([emailprotected]). Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.


Read more here:
The future, one year later - POLITICO - POLITICO

Read More..

Is 2023 The Year Of Quantum Computing Startups And A 1 Million Qubit Machine? – Yahoo Finance

Quantum computing uses quantum mechanics to perform operations. Quantum mechanics is the physics theory that describes the physical world at atomic and subatomic scales, in contrast to classical physics, which describes the macroscopic scale.

Bits denote data in classical computing. These bits are two-state: the familiar 1 or 0. In quantum computing, quantum bits, or qubits, are the basic unit of information. Qubits can exist in a superposition of states, effectively combining 0 and 1 at the same time.
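To make the distinction concrete, here is a minimal Python sketch (not from the article) that models a single qubit as a normalized pair of complex amplitudes and shows why a classical description of n qubits needs 2**n numbers; the specific amplitude values are illustrative assumptions.

    import numpy as np

    # A classical bit is one of two values.
    bit = 0

    # A qubit is a normalized pair of complex amplitudes (alpha, beta):
    # the probability of measuring 0 is |alpha|^2, and of measuring 1 is |beta|^2.
    qubit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)  # equal superposition
    assert np.isclose(np.sum(np.abs(qubit) ** 2), 1.0)

    # The joint state of n qubits is the tensor product of the individual states,
    # so writing it down classically takes 2**n complex amplitudes.
    def n_qubit_state(n):
        state = qubit
        for _ in range(n - 1):
            state = np.kron(state, qubit)
        return state

    print(len(n_qubit_state(10)))  # 1024 amplitudes for just 10 qubits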


The benefits of this new computing technology include storing massive amounts of information in fewer computers while using less energy. And by exploiting quantum effects that have no classical counterpart, quantum computers can offer processing speeds millions of times faster than traditional computers on certain problems.

In 2019, for example, Google's quantum computer performed in about 200 seconds a calculation that the world's most powerful supercomputer at the time would have needed 10,000 years to finish. And with 300 qubits, the state space a quantum computer works in at any given moment contains more values than there are atoms in the observable universe.
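A quick back-of-the-envelope check of that 300-qubit claim, as a sketch; the atom count used below is the commonly cited order-of-magnitude estimate of about 10^80 atoms in the observable universe, not a figure from the article.

    # Number of complex amplitudes needed to describe a 300-qubit state classically.
    state_space = 2 ** 300
    atoms_in_observable_universe = 10 ** 80  # commonly cited order-of-magnitude estimate

    print(f"2^300 ~= {state_space:.2e}")               # about 2.04e90
    print(state_space > atoms_in_observable_universe)  # True, by roughly ten orders of magnitude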

The speed of quantum computers brings many use cases, including faster and smarter artificial intelligence (AI) platforms, advanced pharmaceutical modeling, more accurate weather predictions and the creation of new materials.


Research firms like Contrive Datum Insights see massive quantum computing market growth. The company projects a compound annual growth rate of 36.89% from 2023 to 2030, with the market reaching $125 billion annually. Where there is that kind of growth and money involved, startups are sure to follow. With quantum computing still in the early stages, startups are tackling multiple fronts, including different computer production methods, advanced quantum algorithms and other innovations.
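As a rough sanity check of what that projection implies (a sketch; the 2023 base figure below is back-calculated from the article's numbers, not reported by the research firm):

    cagr = 0.3689
    years = 2030 - 2023          # seven compounding periods
    target_2030 = 125e9          # $125 billion projected annual market

    # Implied 2023 market size if the projection holds: target / (1 + CAGR)^years
    implied_2023 = target_2030 / (1 + cagr) ** years
    print(f"Implied 2023 market: ${implied_2023 / 1e9:.1f}B")  # roughly $14B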


Here are some of the quantum computing startups making noise in the space:

IonQ Inc. (NYSE: IONQ) is a Maryland-based quantum computing hardware and software firm. The company partners with firms like Hyundai Motor Co. to create better machine learning algorithms aimed at improving safety and bringing about self-driving automobiles. Hyundai is also leveraging IonQ to study lithium chemistry and find new reactive solutions for future electric vehicles (EVs).

PsiQuantum is developing a photonic approach to quantum computing in which photons represent qubits. The startup is on the CB Insights list of unicorn companies with a valuation of $3.15 billion as of March 10. The firm completed a $450 million investment round in the summer of 2021 and continues toward its stated goal of developing a 1 million qubit computer.

French startup PASQAL offers quantum computers built with 2D and 3D arrays of ordered neutral atoms, enabling its clients to solve challenging problems. These include improving weather forecasting, boosting auto aerodynamics for greater efficiency and finding relationships between chemical compounds and biological activity for the healthcare industry.


Established technology giants are also pushing quantum computing forward, and IBM remains at the forefront. In November 2022, the company announced a 433-qubit machine named Osprey, which had the largest qubit count of any quantum processor at the time. IBM's breakthroughs in quantum computing mirror the trajectory of traditional computers, whose processing speed increased year over year.

Amazon.com Inc.'s Braket is the company's managed quantum computing service and part of its overall growth strategy for Amazon Web Services (AWS). Braket gives users a place to build, test and run quantum algorithms, provides access to different types of quantum hardware, and encourages software development through the Braket SDK and open-source contributions.
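As a rough illustration of what building and testing with the Braket SDK looks like, here is a minimal sketch; it assumes the open-source amazon-braket-sdk Python package and runs a small circuit on the SDK's local simulator rather than on managed quantum hardware.

    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    # Build a two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT from 0 to 1.
    circuit = Circuit().h(0).cnot(0, 1)

    # Run it on the local simulator bundled with the SDK (no AWS account needed).
    device = LocalSimulator()
    result = device.run(circuit, shots=1000).result()

    # Counts should split between '00' and '11', the signature of an entangled pair.
    print(result.measurement_counts)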

Microsoft Corp., Alphabet Inc.'s Google, Intel Corp. and Nvidia Corp. also offer quantum computing solutions and investment. As the biggest tech firms increase their participation in quantum computing, more startups should become acquisition and merger targets as the market moves toward consolidation.


This article Is 2023 The Year Of Quantum Computing Startups And A 1 Million Qubit Machine? originally appeared on Benzinga.com


© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Original post:
Is 2023 The Year Of Quantum Computing Startups And A 1 Million Qubit Machine? - Yahoo Finance

Read More..