
Data Protection update – May 2023 – Stephenson Harwood

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from May 2023.

The month of May marked the fifth anniversary of the EU GDPR and it was commemorated with a bang.

Just days before the GDPR's official birthday, Meta was served a record €1.2 billion fine for data protection breaches. The fine, the largest ever imposed under the GDPR, came after Ireland's Data Protection Commission found the tech giant had violated the law by transferring personal data of EU Facebook users to the US without appropriate safeguards.

Meta has six months to remediate the unlawful processing, including storage, in the US of personal data of European users. Andrea Jelinek, chair of the European Data Protection Board, said the unprecedented fine sends a strong signal to organisations that serious infringements have far-reaching consequences.

Still, Meta hasn't taken the decision lying down. In a statement, the tech company vowed to appeal the ruling, which it says could have implications for thousands of businesses which rely on the ability to transfer data between the EU and US in order to operate.

May also saw the Court of Justice of the European Union hand down four pivotal preliminary rulings related to the application of the EU GDPR. The rulings clarified the law in relation to four legal issues: the accountability principle, the right of access under Article 15 EU GDPR, compensation under Article 82 EU GDPR and joint controllers.

In this month's issue:

On 22 May, the Irish Data Protection Commission ("DPC") announced that Meta has been fined €1.2 billion, the largest fine to date issued under the EU General Data Protection Regulation ("EU GDPR").

The DPC's decision against Meta has three parts:

With the EU-US draft adequacy agreement still not in place (the European Parliament voted against the proposed agreement in a non-binding resolution earlier in May), the DPC's decision lands Meta's US-EU data transfers in a difficult, uncertain position. The decision also has profound ramifications for anyone transferring personal data to the US under the EU GDPR, as it demonstrates that it may be very difficult to do so lawfully under any of the existing legal mechanisms and derogations, in light of the incompatibility of US law with European fundamental rights. The issue is especially difficult for transfers to any electronic communications service provider (such as Meta) that may be required to hand over European data to US national security agencies under the US federal law FISA.

For further analysis of the DPC's decision and what it means for any business making overseas transfers, look out for our upcoming Insight deep dive on our data protection hub.

On 4 May, the Court of Justice of the European Union ("CJEU") handed down four preliminary rulings relating to the application of the EU GDPR.

The CJEU considered:

For more information on these decisions, read our Insight.

On 15 May, the UK government announced that it is scaling back the Retained EU Law (Revocation and Reform) Bill ("REUL Bill"). The government provided a revised list outlining which pieces of legislation are being revoked with justifications provided for each.

Since Brexit, over 1,000 EU laws have been revoked or reformed in the UK. The REUL Bill will revoke a further 600 laws, in addition to the 500 pieces of legislation that will be revoked by the Financial Services and Markets Bill and the Procurement Bill. The government justifies this decision by stating that it will lighten the regulatory burden for businesses and encourage economic growth.

This decision reflects a scaled down promise in contrast to the government's initial plans to scrap thousands of EU laws by the end of this year. However, in its press release, the government outlined plans to continue reviewing remaining EU laws in order to identify further opportunities for reform. The REUL Bill creates a mechanism that enables this ongoing aim of revoking EU law.

Some minor pieces of data protection legislation will be revoked by the REUL Bill, such as the Data Retention and Acquisition Regulations 2018. However, more significantly, the government has stated that it will remove the current interpretive principles and the structure providing for the supremacy of all EU law. This means UK courts could be permitted to overrule EU precedents and there will be significant uncertainty as to how to interpret terms from retained EU laws. In the context of data protection, there may be uncertainty as to the supremacy and interpretation of the UK General Data Protection Regulation ("UK GDPR").

The REUL Bill will return to the House of Commons after the House of Lords concludes its debate.

Stay tuned for further updates on how post-Brexit regulatory reform will affect data protection in the UK.

On 17 April, the Data Protection and Digital Information (No. 2) Bill ("DPDI Bill") had its second reading in the House of Commons. This provided us with our first opportunity to hear what MPs had to say about the DPDI Bill. Their primary concerns surrounded retaining its adequacy with the EU and the struggle to balance the interests of big tech and consumers. For more information on the second reading, read our Insight.

Following this, the DPDI Bill moved to Committee stage. This stage involves a House of Commons committee hearing evidence and conducting a detailed examination of a bill. On 10 May, a House of Commons committee heard evidence from 23 witnesses. John Edwards, the UK Information Commissioner, was among those providing evidence.

Edwards assisted the committee with a forensic analysis of the wording of the DPDI Bill. He outlined that the use of phrases such as 'high-risk activities' does not provide decision-makers with sufficient clarity when interpreting legislation. Edwards argued that the ICO and other decision-makers would appreciate further, clear criteria to assist them with issuing guidance and interpreting the legislation; in his view, removing as much uncertainty as possible from the DPDI Bill should be the aim, as this will enable greater efficiency. Edwards also outlined his concerns about the future role of ministers: the current DPDI Bill provides scope for ministers to overrule the ICO and refuse to publish its statutory codes, threatening to undermine the ICO's independence.

Other witnesses expressed concerns relating to the DPDI Bill's provisions on automated decision-making and its impact on the UK retaining adequacy with the EU.

The DPDI Bill will now move to its third reading, representing the House of Commons' final chance to debate the contents of the bill and vote on its approval. If approved, the DPDI Bill will move to the House of Lords for consideration.

On 4 May, leaders of some of Europe's largest technology companies wrote to the European Commission outlining their concerns regarding the EU's forthcoming Data Act.

As we previously reported, the Data Act will bring in a new landscape for data access, data portability and data sharing. It includes provisions that introduce common rules on the sharing of data generated by connected products or related services and will compel data holders to make data available to public bodies without undue delay where there is an exceptional need for the public body to use the data. The European Commission is adamant that the Data Act will ensure fairness in the digital environment, stimulate a competitive data market, open opportunities for data-driven innovation and make data more accessible for all.

However, the concerns raised in this letter from the technology companies suggest that not all stakeholders agree on whether the Data Act is on track to achieve its aims. The letter was organised by DigitalEurope and is signed by chief executives of Siemens, Siemens Healthineers, SAP, Brainlab and Datev. The letter expressed concerns around supporting European competitiveness and protecting businesses against cyber attacks and data breaches. The letter outlined three key concerns:

Executives at SAP say that they welcome the objectives of the Data Act to create a common EU regulatory framework and facilitate data sharing. However, they insist that the Data Act needs further amendments in order to preserve contractual freedom, allowing providers and customers to agree on terms that reflect business needs.

The letter asks the European Commission to pause the process, enabling changes to the proposed Act. Time will tell whether the Data Act will be further delayed in the face of these concerns. The Swedish presidency entered into negotiations (or 'trilogue') with the European Parliament on the final version of the Data Act in March and further trilogues are expected to take place in May and beyond.

The ICO, the UK Data Protection Authority ("DPA"), issued new guidance for businesses and employers on Employee Data Subject Access Requests ("DSARs").

Data subjects have the right of access under the EU GDPR, meaning they can request a copy of their personal information from organisations. This is a right often exercised by employees against their employers or former employers. Employees can request any personal data held by the employer, such as attendance details, sickness records or personal development and other HR records. The ICO reported in its press release that it received 15,848 complaints relating to DSARs between April 2022 and March 2023. In light of this, it has now released new, enhanced guidance on how employers should respond to DSARs.

The new guidance covers key issues, including the following points:

For more information, you can access the ICO's full guidance here.

In the midst of growing anxiety across the tech industry about the potential impact of AI, and some more stark warnings from industry experts, including Geoffrey Hinton (the so-called "godfather of AI"), that the recent rapid development in the capabilities of AI may pose an existential risk to humankind unless urgent action is taken, Prime Minister Rishi Sunak appears to be contemplating an alternative approach to the UK's regulation of AI, with reports that the government is considering tighter regulation and talk of a new global regulator (or at least the creation of a new UK AI-focused watchdog).

Back in March, we reported that the UK Government had published a white paper outlining its plans to regulate AI (the "AI White Paper"). The government's intention was for the AI White Paper to foster a pro-innovation approach to AI regulation which focusses on its benefits and potential whilst avoiding unnecessary burdens to business and economic growth. The AI White Paper is currently open for consultation, which is set to conclude on 21 June, although industry figures have warned that the AI White Paper is now already out of date.

The government may concede that there has been a shift in its approach since the AI White Paper was published, with reports of government insiders insisting that they "want to stay nimble, because the technology is changing so fast", and expressing their wish to avoid the product-by-product regulatory regime, such as the one that is envisaged by the EU's AI Act.

It appears that Sunak may also be applying pressure on the UK's allies, seeking to construct an international agreement in relation to how to develop AI capabilities, which could entail the establishment of a global regulator. Given that the EU has been unable to reach an agreement since the draft AI Act was published over two years ago, Sunak's plan to formulate and subsequently agree such an international agreement in a short period of time appears somewhat optimistic.

Domestically, MPs from both the Conservative and Labour parties are calling for an AI bill to be passed, which might set certain conditions for companies seeking to create and develop AI in the UK and lead to the creation of a UK regulator. It remains to be seen what approach the government will take to regulating AI in the UK and what aspiration it has to lead on such regulation on the global stage.

Over in the US, American lawmakers are arguing that federal regulation of AI is necessary for innovation. Speaking at an event in Washington, DC, on 9 May, US Representative Jay Obernolte said that regulation to mitigate potential harms and provide customer protection is something which "is very clearly necessary when it comes to AI." Obernolte further stressed that regulation of data privacy and AI must coincide, given the vast amounts of information AI models require to learn and AI's ability to pierce digital data privacy, reaggregate personal data and build behavioural models to predict and influence behaviour.

In early May, the Biden Administration (the "Administration") announced new actions which it says are set to further promote responsible American innovation in AI as well as protect people's rights and safety. Emphasising the need to place people and communities at the centre of AI development, by supporting responsible innovation that serves the public good, the Administration said that companies have a fundamental responsibility to ensure that the products they provide are safe prior to deployment for public use.

The Administration has also announced an independent commitment from leading AI developers including Google, Microsoft, NVIDIA and OpenAI to participate in a thorough public evaluation of AI systems. These actions all contribute to a broader and ongoing effort for the Administration to engage with a variety of stakeholders on critical AI issues.

Belinda Dennett, Microsoft Corporate Affairs Director, spoke to members of Australia's parliament at a parliamentary hearing on 3 May, to communicate her view that the government should collaborate with industry and society on principles-based measures or co-regulation with regard to AI, rather than taking a more targeted and direct regulatory response.

Dennett's comments reflect Microsoft's view that there is a risk in seeking to regulate what is known today in relation to generative AI technologies, as that knowledge can rapidly go out of date. The effect of this risk is such that any policy seeking to regulate generative AI would soon find itself trailing behind the development of the technology being regulated.

In making her remarks, Dennett specifically referred to the recent rapid enhancement in the capabilities of generative AI technologies such as ChatGPT and explained that "this was innovation we weren't expecting for another ten years." Dennett also praised calls which have been made for summits and various other discussions around the generative AI boom on the basis that, for AI, "society needs to decide where those guardrails should be."

Microsoft's comments come as Australia joins other jurisdictions needing to act quickly to determine how best to regulate AI and generative AI in particular, which we considered in our April 2023 bulletin.

In October 2022, Joseph Sullivan, Uber Technologies' former security chief, was convicted of obstruction of a federal proceeding and of concealing or failing to report a felony. Sullivan's conviction arose in connection with a 2016 cyber breach that affected 57 million Uber drivers and riders. In response to the breach, Sullivan devised a scheme by which the hackers who had breached Uber's network were paid $100,000 through the company's 'bug bounty' scheme and were induced into signing a non-disclosure agreement, such that Uber's legal team and the US Federal Trade Commission officials would not find out.

Sentenced in early May, Sullivan was handed a three-year term of probation and ordered to pay a fine of $50,000. Although Sullivan has avoided time in prison, US District Judge William Orrick made clear that if he were to preside over a similar case in the future, "even if the character is that of Pope Francis, they should expect custody." Sullivan's case illustrates that chief information security officers ("CISOs") should work with lawyers to establish whether a breach has occurred and whether it should be reported. It has also accelerated a transition whereby CISOs report breaches more directly to their organisation's senior executives.

Consequently, companies should now be reconsidering their processes for breach identification and the documentation of decisions regarding breaches in order to develop more robust breach response procedures. This will allow companies to cultivate a culture of shared responsibility for taking decisions associated with cybersecurity breaches, which will, in turn, assist CISOs with avoiding personal liability.

Following a year-long inquiry into the abuse of spyware in the EU, the European Parliament's Committee of Inquiry has adopted its final report and recommendations. The inquiry investigated the use of surveillance spyware such as "Pegasus", which can be covertly installed on mobile phones and is capable of reading text messages, tracking location, accessing the device's microphone and camera and harvesting information from apps.

MEPs stated that the use of spyware in Hungary constitutes "part of a calculated and strategic campaign to destroy media freedom and freedom of expression by the government", and in Poland the use of Pegasus has been part of "a system for the surveillance of the opposition and critics of the government designed to keep the ruling majority and the government in power". To remedy these major violations of EU law, the MEPs called on Hungary and Poland to comply with European Court of Human Rights ("ECHR") judgments, restore judicial independence and oversight institutions as well as launch credible investigations into abuse cases to help ensure citizens have access to proper legal redress. In Greece, where spyware "does not seem to be part of an integral authoritarian strategy, but rather a tool used on an ad hoc basis for political and financial gains", MEPs called on the government to repeal export licences that are not in line with EU export control legislation. Elsewhere across the EU in Spain, although the country has "an independent justice system with sufficient safeguards", MEPs called on Spanish authorities to ensure "full, fair and effective" investigations.

In order that illicit spyware practices are stopped immediately, MEPs recommended that spyware should only ever be used by member states in which allegations of spyware abuse have been thoroughly investigated, national legislation is in line with recommendations of the Venice Commission and CJEU and ECHR case law, Europol is involved in investigations, and export licences not in line with export controls are repealed. MEPs further recommended that the Commission should assess whether these conditions are met by member states by December 2023. In order to prevent attempts to justify abuses, the MEPs also called for a common legal definition of 'national security' as grounds for surveillance.

MEPs adopted the report and recommendations and the text outlining the recommendations is expected to be voted on by the full Parliament during the plenary session starting on 12 June.

In a statement released earlier this month, Toyota Motor Corporation ("Toyota") confirmed that a human error rendered the vehicle data of around 2.15 million customers publicly accessible in a period spanning almost a decade from November 2013 to April 2023.

The incident, which Toyota states was caused by a "misconfiguration of the cloud environment" (the cloud system had been accidentally set to public rather than private), meant that data including vehicle identification numbers and vehicle location data was potentially accessible by the public. Toyota has said that the accessible data alone was not sufficient to enable identification of the affected data subjects and that there had been no reports of malicious use of the data.

Although it has confirmed that the data in question is confined to that of its Japanese customers, the number of potentially affected customers constitutes almost the entirety of Toyota's customer base who had signed up for its main cloud service platforms since 2012, which are essential to its autonomous driving and other AI-based offerings. Affected customers include those who use the T-Connect service, which provides a range of services such as AI-voice driving assistance, and also users of G-Link, which is a similar service for owners of Lexus vehicles.

The incident had only recently been discovered by Toyota as it targets an expansion of its connectivity services. Toyota said that the "lack of active detection mechanisms, and activities to detect the presence or absence of things that become public" was the cause of the failure to identify the issue earlier. Toyota has stated that it will take a series of measures to prevent a recurrence of the incident including implementing a system to audit cloud settings, establishing a system to continuously monitor settings and educating employees on data handling rules.

The Japanese Personal Information Protection Commission has been informed of the incident but has not provided comment at this stage. However, the Japanese automaker subsequently announced that customer information in some countries throughout Oceania and Asia may also have been left publicly accessible from October 2016 to May 2023. In this instance, potentially leaked customer data may include names, addresses, phone numbers and email addresses.

You can read Toyota's statement in Japanese here.

The High Court has brought Prismall v Google and DeepMind to an early conclusion, ruling that Andrew Prismall and the 1.6 million class members he represents cannot go to trial.

Andrew Prismall sued Google and DeepMind under the tort of misuse of private information on behalf of 1.6 million NHS patients after, in 2016, it was revealed that DeepMind transferred the patients' data without their knowledge or consent. To make this claim, Prismall was required to show that the class of patients had a reasonable expectation of privacy and that DeepMind deliberately and without justification obtained and used the data. Prismall also had to show that all members of the class had the same interest. This follows the principle set out in Lloyd v Google that a representative action cannot succeed if it requires an individualised assessment of class members' loss.

Prismall argued that, without needing an individualised assessment, he could show that each class member had a reasonable expectation of privacy in relation to the relevant personal data, this expectation was unjustifiably interfered with and such interference entitled them to an award of more than trivial damages. However, the court ruled that there was no realistic prospect of the class members meeting these requirements. The court found that:

Mrs Justice Williams struck out the case and ruled that a summary judgment should be entered in favour of Google and DeepMind.

The case was one of the few opt-out class actions that continued after the Lloyd v Google ruling narrowed the options for bringing such claims under the UK GDPR. It appears that misuse of private information was not a viable alternative in this case.

For more information, you can access the full judgment here.

A Belgian data subject complained to the Belgian DPA after being informed of his obligations under the US Foreign Account Tax Compliance Act ("FATCA") by his bank. The Belgian DPA has now ordered Belgium's Federal Public Service Finance to stop processing complainants' data in relation to FATCA transfers, arguing that such transfers breach the EU GDPR.

FATCA's aim is to combat tax fraud, money laundering and tax evasion. 87 countries have entered FATCA agreements with the US. Under FATCA, non-US banks must send information about any accounts held by American citizens to the corresponding non-US government, which then shares the information with the US Internal Revenue Service (the "IRS"). This information constitutes personal data under the EU GDPR.

The Belgian DPA originally decided that the FATCA transfers did not breach the EU GDPR and that Schrems II did not apply. However, the Belgian DPA's litigation arm disagreed. It found that data subjects are not able to understand the purposes of processing in relation to FATCA transfers and concluded that FATCA transfers breach the EU GDPR's purpose limitation, data minimisation and proportionality principles. No data protection impact assessment had been carried out in relation to the transfers. In addition, the FATCA transfers were found not to be subject to appropriate safeguards. As a result, the Belgian DPA ordered that transfers of personal data to the US under the FATCA regime must cease.

This does not represent the only challenge to FATCA. A US-born data subject now residing in the UK has complained to the High Court that FATCA transfers are disproportionate and breach her rights under the EU GDPR. However, the impact of ceasing FATCA transfers is questionable. American Citizens Abroad, a non-profit organisation, commented that the Belgian DPA decision will not get rid of US tax problems for expats. It argued that the IRS has an obligation to enforce US tax laws and if the required information cannot be provided via FATCA transfers, it will come to light another way.

The US Federal Trade Commission ("FTC") filed a complaint against Meta in 2011, resulting in a 2012 privacy order barring Meta from misrepresenting its privacy practices. After a subsequent complaint from the FTC, relating to Meta's misrepresentations that fed into the Cambridge Analytica scandal, Meta agreed to another privacy order in 2020. This 2020 order compelled Meta to pay a $5 billion penalty.

In a press release dated 3 May, the FTC claims that Meta has now violated the privacy promises that it made in the 2020 privacy order. The FTC's claim is based on the following points:

As a result, the FTC proposes to make the following changes and extensions to the privacy order:

The FTC has requested that Meta respond to these claims within 30 days. Meta has pledged to robustly fight this action, labelling it a political stunt.

May saw the latest enforcement action against Clearview AI, following numerous recent sanctions against the facial recognition platform.

On 9 May, the Austrian DPA found that Clearview AI was not complying with the EU GDPR. Following a request for access, a data subject found that their image data had been processed by Clearview AI. The Austrian DPA found that Clearview AI had processed the personal data in breach of the principles of lawfulness, fairness and transparency and was in breach of data retention rules by permanently storing data. In addition, Clearview AI's processing of the data served a different purpose from the original publication of the data subject's personal data. The Austrian DPA ordered Clearview AI to erase the complainant's personal data and to designate a representative in the EU.

In another decision handed down in May, the Australian Administrative Appeals Tribunal ruled that Clearview AI's collection of Australian facial images without consent breached the country's privacy standards. As a result, the Australian authority ordered Clearview AI to leave the country and delete all Australian images that it had gathered.

This follows action taken against Clearview AI in April. The French DPA fined Clearview AI €5.2 million for its failure to comply with the DPA's earlier order to stop collecting and processing personal data of individuals located in France.

This wave of enforcement action reflects the ongoing battle of applying data protection requirements to ever-evolving AI technologies.

We reported in March that Marc Van der Woude, president of the EU's General Court, warned that a wave of Digital Markets Act ("DMA") litigation was looming. The DMA places obligations on Big Tech platforms (referred to as "Gatekeepers") to create a fairer environment for business users and to ensure that consumers can access better services and easily switch providers.

The first step of the DMA's implementation kicked off on 2 May. This step looks into the classification of certain platforms as Gatekeepers. Any platforms labelled with this designation will be prohibited from certain behaviours and practices. Three main criteria are involved in deciding whether a platform is a Gatekeeper:

Any organisations labelled as Gatekeepers will be subject to the DMA's list of dos and don'ts. For example, Gatekeepers must not prevent consumers from linking up to businesses outside the Gatekeeper's platform or prevent users from uninstalling any pre-installed software or app if they wish to.

By 3 July, potential Gatekeepers must notify their core platform services to the European Commission if they meet the DMA's thresholds. The European Commission, following such notifications, has 45 working days to assess whether the organisation is a Gatekeeper. Any designated Gatekeepers will then have six months to comply with the DMA's requirements.

Each month, we bring you a round-up of notable data protection enforcement actions.

Company | Authority | Fine | Comment
Meta Ireland | Irish DPA | €1.2 billion | See our coverage of the Irish DPA's decision above.
GSMA | Spanish DPA | €200,000 | GSMA failed to carry out a data protection impact assessment in relation to a data subject's special category data.
B2 Kapital | Croatian DPA | €2.26 million | Representing the Croatian DPA's highest EU GDPR fine to date, B2 Kapital was fined for failing to prevent data security breaches.

Clearview AI


How WASM (and Rust) Unlocks the Mysteries of Quantum Computing – The New Stack

WebAssembly has come a long way from the browser; it can be used for building high-performance web applications, for serverless applications, and for many other uses.

Recently, we also spotted it as a key technology used in creating and controlling a previously theoretical state of matter that could unlock reliable quantum computing for the same reasons that make it an appealing choice for cloud computing.

Quantum computing uses exotic hardware (large, expensive and very, very cold) to model complex systems and problems that need more memory than the largest supercomputer: it stores information in equally exotic quantum states of matter and runs computations on it by controlling the interactions of subatomic particles.

But alongside that futuristic quantum computer, you need traditional computing resources to feed data into the quantum system, to get the results back from it and to manage the state of the qubits to deal with errors in those fragile quantum states.

As Dr. Krysta Svore, the researcher heading the team building the software stack for Microsoft's quantum computing project, put it in a recent discussion of hybrid quantum computing: "We need 10 to 100 terabytes a second bandwidth to keep the quantum machine alive in conjunction with a classical petascale supercomputer operating alongside the quantum computer: it needs to have this very regular 10 microsecond back and forth feedback loop to keep the quantum computer yielding a reliable solution."

Qubits can be affected by what's around them and lose their state in microseconds, so the control system has to be fast enough to measure the quantum circuit while it's operating (that's called a mid-circuit measurement), find any errors and decide how to fix them, and send that information back to control the quantum system.

"Those qubits may need to remain alive and remain coherent while you go do classical compute," Svore explained. "The longer that delay, the more they're decohering, the more noise that is getting applied to them and thus the more work you might have to do to keep them stable and alive."

There are different kinds of exotic hardware in quantum computers and you have a little more time to work with a trapped-ion quantum computer like the Quantinuum System Model H2, which will be available through the Azure Quantum service in June.

That extra time means the algorithms that handle the quantum error correction can be more sophisticated, and WebAssembly is the ideal choice for building them, Pete Campora, a quantum compiler engineer at Quantinuum, told The New Stack.

Over the last few years, Quantinuum has used WebAssembly (WASM) as part of the control system for increasingly powerful quantum computers, going from just demonstrating that real-time quantum error correction is possible to experimenting with different error correction approaches and, most recently, creating and manipulating for the first time the exotic entangled quantum states (called non-Abelian anyons) that could be the basis of fault-tolerant quantum computing.

Move one of these quasiparticles around another, like braiding strings, and they store that sequence of movements in their internal state, forming what's called a topological qubit that's much more error resistant than other types of qubit.

At least, that's the theory; and WebAssembly is proving to be a key part of demonstrating that it will work, which still requires error correction on today's quantum computers.

"We're using WebAssembly in the middle of quantum circuit execution," Campora explained. "The control system software is preparing quantum states, doing some mid-circuit measurements, taking those mid-circuit measurements, maybe doing a little bit of classical calculation in the control system software and then passing those values to the WebAssembly environment."

In the cloud, developers are used to picking the virtual machine with the right specs or choosing the right accelerator for a workload.

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits, as well as the algorithm that will run on it and the error correction applied to the qubits while the job is running, with a language like OpenQASM (Open Quantum Assembly Language); that's rather like controlling an FPGA with a hardware description language like Verilog.

You can't measure a qubit to check for errors directly while it's working, or you'd end the computation too soon, but you can measure an extra qubit (called an ancilla, because it's used to store partial results) and extrapolate the state of the working qubit from that.

What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to diagnose or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.
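For the simplest case, the three-qubit repetition code, the syndrome-to-correction mapping is just a lookup table. The sketch below is an illustrative classical model (written in Python for brevity, not Quantinuum's actual Rust-to-WASM decoder): two ancilla-style parity checks compare neighbouring data bits, and the resulting syndrome diagnoses which single bit flipped.

```python
# Toy syndrome decoder for the 3-bit repetition code (illustration only).
# Two parity checks play the role of ancilla measurements: they compare
# neighbouring data qubits without reading the encoded value directly.

def measure_syndrome(data):
    """Parity checks on (q0, q1) and (q1, q2), as ancillas would report them."""
    return (data[0] ^ data[1], data[1] ^ data[2])

# Each syndrome pattern points at the most likely single-bit error.
DECODER = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # qubit 0 flipped
    (1, 1): 1,     # qubit 1 flipped
    (0, 1): 2,     # qubit 2 flipped
}

def correct(data):
    """Diagnose the syndrome and reverse the inferred error in place."""
    flip = DECODER[measure_syndrome(data)]
    if flip is not None:
        data[flip] ^= 1
    return data

print(correct([0, 1, 0]))  # a flip on the middle qubit is reversed -> [0, 0, 0]
```

Real decoders diagnose errors from the syndrome without ever collapsing the encoded quantum state; this classical stand-in only shows the lookup logic.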

OpenQASM is good for basic integer calculation, but it requires a lot of expertise to write that code: "There's a lot more boilerplate than if you just call out to a nice function in WASM."

Writing the algorithmic decoder that uses those qubit measurements to work out the most likely error and how to correct it in C, C++ or Rust, then compiling it to WebAssembly, makes the work more accessible. It lets the quantum engineers use more complex data structures, like vectors, arrays and tuples, and other ways to pass data between functions, to write more sophisticated algorithms that deliver more effective quantum error correction.

"An algorithmic decoder is going to require data structures beyond what you would reasonably try to represent with just integers in the control system: it just doesn't make sense," Campora said. "The WASM environment does a lot of the heavy lifting of mutating data structures and doing these more complex algorithms. It even does things like dynamic allocation that normally you'd want to avoid in control system software, due to timing requirements and being real time. So the Rust programmer can take advantage of Rust crates for representing graphs and doing graph algorithms, and dynamically adding these nodes into a graph."

The first algorithmic decoder the Quantinuum team created in Rust and compiled to WASM was fairly simple: "You had global arrays or dictionaries that mapped your sequence of syndromes to a result." The data structures used in the most recent paper are more complex, and quantum engineers are using much more sophisticated algorithms, like graph traversal and Dijkstra's [shortest path] algorithm. "It's really interesting to see our quantum error correction researchers push the kinds of things that they can write using this environment."
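A graph-based decoder of the kind described above might look like the following hypothetical sketch (Python here for brevity; the article describes Rust compiled to WASM): syndrome "defects" become graph nodes, edges are weighted by how unlikely an error on the corresponding qubit is, and the cheapest correction chain between two defects is a shortest path found with Dijkstra's algorithm.

```python
# Hypothetical graph decoder sketch: pair up syndrome defects along the
# cheapest chain of qubit flips, using Dijkstra's shortest-path algorithm.
import heapq

def dijkstra(adj, start):
    """Standard Dijkstra over an adjacency dict {node: [(neighbour, weight)]}."""
    dist, prev, pq = {start: 0}, {}, [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def path(prev, end):
    """Walk the predecessor map back from `end` to the start node."""
    out = [end]
    while out[-1] in prev:
        out.append(prev[out[-1]])
    return out[::-1]

# Two detected defects d1, d2; intermediate nodes a, b are candidate qubit
# flips, weighted by how improbable each error mechanism is (toy numbers).
adj = {
    "d1": [("a", 1), ("b", 3)],
    "a":  [("d1", 1), ("d2", 1)],
    "b":  [("d1", 3), ("d2", 1)],
    "d2": [("a", 1), ("b", 1)],
}
dist, prev = dijkstra(adj, "d1")
print(path(prev, "d2"))  # cheapest correction chain: ['d1', 'a', 'd2']
```

The point of the quote above is that exactly this kind of heap-and-dictionary code is painful in raw OpenQASM but routine in Rust compiled to WASM.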

Enabling software that's powerful enough to handle different approaches to quantum error correction makes it much faster and more accessible for researchers to experiment than if they had to make custom hardware each time, or even reprogram an FPGA, especially for those with a background in theoretical physics (with the support of the quantum compiler team if necessary). "It's portable, and you can generate it from different languages, so that frees people up to pick whatever language and software that can compile to WASM that's good for their application."

"It's definitely a much easier time for them to get spun up trying to think about compiling Rust to WebAssembly, versus them having to try and program an FPGA or work with someone else and describe their algorithms. This really allows them to just go and think about how they're going to do it themselves," Campora said.

With researchers writing their own code to control a complex and expensive quantum system, protecting that system from potentially problematic code is important, and that's a key strength of WebAssembly, Campora noted. "We don't have to worry about the security concerns of people submitting relatively arbitrary code, because the sandbox enforces memory safety guarantees and basically isolates you from certain OS processes as well."

Developing quantum computing takes the expertise of multiple disciplines and both commercial and academic researchers, so there are the usual security questions around code from different sources. "One of the goals with this environment is that, because it's software, external researchers that we're collaborating with can write their algorithms for doing things like decoders for quantum error correction, and can easily tweak them in their programming language and resubmit and keep re-evaluating the data."

"A language like Portable C could do the computation, but then you lose all of those safety guarantees," Campora pointed out. "A lot of the compilation tooling is really good about letting you know that you're doing something that would require you to break out of the sandbox."

WebAssembly restricts what a potentially malicious or inexpert user could do to damage the system, but it also lets system owners offer more capabilities to users who need them via WASI, the WebAssembly System Interface, which standardizes access to features and services that aren't in the WASM sandbox.

"I like the way WASI can allow you, in a more fine-grained way, to opt into a few more things that would normally be considered breaking the sandbox. It gives you control. If somebody comes up to you with a reasonable request that would be useful, say, random number generation, we can look into adding WASI support so that we can unblock them, but by default, they're sandboxed away from OS things."

In the end, esoteric as the work is, the appeal of WebAssembly for quantum computing error correction is very much what makes it so useful in so many areas.

"The web part of the name is almost unfortunate in certain ways," Campora noted, "because it's really this generic virtual machine-stack machine-sandbox, so it can be used for a variety of domains. If you have those sandboxing needs, it's really a great target for you to get some safety guarantees and still allows people to submit code to it."

The rest is here:
How WASM (and Rust) Unlocks the Mysteries of Quantum Computing - The New Stack


Protect Digital Assets From The Threat Of Supercomputers: Q&A With Quantum Computing And Blockchain Security Experts – Yahoo Finance

CHEYENNE, WY / ACCESSWIRE / June 8, 2023 / Quantum computing - a technological evolution once thought to be decades away - is now right at our doorstep. While quantum computers may greatly benefit both scientific advancement and industrial application, they also represent a serious threat to the security of our digital infrastructure - particularly for blockchain-based technologies, such as cryptocurrencies. The destabilization of an increasingly crucial part of our global financial system could have major (and potentially devastating) effects.

The Quantum Resistance Corporation, Thursday, June 8, 2023, Press release picture

To shed light on this complex and evolving landscape, Dr. Pierre-Luc Dallaire-Demers ("PL"), Founder/CEO, and William Doyle ("Will"), Core Developer, of Pauli Group spoke about their work at the forefront of quantum-resistant blockchain technologies.

What are the biggest issues for crypto with the growth of quantum computing?

PL: The inherent security weakness of public keys is the biggest issue. Everyone has been led to believe they are almost impossible to break, but the reality is that a quantum computer running with about 1 million qubits - which we will see within the next 5-10 years - will break keys in a matter of hours. As an example, the first 1 million BTC mined in the Satoshi era explicitly list their public keys in the block explorer, so getting hacked would have catastrophic consequences for the economics of the blockchain and trigger a cascading collapse of trust across the whole web3 industry, since most blockchains use the same signature method.
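For context on why exposed public keys are the weak point: most later Bitcoin output types commit only to a *hash* of the public key, which Shor's algorithm cannot invert, so the key stays hidden until the coins are spent. The toy below is an illustration only (it uses plain SHA-256 for portability, whereas Bitcoin's real commitment is RIPEMD160 of SHA-256, and the key is a placeholder, not a real secp256k1 point).

```python
# Toy illustration (not real Bitcoin script): committing to a hash of the
# public key keeps the key itself off-chain, so a quantum attacker has
# nothing to run Shor's algorithm against until spend time. Satoshi-era
# pay-to-pubkey outputs published the key itself on-chain.
import hashlib

def key_commitment(pubkey: bytes) -> str:
    # Bitcoin really uses RIPEMD160(SHA256(key)); plain SHA-256 here for portability.
    return hashlib.sha256(pubkey).hexdigest()

pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder 33-byte "compressed key"
addr = key_commitment(pubkey)

print(pubkey.hex() in addr)  # False: the commitment reveals nothing Shor can use
```

Once a key is published, whether in an old pay-to-pubkey output or at spend time, this protection is gone, which is the scenario the interviewees are warning about.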

The National Institute of Standards and Technology (NIST) has been working on standardizing cryptographic signature methods that can resist quantum computers - but we need to act ASAP on implementing it on a mass scale.

How long is left before quantum computing is a serious threat or it's too late to act?

Will: I think quantum computing is a serious threat right now. This is because it's unclear exactly when quantum computers will be capable of breaking secp256k1 - and other modern cryptographic primitives, which is when the whole thing will unravel.


PL: The algorithm to break elliptic curve cryptography - which crypto uses - was actually present as far back as 2003, but nothing out there was powerful enough to process it - so when Bitcoin came around, everyone felt it was totally safe. It's not. We expect to see machines with millions of qubits by the decade's end, which will be able to perform this task with ease. At that point, non-quantum-secure blockchains will be totally at risk. As quantum computers grow in the 2030s, the rate of key breaking will skyrocket in parallel, rendering old blockchains completely obsolete in the 2040s. Fortunately, we still have a window to upgrade our infrastructures to resist quantum computers, but it's a challenging task that requires immediate action.

Why aren't large networks such as Ethereum doing more to protect their networks?

PL: Large networks are definitely aware of the implications of quantum computing for the security of their blockchains but they're not putting sustained efforts toward upgrading to quantum-resistant cryptography. No major network has a multi-year migration plan either. This absolutely needs to change if they care about the long-term viability of the existing networks.

The main issue is that we expected computers of this power to be over a hundred years away, but they've arrived far sooner than expected - and everyone is sort of scrambling around trying to work out what to do, or ignoring the issue entirely. But if we all get organized we can prepare.

What can crypto investors do now to protect themselves?

PL: The best strategy in the short term is for users to hedge their crypto investment with a post-quantum secure digital asset such as the Quantum Resistant Ledger and move their existing blockchain assets into a quantum-resistant wallet. Pauli Group uses our own Anchor Wallet, which has the strongest quantum-resistant cryptography available to permanently secure assets against the potential vulnerabilities posed by quantum computers.

Describe the professional journeys that led you both here.

PL: My journey with quantum computing began in 2006 when I pursued a Ph.D. in the field and a post-doc at Harvard, then worked as a quantum computer scientist at Xanadu. My interest in cryptocurrencies started in 2013, and over time as I saw quantum computers scaling at a rapid rate I recognized an impending and problematic intersection of these two fields. This led me to found Pauli Group in the summer of 2021.

Will: I have been in the blockchain space for years with a focus on blockchain security. During my time in the industry, I've witnessed a rapid rise in technology that threatens the very decentralized financial freedom that cryptocurrency was created for.

What problem was Pauli Group created to solve?

PL: Pauli Group was born out of an understanding that large-scale quantum computers are no longer a distant possibility but a rapidly approaching reality. The whiplash progress in this field means that these machines could be a reality by the end of this decade, and this poses a significant threat to the security of blockchains. Our aim is to monitor the progress of quantum computers and their ability to break blockchain cryptography and to develop solutions that protect users and their assets in the long run.

Will: Pauli Group was created to innovate at the overlapping space between quantum computing and blockchain technology. We firmly believe that the security, integrity and trust in blockchains must remain uncompromised even in the post-quantum era.

Learn more about the Pauli Group here: https://pauli.group/.

Featured photo by Towfiqu barbhuiya on Unsplash.

Contact: Mike Zeiger, 4d5a@theqrc.com

SOURCE: The Quantum Resistance Corporation

View source version on accesswire.com: https://www.accesswire.com/760041/Protect-Digital-Assets-From-The-Threat-Of-Supercomputers-QA-With-Quantum-Computing-And-Blockchain-Security-Experts


Researchers ‘split’ phonons in step toward new type of quantum computer – Phys.org


When we listen to our favorite song, what sounds like a continuous wave of music is actually transmitted as tiny packets of quantum particles called phonons.

The laws of quantum mechanics hold that quantum particles are fundamentally indivisible and therefore cannot be split, but researchers at the Pritzker School of Molecular Engineering (PME) at the University of Chicago are exploring what happens when you try to split a phonon.

In two experiments, the first of their kind, a team led by Prof. Andrew Cleland used a device called an acoustic beamsplitter to "split" phonons and thereby demonstrate their quantum properties. By showing that the beamsplitter can be used both to induce a special quantum superposition state for one phonon and to create interference between two phonons, the research team took the first critical steps toward creating a new kind of quantum computer.

The results are published in the journal Science and build on years of breakthrough work on phonons by the team at Pritzker Molecular Engineering.

In the experiments, researchers used phonons that have roughly a million times higher pitch than can be heard with the human ear. Previously, Cleland and his team figured out how to create and detect single phonons and were the first to entangle two phonons.

To demonstrate these phonons' quantum capabilities, the team, including Cleland's graduate student Hong Qiao, created a beamsplitter that can split a beam of sound in half, transmitting half and reflecting the other half back to its source (beamsplitters already exist for light and have been used to demonstrate the quantum capabilities of photons). The whole system, including two qubits to generate and detect phonons, operates at extremely low temperatures and uses individual surface acoustic wave phonons, which travel on the surface of a material, in this case lithium niobate.

However, quantum physics says a single phonon is indivisible. So when the team sent a single phonon to the beamsplitter, instead of splitting, it went into a quantum superposition, a state where the phonon is both reflected and transmitted at the same time. Observing (measuring) the phonon causes this quantum state to collapse into one of the two outputs.
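The superposition at a 50/50 beamsplitter can be sketched numerically. This is the generic textbook model (a 2x2 unitary acting on the two output-port amplitudes), not a model of the team's actual apparatus:

```python
# Generic 50/50 beamsplitter model: a 2x2 unitary acting on the amplitudes
# of the two output ports. A single phonon entering port A leaves in an
# equal superposition of "transmitted" and "reflected".
import math

s = 1 / math.sqrt(2)
BS = [[s, 1j * s],
      [1j * s, s]]  # standard symmetric beamsplitter unitary

state_in = [1, 0]   # one phonon in port A, vacuum in port B
state_out = [sum(BS[r][c] * state_in[c] for c in range(2)) for r in range(2)]
probs = [abs(a) ** 2 for a in state_out]

print(probs)  # ~[0.5, 0.5]: a measurement collapses it to one port or the other
```

The equal 50/50 probabilities are exactly the indivisibility the article describes: the phonon is never half-detected in each port; it is found whole in one or the other.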

The team found a way to maintain that superposition state by capturing the phonon in two qubits. A qubit is the basic unit of information in quantum computing. Only one qubit actually captures the phonon, but researchers cannot tell which qubit until post-measurement. In other words, the quantum superposition is transferred from the phonon to the two qubits. The researchers measured this two-qubit superposition, yielding "gold standard proof that the beamsplitter is creating a quantum entangled state," Cleland said.

In the second experiment, the team wanted to show an additional fundamental quantum effect that had first been demonstrated with photons in the 1980s. Now known as the Hong-Ou-Mandel effect, when two identical photons are sent from opposite directions into a beamsplitter at the same time, the superposed outputs interfere so that both photons are always found traveling together, in one or the other output directions.

Importantly, the same happened when the team did the experiment with phonons: the superposed output means that only one of the two detector qubits captures phonons, going one way but not the other. Though the qubits only have the ability to capture a single phonon at a time, not two, the qubit placed in the opposite direction never "hears" a phonon, giving proof that both phonons are going in the same direction. This phenomenon is called two-phonon interference.
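The cancellation behind this effect takes only a few lines of algebra to check. The sketch below (my notation, assuming the standard symmetric beamsplitter convention a -> (c + i d)/sqrt(2), b -> (i c + d)/sqrt(2); it is not code from the paper) expands the product of the two transformed creation operators and shows the mixed-output term vanish:

```python
# Hong-Ou-Mandel sketch: one particle enters each input port (a and b).
# Expanding the beamsplitter-transformed operator product, the amplitude
# for the particles to exit through *different* ports cancels exactly.
from collections import defaultdict

S = 2 ** -0.5
BS = {"a": {"c": S, "d": 1j * S},   # input port a -> outputs c, d
      "b": {"c": 1j * S, "d": S}}   # input port b -> outputs c, d

amps = defaultdict(complex)
for m1, x1 in BS["a"].items():
    for m2, x2 in BS["b"].items():
        amps[tuple(sorted((m1, m2)))] += x1 * x2  # collect by output pair

print(abs(amps[("c", "d")]))  # 0.0: no coincidences across the two outputs
print(abs(amps[("c", "c")]))  # ~0.5: operator coefficient i/2 for both in port c
```

The two paths to a (c, d) coincidence pick up amplitudes +1/2 and -1/2 and interfere destructively, which is why the "opposite" detector qubit never hears a phonon.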

Getting phonons into these quantum-entangled states is a much bigger leap than doing so with photons. The phonons used here, though indivisible, still require quadrillions of atoms working together in a quantum mechanical fashion. And if quantum mechanics rules physics at only the tiniest realm, it raises questions of where that realm ends and classical physics begins; this experiment further probes that transition.

"Those atoms all have to behave coherently together to support what quantum mechanics says they should do," Cleland said. "It's kind of amazing. The bizarre aspects of quantum mechanics are not limited by size."

The power of quantum computers lies in the "weirdness" of the quantum realm. By harnessing the strange quantum powers of superposition and entanglement, researchers hope to solve previously intractable problems. One approach to doing this is to use photons, in what is called a "linear optical quantum computer."

A linear mechanical quantum computer, which would use phonons instead of photons, could itself have the ability to compute new kinds of calculations. "The success of the two-phonon interference experiment is the final piece showing that phonons are equivalent to photons," Cleland said. "The outcome confirms we have the technology we need to build a linear mechanical quantum computer."

Unlike photon-based linear optical quantum computing, the University of Chicago platform directly integrates phonons with qubits. That means phonons could further be part of a hybrid quantum computer that combines the best of linear quantum computers with the power of qubit-based quantum computers.

The next step is to create a logic gate, an essential part of computing, using phonons, on which Cleland and his team are currently conducting research.

Other authors on the paper include É. Dumur, G. Andersson, H. Yan, M.-H. Chou, J. Grebel, C. R. Conner, Y. J. Joshi, J. M. Miller, R. G. Povey, and X. Wu.

More information: H. Qiao et al, Splitting phonons: Building a platform for linear mechanical quantum computing, Science (2023). DOI: 10.1126/science.adg8715. http://www.science.org/doi/10.1126/science.adg8715

Journal information: Science


New research could improve performance of artificial intelligence … – UMN News

A University of Minnesota Twin Cities-led team has developed a new superconducting diode, a key component in electronic devices, that could help scale up quantum computers for industry use and improve the performance of artificial intelligence systems.

The paper is published in Nature Communications, a peer-reviewed scientific journal that covers the natural sciences and engineering.

A diode allows current to flow one way but not the other in an electrical circuit. It essentially serves as half of a transistor which is the main element in computer chips. Diodes are typically made with semiconductors, substances with electrical properties that form the base for most electronics and computers, but researchers are interested in making them with superconductors, which additionally have the ability to transfer energy without losing any power along the way.

Compared to other superconducting diodes, the researchers device is more energy efficient, can process multiple electrical signals at a time, and contains a series of gates to control the flow of energy, a feature that has never before been integrated into a superconducting diode.

"We want to make computers more powerful, but there are some hard limits we are going to hit soon with our current materials and fabrication methods," said Vlad Pribiag, senior author of the paper and an associate professor in the University of Minnesota School of Physics and Astronomy. "We need new ways to develop computers, and one of the biggest challenges for increasing computing power right now is that they dissipate so much energy. So, we're thinking of ways that superconducting technologies might help with that."

The University of Minnesota researchers created the device using three Josephson junctions, which are made by sandwiching pieces of non-superconducting material between superconductors. In this case, the researchers connected the superconductors with layers of semiconductors. The device's unique design allows the researchers to use voltage to control its behavior.

Their device also has the ability to process multiple signal inputs, whereas typical diodes can only handle one input and one output. This feature could have applications in neuromorphic computing, a method of engineering electrical circuits to mimic the way neurons function in the brain to enhance the performance of artificial intelligence systems.

"The device we've made has close to the highest energy efficiency that has ever been shown, and for the first time, we've shown that you can add gates and apply electric fields to tune this effect," explained Mohit Gupta, first author of the paper and a Ph.D. student in the University of Minnesota School of Physics and Astronomy. "Other researchers have made superconducting devices before, but the materials they've used have been very difficult to fabricate. Our design uses materials that are more industry-friendly and deliver new functionalities."

The method the researchers used can, in principle, be used with any type of superconductor, making it more versatile and easier to use than other techniques in the field. Because of these qualities, their device is more compatible for industry applications and could help scale up the development of quantum computers for wider use.

"Right now, all the quantum computing machines out there are very basic relative to the needs of real-world applications," Pribiag said. "Scaling up is necessary in order to have a computer that's powerful enough to tackle useful, complex problems. A lot of people are researching algorithms and usage cases for computers or AI machines that could potentially outperform classical computers. Here, we're developing the hardware that could enable quantum computers to implement these algorithms. This shows the power of universities seeding these ideas that eventually make their way to industry and are integrated into practical machines."

This research was funded primarily by the United States Department of Energy with partial support from Microsoft Research and the National Science Foundation.


About the College of Science and Engineering

The University of Minnesota College of Science and Engineering brings together the University's programs in engineering, physical sciences, mathematics and computer science into one college. The college is ranked among the top academic programs in the country and includes 12 academic departments offering a wide range of degree programs at the baccalaureate, master's, and doctoral levels. Learn more at cse.umn.edu.


Improved qubits achieved with Schrödinger's cat – Newswise

Quantum computing uses the principles of quantum mechanics to encode and process data, meaning that it could one day solve computational problems that are intractable for current computers. While the latter work with bits, which represent either a 0 or a 1, quantum computers use quantum bits, or qubits, the fundamental units of quantum information.

"With applications ranging from drug discovery to optimization and simulations of complex biological systems and materials, quantum computing has the potential to reshape vast areas of science, industry, and society," says Professor Vincenzo Savona, director of the Center for Quantum Science and Engineering at EPFL.

Unlike classical bits, qubits can exist in a superposition of both 0 and 1 states at the same time. This allows quantum computers to explore multiple solutions simultaneously, which could make them significantly faster in certain computational tasks. However, quantum systems are delicate and susceptible to errors caused by interactions with their environment.

"Developing strategies to either protect qubits from errors, or to detect and correct errors once they have occurred, is crucial for enabling the development of large-scale, fault-tolerant quantum computers," says Savona. Together with EPFL physicists Luca Gravina and Fabrizio Minganti, he has made a significant breakthrough by proposing a critical Schrödinger cat code for advanced resilience to errors. The study introduces a novel encoding scheme that could revolutionize the reliability of quantum computers.

What is a critical Schrödinger cat code?

In 1935, physicist Erwin Schrödinger proposed a thought experiment as a critique of the prevailing understanding of quantum mechanics at the time, the Copenhagen interpretation. In Schrödinger's experiment, a cat is placed in a sealed box with a flask of poison and a radioactive source. If a single atom of the radioactive source decays, the radioactivity is detected by a Geiger counter, which then shatters the flask. The poison is released, killing the cat.

According to the Copenhagen view of quantum mechanics, if the atom is initially in a superposition, the cat will inherit the same state and find itself in a superposition of alive and dead. "This state represents exactly the notion of a quantum bit, realized at the macroscopic scale," says Savona.

In past years, scientists have drawn inspiration from Schrödinger's cat to build an encoding technique called the Schrödinger's cat code. Here, the 0 and 1 states of the qubit are encoded onto two opposite phases of an oscillating electromagnetic field in a resonant cavity, similarly to the dead and alive states of the cat.
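This encoding can be made concrete with a short calculation. The two logical states are coherent states of opposite phase, |alpha> and |-alpha>, and a textbook property (not specific to the EPFL scheme) is that their overlap equals exp(-2|alpha|^2), so the two logical states become effectively distinguishable already at modest field amplitudes:

```python
# Overlap of the two cat-code basis states |alpha> and |-alpha>, computed
# numerically in the photon-number (Fock) basis and checked against the
# closed form <alpha|-alpha> = exp(-2*|alpha|**2) for real alpha.
import math

def coherent(alpha, cutoff=40):
    """Fock-basis amplitudes of |alpha>, truncated at `cutoff` excitations."""
    norm = math.exp(-abs(alpha) ** 2 / 2)
    amps, term = [], 1.0
    for n in range(cutoff):
        amps.append(norm * term)      # c_n = e^(-|a|^2/2) * a^n / sqrt(n!)
        term *= alpha / math.sqrt(n + 1)
    return amps

alpha = 2.0
overlap = sum(p * m for p, m in zip(coherent(alpha), coherent(-alpha)))

print(round(overlap, 6), round(math.exp(-2 * alpha ** 2), 6))  # both 0.000335
```

With near-orthogonal basis states, phase-flip style errors become exponentially suppressed, which is the starting point the cat-code approaches described next build on.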

"Schrödinger cat codes have been realized in the past using two distinct approaches," explains Savona. "One leverages anharmonic effects in the cavity, the other relies on carefully engineered cavity losses. In our work, we bridged the two by operating in an intermediate regime, combining the best of both worlds." Although previously believed to be unfruitful, this hybrid regime results in enhanced error-suppression capabilities. The core idea is to operate close to the critical point of a phase transition, which is what the "critical" part of the critical cat code refers to.

The critical cat code has an additional advantage: it exhibits exceptional resistance to errors that result from random frequency shifts, which often pose significant challenges to operations involving multiple qubits. This solves a major problem and paves the way to the realization of devices with several mutually interacting qubits, the minimal requirement for building a quantum computer.

"We are taming the quantum cat," says Savona. "By operating in a hybrid regime, we have developed a system that surpasses its predecessors, which represents a significant leap forward for cat qubits and quantum computing as a whole." The study is a milestone on the road towards building better quantum computers, and showcases EPFL's dedication to advancing the field of quantum science and unlocking the true potential of quantum technologies.


A Quantum Leap In AI: IonQ Aims To Create Quantum Machine Learning Models At The Level Of General Human Intelligence – Forbes

Vacuum chamber package. (Image: IonQ)

Classical machine learning (ML) is a powerful subset of artificial intelligence. Machine learning has advanced from simple pattern recognition in the 1960s to today's advanced use of massive datasets for training and the generation of highly accurate predictions.

Meanwhile, between 2010 and 2020, global data usage increased from 1.2 trillion gigabytes to almost 60 trillion gigabytes. At some point, quantum systems will more easily handle the ongoing exponential growth in data compared to classical computers, which may struggle to keep up. Theoretically, at some point in the not-too-distant future, only quantum computers will be able to handle such massive scale and complexity. Applying this same insight to the realm of ML, it only makes sense that at some point, the real breakthroughs will be coming from quantum machine learning (QML) rather than classical approaches.


Although other quantum computing companies are exploring QML, there are several reasons I have focused on advanced QML research being done at IonQ ($IONQ).

One, IonQ's CEO, Peter Chapman, has a rich background in machine learning from his time working with Ray Kurzweil at Kurzweil Technologies. Chapman played a crucial role in developing a pioneering character recognition system that generated text characters from scanned images. Kurzweil Technologies eventually used that approach to build a comprehensive digital library for the blind and visually impaired.

Two, Chapman is optimistic about the future of QML. He believes that QML will eventually be as significant as the large language models used by OpenAI's ChatGPT and other generative AI systems. For that reason, QML is built into IonQ's long-term quantum product roadmap.

And three, IonQ collaborates with leading companies in the field of AI and machine learning, such as Amazon, Dell, Microsoft and NVIDIA. These partnerships combine IonQ's expertise in quantum technology with its partners' AI knowledge.

IonQ hardware and #AQ

IonQ's primary focus is not just on qubit quantity but more comprehensively on the quality of the qubits and how they operate as a system. This quality, also called qubit fidelity, is a critical differentiator for efficiently completing quantum computations, one that IonQ measures with an application-oriented benchmark that it calls algorithmic qubits, or #AQ.

#AQ is based on work pioneered by the Quantum Economic Development Consortium, an independent industry group that evaluates quantum computer utility in real-world settings. Roughly speaking, #AQ reflects the largest number of qubits a machine can use together in representative benchmark algorithms while still returning useful results.
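As a loose illustration only, not IonQ's or the QED-C's actual methodology, one can picture an #AQ-style benchmark as finding the largest circuit width n at which circuits of roughly n-squared two-qubit gates still succeed often enough; the gate fidelity and success threshold below are made-up numbers:

```python
# Toy model of an algorithmic-qubit-style benchmark. The constants are
# illustrative assumptions, not IonQ's published figures.
GATE_FIDELITY = 0.999      # assumed per-two-qubit-gate fidelity
SUCCESS_THRESHOLD = 0.37   # assumed minimum acceptable success rate

def estimated_success(n_qubits: int) -> float:
    """Estimate the success probability of a width-n benchmark circuit
    containing roughly n**2 two-qubit gates."""
    return GATE_FIDELITY ** (n_qubits ** 2)

def toy_aq() -> int:
    """Largest n whose estimated success stays above the threshold."""
    n = 1
    while estimated_success(n + 1) >= SUCCESS_THRESHOLD:
        n += 1
    return n

print(toy_aq())  # 31 under these assumed numbers
```

The point of the sketch is only that gate errors compound multiplicatively, so small fidelity gains translate into large jumps in usable circuit size.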

IonQ quantum processors

IonQ has created three trapped-ion quantum computers: IonQ Harmony, IonQ Aria and its latest model, a software-defined quantum computer called IonQ Forte.

There are two Arias online. According to Chapman, the second Aria machine was needed to handle increased customer demand and to improve the company's redundancy, capacity and order processing speed.

Additionally, IonQ is working hard to make the IonQ Forte commercially available.


IonQ Aria and IonQ Harmony are cloud accessible via Google, Amazon Braket, Microsoft Azure and IonQ Quantum Cloud. According to the company, cloud access for IonQ Forte will be announced later. Let's take a deeper look at the different quantum computers that IonQ has built:

Forte recently demonstrated a record 29 AQ, which puts it seven months ahead of IonQ's original AQ goal for 2023.

Note: IonQ's next major technical milestone is achieving 35 AQ. At the 35 AQ level, using classical hardware to simulate quantum algorithms can become very challenging and costly. At that point, IonQ believes it will be easier and less expensive for some customers to run models on actual quantum machines rather than attempting to simulate them classically.
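One intuition for why roughly 35 qubits is a plausible crossover point: a full classical state-vector simulation of n qubits must store 2^n complex amplitudes. A back-of-envelope sketch (assuming 16 bytes per double-precision complex amplitude; real simulators vary):

```python
# Memory for a full state vector of n qubits, assuming one 16-byte
# complex number (two float64s) per amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 35, 40):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.0f} GiB")
# 35 qubits already needs 512 GiB, beyond a typical single server.
```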

ML + QC = QML

Even though quantum computing is still carried out on mid-stage prototype hardware, it has the potential, perhaps within this decade, to solve problems far beyond the capability of classical supercomputers. Meanwhile, as quantum computing prototypes edge closer to becoming operationally sound, scaled versions of classical ML models are already being used in hundreds of thousands of applications across almost every industry. These range from personalized recommendations on shopping sites to critical healthcare diagnostics, such as analyzing X-rays and MRI scans to detect diseases more accurately than humans can.

QML is a still-developing field that uses quantum computers for challenging ML tasks, even though at this point quantum machines are less practical than classical computers. Combining ML and quantum computing (QC) to produce QML creates a technology that should soon be even more powerful than classical machine learning.

According to Peter Chapman, much of today's QML is created by converting classical machine learning algorithms into quantum algorithms. QML is not without challenges. It has many of the same problems as those associated with current quantum computers, the most prevalent being susceptibility to errors caused by environmental noise and decoherence due to prototype hardware limitations.

"Look at the past research we've done with Fidelity, GE, Hyundai and a few others," Chapman said. "All those projects started with regular machine learning algorithms before we converted them to quantum algorithms."

He explained, however, that IonQ's research has shown QML performance to be superior to many of its classical ML counterparts. "Our QML versions beat comparable classical ML versions," he said. "Sometimes the results show that the QML model did a better job capturing the signal in the data, or sometimes the number of iterations needed to go through the data was substantially less. And sometimes, as our most recent research indicates, the data needed for QML was about 8,000 times less than a classical model needs."

Why QML performs better than classical ML

QML uses superposition and entanglement, two principles of quantum mechanics, to develop new machine learning algorithms. Quantum superposition allows a qubit to be in multiple states simultaneously, while quantum entanglement correlates the states of many qubits so that they behave as a single system. This is in contrast to classical physics, where a bit can be in only one state at a time and where connectivity between bits is only possible by physical means. These quantum properties allow developers to create QML algorithms to solve problems that are intractable on classical computers.
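For readers who want to see these two properties concretely, here is a minimal state-vector sketch in plain Python (no quantum SDK assumed): a Hadamard gate puts one qubit into superposition, and a CNOT then entangles it with a second qubit, leaving only the outcomes 00 and 11 possible.

```python
from math import sqrt

# A two-qubit state is a list of four complex amplitudes,
# ordered |00>, |01>, |10>, |11> (first bit = qubit 0).
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

# Hadamard on qubit 0: equal superposition of qubit 0 being 0 or 1.
h = 1 / sqrt(2)
state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
         h * (state[0] - state[2]), h * (state[1] - state[3])]

# CNOT (qubit 0 controls qubit 1): flips qubit 1 when qubit 0 is 1,
# entangling the pair into the Bell state (|00> + |11>) / sqrt(2).
state = [state[0], state[1], state[3], state[2]]

probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]: only 00 and 11 are ever observed
```

After the CNOT, measuring either qubit instantly fixes the other's outcome, which is the correlation entanglement provides and classical bits cannot.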

It is important to note that QML is still in its early stages of development. It is not yet powerful enough to solve very large and very complex machine learning problems. Still, QML has the potential to revolutionize classical machine learning by training models faster, providing greater accuracy and opening the door for newer and even more powerful algorithms.

Quantum artificial intelligence

Quantum AI is even newer than QML. About a year ago, IonQ started exploring quantum AI. Its first research effort produced a paper on modeling human cognition that was published in the peer-reviewed scientific journal Entropy. The paper shows that human decision making can be tested on quantum computers. Since the 1960s, researchers have found that people don't always follow the rules of classical probability when making decisions. For instance, the sequence in which people are asked questions can influence their answers. Quantum probability helps clarify that oddity.

The research paper doesn't say that the brain explicitly operates using quantum mechanics. Instead, it applies the same mathematical structures to both fields, which adds to the intrigue of using quantum computers to simulate human cognition.

"We are excited by the potential for quantum to not only add power to machine learning but to artificial general intelligence or AGI as well," Chapman said. "AGI is the point at which AI is strong enough to accomplish any task that a human can. Some things are almost impossible to model on a classical computer but are possible on a quantum computer. And, I think that AGI will likely be where these kinds of problem sets will be done."

Wrapping up

Quantum Machine Learning is still an emerging field. It is the intersection where techniques from quantum information processing, machine learning and optimization come together to solve problems faster and more accurately than classical machine learning.

It is possible to use classical machine learning algorithms and convert them to quantum machine learning. IonQ has done this successfully several times. These QML models often outperform the original ML models.

QML offers several advantages over traditional machine learning thanks to quantum mechanics in the form of superposition and entanglement. QML can complement the growing trend of using ML models for many classification tasks, from image recognition to NLP.

Analyst's notes:

Here are a few IonQ QML-related research papers I found interesting:

January 2023 Quantum natural language processing (QNLP) is a subfield of machine learning that focuses on developing algorithms that can process and understand natural language (i.e., the languages spoken by humans). IonQ researchers demonstrated that statistically meaningful results can be obtained on real datasets, which are much harder to predict than the simpler artificial-language examples used previously in developing quantum NLP systems. Other approaches to quantum NLP are compared, partly with respect to contemporary issues including informal language, fluency and truthfulness.

January 2023 Research by IonQ focused on text classification with QNLP. This research demonstrated that an amplitude-encoded feature map combined with a quantum support vector machine can achieve 62% average accuracy predicting sentiment using a dataset of 50 actual movie reviews. The dataset is small, but considerably larger than those in previously reported quantum NLP results.

November 2022 This joint research by IonQ, the Fidelity Center for Applied Technology (FCAT) and Fidelity Investments focuses on generative quantum learning of joint probability distribution functions via GANs, QGANs and QCBMs, all of which use machine learning to learn from data and make predictions. The research demonstrates that a relationship between two or more variables can be represented by a quantum state of multiple particles. This is important because it shows that quantum computers can be used to model and understand complex relationships between variables.

November 2021 IonQ and Zapata Computing developed the first practical and experimental implementation of a hybrid quantum-classical QML algorithm that can generate high-resolution images of handwritten digits. The results outperformed comparable classical generative adversarial networks (GANs) trained on the same database. A GAN is a machine learning model in which two neural networks compete against each other to produce the most accurate prediction.

September 2021 Researchers from IonQ and FCAT developed a proof-of-concept QML model to analyze numerical relationships in the daily returns of Apple and Microsoft stock from 2010 to 2018. A daily return is the price of a stock at the day's close compared to its price at the previous day's close; the metric measures daily stock performance. The model demonstrated that quantum computers can be used to generate correlations that cannot be efficiently reproduced by classical means such as classical probability distributions.

December 2020 In a partnership between IonQ and QC Ware, classical data was loaded onto quantum states to allow efficient and robust QML applications. Machine learning achieved the same level of accuracy and ran faster than on classical computers. The project used QC Ware's Forge Data Loader technology to transform classical data into quantum states. The quantum algorithm, running on IonQ's hardware, performed at the same level as the classical algorithm, identifying the correct digits eight out of 10 times on average.

Paul Smith-Goodson is the Vice President and Principal Analyst covering AI and quantum for Moor Insights & Strategy. He is currently working on several personal research projects, one of which is a unique method of using machine learning and ionospheric data collected from a national network of HF transceivers for highly accurate prediction of real-time and future global propagation of HF radio signals.

For current information on these subjects, you can follow him on Twitter.

Moor Insights & Strategy provides or has provided paid services to technology companies like all research and tech industry analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 88, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Multefire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA, Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, 
Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign, TE Connectivity, TensTorrent, Tobii Technology, Teradata, T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro.


The startup using ARM's blueprint to give European quantum a … – Sifted

There's quite a bit of ground standing between humanity and a working quantum computer, and experts are worried that Europe could be left behind as the US and China pour billions into the technology.

But one startup from the small German city of Ulm believes it can help smaller players with less capital to compete against the big guns of Google and IBM, by addressing one of the key technical challenges in building a useful quantum computer.

QC Design, which is today coming out of stealth mode, is building technology to help quantum hardware companies fast-track a process known as error correction: the task of getting more qubits (the quantum equivalent of a bit, the unit of information in classical computing) working together to scale up the power of these machines.

Companies like Paris-based PASQAL, UK-based Quantum Motion and Finnish IQM are all building their own approaches to quantum computers, trying to increase the number of qubits in their systems.

But scaling up the number of qubits isn't the only challenge. To begin solving complex problems, like finding new drugs or useful materials, quantum computer builders also have to create something called logical qubits.

In simple terms, a logical qubit is a combination of hundreds of qubits working together to facilitate complex quantum calculations. This is difficult to achieve due to the very delicate nature of qubits, which generally have to be chilled to extremely low temperatures to keep them stable, making them expensive and difficult to operate.

This is where error correction comes in, as researchers build systems that counteract the natural faults that qubits make (a goal in quantum computing known as fault tolerance). But there's a big talent shortage in this field, and Europe is far behind in the race, according to QC Design founder Ish Dhand.
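The flavour of error correction can be sketched with its simplest classical ancestor, the three-bit repetition code: store one logical bit as three physical bits and recover it by majority vote, so any single flip is corrected. (Quantum codes are far more involved, not least because qubits cannot simply be copied, but the redundancy-plus-voting intuition carries over.) The error rate below is an illustrative number:

```python
import random

def encode(bit: int) -> list[int]:
    """One logical bit -> three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits: list[int], flip_prob: float) -> list[int]:
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote recovers the logical bit despite any single flip."""
    return int(sum(bits) >= 2)

random.seed(0)
trials = 10_000
raw = sum(noisy_channel([1], 0.05)[0] != 1 for _ in range(trials))
coded = sum(decode(noisy_channel(encode(1), 0.05)) != 1 for _ in range(trials))
print(raw / trials, coded / trials)  # the coded error rate is far lower
```

With a 5% physical error rate, the encoded bit fails only when two or more of its three copies flip, cutting the logical error rate by nearly an order of magnitude.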

"American companies were here first and lots of the top error correction researchers from Europe and elsewhere in the world work with these big North American companies," he says.


"If you look at the companies that have blueprints and roadmaps to fault tolerance, these are predominantly North American companies. Even with the biggest companies in Europe, which have really good physical qubits, the roadmaps to fault tolerance are not yet there."

QC Design hopes it can level the playing field for smaller companies that can't hire the right kind of talent, by licensing them the technology they need to help them scale up their logical qubits.

This will be a mix of hardware architecture and software design, and Dhand tells Sifted that there are already around 50 quantum computing companies globally which could benefit from QC Design's architecture licences.

The company hasn't signed any clients yet, but says it's opened early discussions with some quantum hardware builders.

The founder compares his company to an early-stage version of UK chip company ARM, which licences the IP for its chip architecture rather than making the chips itself.

"It's just like ARM licences out designs; the laptop that I'm talking from has an ARM-designed chip, but ARM doesn't make any chips of their own. It's the designs that we licence out," says Dhand.

Comparisons to ARM are, of course, a little premature: QC Design was founded in 2021 and employs 10 people. But the company did land pre-seed backing from deeptech investors Vsquared, Quantonation and Salvia last year, and could provide an important piece of the puzzle for companies trying to keep up with the best-funded players in quantum computing.


Researchers in Belgium move towards industrial production of qubits – ComputerWeekly.com

Shana Massar, engineer in the quantum computing programme at Imec, states: "The goal of quantum computers is not to replace our already known classical computers for performing our daily tasks. We need quantum computers for a very particular set of problems, problems that have a high degree of complexity."

One example of a use case for quantum computing is solving optimisation problems; another is simulating molecular systems. This can be done to gain a better understanding of materials science and can also be done to help discover new drugs.

In a quantum computer, information is manipulated in a fundamentally different way than in a classical computer. In a classical computer, the logic element is a bit, which can take on one of two states: zero or one. In a quantum computer, the logic element is a qubit, or quantum bit, which is defined as any coherent two-level system that can be initialized, manipulated, and read.

"If I look at the state of a bit, the state is either zero or one, and this leads to deterministic measurement, while the qubit has a superposition of states," says Massar.

"It is a linear combination of zero and one simultaneously. But after readout, it's either zero or one with a certain probability, and this leads to probabilistic measurement."

The quantum computer has another feature: entanglement. The classical bit states are independent of each other, so N bits store a single N-bit state at a time. But qubits can be entangled. They can be coupled, which means N qubits can process, in some sense, up to two to the power of N states. When we apply a logical operation to all those states at the same time, we get massive parallelisation and very high computational power.
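That "two to the power of N" scaling is easy to make concrete: describing a general N-qubit state classically takes 2^N complex amplitudes, one per basis state, while N classical bits hold just one N-bit value at a time. A quick illustration:

```python
# N classical bits hold one N-bit value at a time; a general N-qubit
# state is described by 2**N complex amplitudes, one per basis state.
for n in (1, 2, 10, 50):
    print(f"{n} (qu)bits: one classical value vs {2 ** n:,} amplitudes")
```

By 50 qubits, the amplitude count already exceeds a quadrillion, which is why even modest qubit counts overwhelm classical description.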

But none of these promises of quantum computing will ever come to fruition until somebody finds a way of producing reliable qubits in a repeatable manner. Qubits are currently implemented in labs in a customised fashion, but researchers at Imec would like to change that. They have started looking for ways to produce qubits on an industrial scale.

"To build a one-million qubit system, or just a meaningful quantum computer, you have to reduce the qubit variability and increase the production yield, while maintaining the fidelity and coherence," says Kristiaan De Greve, scientific director and programme director for Quantum Computing at Imec.

"The methods that some of the best research labs in the world have been using will likely not allow you to go all the way. We have a different approach and are trying to see if we can use existing tools from the semiconductor industry, where they have produced very complex circuits, with low variability and high yield."

There are several different approaches to implementing qubits: quantum optics, trapped ions, magnetic resonance, superconductors, nitrogen vacancies in diamond, and quantum dots. Researchers at Imec focus on two of these technologies: superconducting devices and semiconductor quantum dots.

One reason for these choices is that Imec sees those technologies as promising ways to make high-quality qubits. The second reason, and the biggest one for Imec, is that qubits in those two technologies can be fabricated in a way that is to first order compatible with complementary metal-oxide-semiconductor (CMOS) facilities, which Imec operates at very high quality.

One challenge with both approaches is that they operate at very low temperatures. For this reason, Imec is also doing research in cryo electronics, electronics that can work at very low temperatures.

Imec aims to build suitable and stable qubits and qubit arrays, along with the necessary electronic interfaces that allow programmers to set up the qubits to run a program and then read the results.

To discover optimal production techniques, Imec has set up a research process, where they try different materials, architectures and production techniques to produce qubits and then test the results to measure which techniques work best.

The first phase of its research is the design phase, where a team of experts run simulations to find the best design, given different materials and the required dimensions. When the design phase is completed, they move to the second phase, the fabrication phase, which begins by running other simulations to find optimal ways of creating the qubits, determining the most accurate process flow and the best settings and recipes.

Imec then processes its samples in the fab, closely monitoring the different processing steps using inline characterisation. When the fabrication of the samples is successful, they move to the last phase: cryo characterisation, or characterisation at low temperature.

In the end, they wind up with a wafer full of dies, sub-dies and chips, which they mount on a sample holder and place in a refrigerator for measurements at very low temperature. The temperatures go down to just a few thousandths of a kelvin, which is much colder than outer space. Using the cryo measurements, Imec researchers extract qubit performance and characteristics and assess how well a given design and fabrication process works.

"We are currently focusing our research on the fabrication of devices, and we are investigating different gate stack materials and patterning technology," says Massar. "We are also investigating different substrate materials and formation recipes. And we look at the overall thermal budget of our processes and the consequences it has on qubit quality."

"At the same time, we're working on the qubit control and design. We're improving the design of our devices, the controlling devices of the qubit and the quality of the measurement setup. As an example, over the past few months, we have worked on decreasing the electromagnetic noise in our measurement setup. This leads to a better quality on the qubit read."

"At the other end, we're also looking at the characterisation setup quality. We want to improve the qubit read and also improve our setup in terms of both the quantity of measurements and the quality of each measurement."

Imec has made big progress. Last year, it demonstrated a fab-compatible process to manufacture high-coherence superconducting qubits and is now transferring the process from the lab to the fab. By doing this, it hopes to open new possibilities for manufacturing fab qubits with high coherence and low variability.

Who knows? Maybe one day this will lead to a one-million-qubit quantum computer.


Chinese Scientists Achieve Breakthrough in Quantum Computing … – CityLife

Scientists in China have made a significant stride in the field of quantum computing, as they announce that their device, Jiuzhang, can perform tasks commonly utilized in artificial intelligence a staggering 180 million times faster than the world's most powerful supercomputer.

The applications of their quantum computers problem-solving capabilities span various domains, including data mining, biological information, network analysis, and chemical modeling research, according to the researchers.

Led by Pan Jianwei, a physicist at the University of Science and Technology of China, often referred to as the country's "father of quantum", the team published their findings in the peer-reviewed journal Physical Review Letters last month.

During the experiment, the team employed Jiuzhang to tackle a complex problem that poses challenges for classical computers. The quantum computer utilized over 200,000 samples to solve the problem.

Remarkably, the researchers successfully implemented and accelerated two algorithms commonly employed in AI, namely random search and simulated annealing, marking the first time a quantum computer has achieved such a feat.

To put the speed into perspective, the fastest classical supercomputer in the world would require 700 seconds for each sample, which equates to nearly five years to process the same number of samples. However, Jiuzhang accomplished this task in less than a second.
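That comparison is easy to verify from the article's own figures: 200,000 samples at 700 seconds each on the classical machine:

```python
samples = 200_000
seconds_per_sample = 700

total_seconds = samples * seconds_per_sample  # 140 million seconds
years = total_seconds / (365.25 * 24 * 3600)  # seconds in a year
print(f"{years:.1f} years")  # about 4.4 years, i.e. "nearly five years"
```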

An article published by Physics, a magazine from the American Physical Society reporting on papers from the Physical Review journals, highlighted the significance of the team's achievement. The editor noted that the result "extends the list of tasks for which today's noisy quantum computers offer an advantage over classical computers".

In the realm of traditional computing, a bit represents either zero or one as its fundamental unit of information. Quantum computing takes it a step further with qubits, which can represent zero, one, or both simultaneously, showcasing the peculiar nature of quantum mechanics.

Due to their ability to simultaneously represent all possibilities, quantum computers hold immense theoretical power and speed compared to the regular computers we rely on in our daily lives.

However, the subatomic particles at the core of this technology are delicate, short-lived, and vulnerable to errors caused by even the slightest disturbance from the surroundings. To mitigate disruption, most quantum computers operate in extremely cold and isolated environments.

Named after a 2,000-year-old Chinese mathematics text, Jiuzhang employs light as its physical medium for computation. Unlike other quantum computers, Jiuzhang does not necessitate operation in extremely low temperatures, providing greater stability and longer operating times.

As the research progresses, the quantum processor's advantage over classical algorithms optimized for solving graph problems remains an open question, according to the Physics article. Nonetheless, this breakthrough paves the way for further exploration of real-world applications using existing noisy intermediate-scale quantum computers.
