
O’Neill Receives IACR Test-of-Time Award – UMass News and Media Relations

Adam O'Neill, assistant professor in the Manning College of Information and Computer Sciences at UMass Amherst (CICS), is one of three authors who recently received the Test-of-Time Award from the International Association for Cryptologic Research (IACR) for their paper "Deterministic and Efficiently Searchable Encryption," presented at the IACR Crypto 2007 conference.

Co-authored by Mihir Bellare of University of California San Diego and Alexandra Boldyreva of Georgia Institute of Technology, who was then O'Neill's doctoral advisor, the award-winning paper has over 1000 citations, according to Google Scholar.

The paper has been lauded broadly for its lasting impact on cryptology; the IACR cited it for placing searchable encryption on a rigorous footing, leading to enormous interest in the field and its applications. The paper's key contribution was to define and achieve strong notions of privacy for deterministic encryption algorithms. It further showed that such algorithms are highly effective for encrypting fields in remote databases, allowing fast searching in such databases by "distinguished receivers" in a public-key setting. In public-key encryption schemes, a public key is used for encrypting data, and the data can only be decrypted by the corresponding private key. In a remote database setting, such schemes allow anyone to add encrypted data to the database, but only a distinguished receiver who holds the matching private key can retrieve and decrypt the data.

Deterministic encryption, which always produces the same encrypted text, or ciphertext, for the same inputs, allows fast data retrieval: encrypted fields can be stored and indexed in a standard data structure. The challenge, according to the authors, is that the inherent limitations of deterministic encryption can leave data vulnerable to untrusted users. Randomized encryption methods, which produce a different ciphertext each time, are more secure, but such ciphertexts are far slower to search, to the point of being unusable for large databases. In the paper, the authors struck a novel privacy-efficiency trade-off for the remote database setting.
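To see why determinism enables fast search, here is a toy sketch (not the paper's construction): HMAC stands in for a deterministic encryption function (a real scheme is also invertible, which HMAC is not, but the equality-search property is the same), and the server answers queries with an ordinary dictionary lookup over ciphertexts.

```python
import hmac
import hashlib

def det_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stand-in: HMAC is deterministic, so equal plaintexts always
    # map to equal "ciphertexts" -- the property that makes indexing work.
    return hmac.new(key, plaintext, hashlib.sha256).digest()

key = b"client-secret-key"

# The server stores an index from encrypted field -> record ids,
# without ever seeing any plaintext.
records = {1: b"alice", 2: b"bob", 3: b"alice"}
index = {}
for rid, name in records.items():
    index.setdefault(det_encrypt(key, name), []).append(rid)

# To search, the client encrypts the query under the same key; the
# server answers with a plain dictionary lookup -- no decryption needed.
matches = index.get(det_encrypt(key, b"alice"), [])
print(matches)  # -> [1, 3]
```

Because the index is an ordinary hash table keyed on ciphertexts, lookups stay fast even for very large databases, which is exactly the efficiency that randomized encryption gives up.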

The authors constructed two novel deterministic encryption schemes that are secure under the new definitions: Encrypt-with-Hash and RSA-DOAEP. The latter was the first example of a public-key cipher that keeps the length of the ciphertext equal to that of the plaintext, which, as the authors explained, is of great importance for reducing bandwidth cost and for securing legacy code. Additionally, they introduced the notion of efficiently searchable encryption schemes, which combine randomized encryption with deterministic "tags" that allow for fast searching.
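A rough sketch of the Encrypt-with-Hash idea, in a symmetric toy form (the paper's actual scheme is public-key, deriving its coins by hashing the public key together with the message): rather than using fresh random coins, the nonce is derived by hashing the key and the message, so encryption becomes deterministic yet remains invertible.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Expand (key, nonce) into n pseudorandom bytes via hashing in
    # counter mode. Toy construction for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_with_hash(key: bytes, msg: bytes) -> bytes:
    # Encrypt-with-Hash idea: derive the nonce from the message itself
    # instead of sampling it at random. Same message -> same ciphertext.
    nonce = hashlib.sha256(b"nonce-derivation" + key + msg).digest()[:16]
    body = bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))
    return nonce + body

def decrypt(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(body, keystream(key, nonce, len(body))))

k = b"k" * 16
c1 = encrypt_with_hash(k, b"search me")
c2 = encrypt_with_hash(k, b"search me")
assert c1 == c2                        # deterministic: usable as a search key
assert decrypt(k, c1) == b"search me"  # still invertible, unlike a bare hash
```

The same derived value can also serve as the deterministic "tag" in an efficiently searchable encryption scheme, attached alongside an otherwise randomized ciphertext.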

Each year, the IACR Test-of-Time Award offers special recognition for lasting contributions of papers presented at IACR conferences at least fifteen years prior.

O'Neill joined CICS as a faculty member in 2019. Previously, he served at Georgetown University as an assistant professor in the computer science department. He also held a visiting faculty position in the cryptographic technology group at the National Institute of Standards and Technology and two postdoctoral appointments at Boston University and the University of Texas at Austin. He earned his doctorate in computer science from Georgia Institute of Technology under Professor Alexandra Boldyreva and a bachelor's degree in computer science and mathematics from the University of California San Diego.

Original post:
O'Neill Receives IACR Test-of-Time Award - UMass News and Media Relations

Read More..

Are relay services the key to online privacy? – Ericsson

In the past few years, several steps have been taken to strengthen user security and privacy on the Internet, focusing on techniques that prevent unintended third parties from accessing and manipulating communication. The wide-scale deployment of HTTPS to provide encrypted communication for web content access is one example; efforts to deploy new protocols that encrypt associated data, like domain name system (DNS) requests, are another. These techniques prevent passive observers from knowing the exact data that is exchanged, but observers can still deduce which parties communicate and sometimes even which services have been requested.

More recently, focus has centered on various mechanisms that enhance user privacy by concealing even more data and metadata from Internet communications. These mechanisms often rely on a relay service that intermediates the communication between two hosts. While at first glance it seems counterintuitive to involve yet another party in the communication process to protect user privacy, the logic behind using relays is that in the end, each party can only access a limited set of information, and therefore each party knows less than before.

These relay services specifically aim to separate two important pieces of information: knowledge of the identity of the person accessing a service is separated from knowledge about the service being accessed. This requires two levels of encryption.

Simple VPN services only add one level of encryption on the link between the client and the VPN server: the VPN server can still see which services are being accessed, and by whom. In the Internet Engineering Task Force (IETF), the leading standardization body for Internet technologies, most activities aimed at this kind of information separation are signalled by the term "Oblivious", but there's also MASQUE (Multiplexed Application Substrate over QUIC Encryption) and a new, to-be-chartered group called PPM (Privacy Preserving Measurements) that apply this communication pattern to different use cases.

MASQUE is a new IETF working group that extends HTTP CONNECT to initiate and manage the use of QUIC-based relays. Catch up on the background of QUIC and MASQUE in our earlier blog posts. MASQUE is a tunnel-based approach similar to VPN services: it sets up an encrypted connection to a relay, a so-called MASQUE server, using QUIC as the tunnel transport, then forwards traffic through that tunnel to a target server or another relay (see Figure 1).

Figure 1: Setup with two MASQUE proxies, e.g. hosted by the Mobile Network Operator (MNO) and the Content Distribution Network (CDN).

At present, the most recent example of such services being deployed is Apple's new Private Relay service, which is in beta testing for iCloud+ users. When activated, Private Relay uses the MASQUE and Oblivious DNS protocols for web traffic emitted by Safari and for all DNS traffic.

In both cases, user traffic, whether for a web server or a DNS resolver, is encrypted and first sent to a relay service that knows the user's identity and IP address but cannot see the request itself. This relay then forwards the encrypted traffic to another relay, which can determine where to forward the end-to-end encrypted service request but doesn't know the user's identity or IP address.
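The two-hop split can be sketched with a toy simulation (the "sealing" below is purely symbolic, not real cryptography; the ingress/egress names follow the article): each hop can open exactly one layer, so the ingress sees only the client's IP and the egress sees only the target name.

```python
# Toy model of a two-hop relay: nested "sealed" tuples stand in for
# layers of encryption, each labeled with its intended reader.

def seal(reader: str, payload) -> tuple:
    return ("sealed-for:" + reader, payload)

def open_layer(reader: str, sealed: tuple):
    label, payload = sealed
    assert label == "sealed-for:" + reader, "wrong recipient"
    return payload

# The client wraps its request twice: inner layer for the target server,
# outer layer for the egress relay (which is allowed to learn the target).
request = seal("egress", ("example.com", seal("target", b"GET /")))

# Ingress relay: knows the client's IP address, but the request blob
# is opaque to it -- it cannot open the "egress" layer.
ingress_view = {"client_ip": "203.0.113.7", "blob": request}

# Egress relay: opens the outer layer and learns the target name,
# but the client's IP address was never forwarded to it.
target, inner = open_layer("egress", ingress_view["blob"])
egress_view = {"target": target, "blob": inner}

print(ingress_view["client_ip"], "->", egress_view["target"])
```

Neither party alone can link "who" to "what": that correlation would require the ingress and egress operators to collude, which is why deployments like Private Relay place the two hops with different providers.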

Figure 2: Apple's Private Relay setup - Only the ingress proxy can see the client's IP address and only the egress proxy knows the target server name.

Figure 3: Use of ingress and egress proxies to provide the Oblivious DNS over HTTP (DoH) service

While this approach seems straightforward, it's a substantial change in communication patterns. Instead of sending traffic directly to a service, essentially all user traffic is routed to the same small set of intermediaries, and traffic received at the target server comes from a more limited set of entities as well. This significantly changes how traffic flows and is observed in the networks, as well as for the application service providers. Consequently, deploying these kinds of services on a large scale will have an impact on how we manage our networks.

Without relay services, traffic effectively broadcasts cleartext information to potentially unknown third parties that may listen on-path, especially when transport information isn't otherwise encrypted. Tunnels between selected relays can empower users to control the data and metadata that could reveal privacy-sensitive information; for example, usage patterns and details of services accessed can reveal to anybody passively listening on the network path whether a user is at home. The challenge is to ensure that the right data, and only the right data, is shared with the right entity. Ideally, users only share their identity with an entity they already have a trusted relationship with, like the access network provider or a service provider. A setup like this is more complicated than what we have today and may make some currently deployed network management techniques more complex, but there are also opportunities for better collaboration between the network and, say, the application at the endpoint.

Explicit trust relations provide the basis for more targeted information exchanges with intermediaries. Today, many network functions, for example for performance optimization or zero-rating, passively listen and try to derive useful data from what is revealed. However, the details of what information is available could change at any time due to the ongoing deployment of encryption techniques, general protocol evolution, new services like Private Relay, or simply a change in application behavior. Requesting explicit information from a relay service, or from the endpoint, instead provides better guarantees that the information is correct and useful, and won't suddenly break if traffic changes due to the deployment of encryption or new end-to-end services. The latter point might even be the more important one.

The network management techniques deployed at present often rely on information that is exposed by most traffic, but without any guarantee that the information is accurate. For example, zero-rating today often relies on the Server Name Indication (SNI) in TLS. However, a simple technique like domain fronting can circumvent, and thereby cheat, the respective network control functions. In the future, the SNI will likely be encrypted and therefore won't be usable by a passive observer anymore.
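A toy illustration of why SNI-based control is cheatable (all names here are hypothetical placeholders): the passive filter keys on the cleartext SNI, while the HTTP Host header, hidden inside the encrypted channel, names the real destination.

```python
# Domain fronting in miniature: a zero-rating function that trusts the
# cleartext TLS SNI, versus the encrypted Host header it never sees.

ALLOWED = {"cdn.example"}  # hypothetical zero-rated domain

def zero_rate(sni: str) -> bool:
    # Network function classifying traffic purely on the SNI.
    return sni in ALLOWED

# Honest connection: SNI and Host header agree.
honest = {"sni": "cdn.example", "host": "cdn.example"}

# Fronted connection: SNI names the allowed CDN, but the Host header,
# carried inside the encrypted TLS payload, names another service.
fronted = {"sni": "cdn.example", "host": "video.other.example"}

print(zero_rate(honest["sni"]), zero_rate(fronted["sni"]))  # True True
```

Both connections pass the filter even though the second actually reaches a different service, which is why the article argues for explicit information exchange with trusted relays rather than inference from passively observed fields.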

Similar challenges exist for techniques such as TCP optimizers that provide protocol-specific in-network performance enhancements for unencrypted traffic. These techniques are used today because they can be deployed by the network without collaboration or coordination with other entities, and as such provide a relatively straightforward approach to improving performance under challenging network conditions. However, they often rely on cleartext information from the protocol (for example, the TCP header) or even unencrypted application layer data. This kind of traffic interception or traffic manipulation has caused protocol ossification in the past and therefore can hinder deployment of new protocol features. As the portion of encrypted traffic increases further (HTTPS and QUIC traffic), these techniques become less applicable. Instead, explicit collaboration between one or more endpoints and an in-network relay to exchange information avoids both protocol ossification and ambiguity in the information provided.

A further example is parental control, which relies on DNS filtering today and therefore is also one of the functions that service providers indicated would be affected when Private Relay is used. The example perfectly illustrates the main problem of this technique: it breaks easily if, for example, another DNS server is used, or information simply becomes better protected. Using explicit relays instead of a DNS-based solution not only makes this service less fault-prone but also provides an opportunity for improvement by involving the content provider, or a relay acting on its behalf, which can supply much better input to the content control decision itself.

Finally, the cases described above show there's also opportunity in this change. While adding relays seems technically more complex, business relations can become simpler: whenever collaboration with a network service provider or content provider is needed, it can be proxied by the relay hosting provider, leaving a potentially much smaller set of entities to cooperate with. This is especially a chance for mobile network operators to establish new, well-defined business relationships, and hopefully more easily.

The deployment of relay-based services will indeed change the communication pattern and traffic flows on the Internet. Instead of direct communication between two parties that is observable by all on-path elements, intermediates will be involved, and only limited information will be distributed to an explicitly selected set of trusted parties. However, given the importance of the Internet, more user control of privacy is long overdue and explicit collaboration between these parties could be the key for better in-network support of new and emerging services. Now is the right time to focus our work on these new communication patterns and make the best use of the emerging technologies for network collaboration!

Read more about Ericsson's network security solutions

Find out more about our network services, fit for today's shifting needs

Learn about the benefits of network security automation

Listen: Why 5G is the most secure platform, with our Chief Product Security Officer

Follow this link:
Are relay services the key to online privacy? - Ericsson

Read More..

The role of ‘God’ in the ‘Matrix’ – Analytics India Magazine

"We are survival machines, robot vehicles blindly programmed to preserve the selfish molecules known as genes. This is a truth which still fills me with astonishment." (Richard Dawkins, The Selfish Gene)

Ancient Greeks imagined their gods to be capable of building robots. Hundreds of years have passed since the collapse of the greatest civilization, yet the human pursuit of developing something in their own image continues unabated. Thanks to the advances in AI, we now have the means to create human-like robots.

In his book, Human Natures: Genes, Cultures, and the Human Prospect, Paul Ehrlich said the concept of religion first appeared when humans developed brains large enough for abstract thought. Artificial intelligence, on the other hand, is brand new but pervasive.

Science and religion rarely see eye to eye, except for occasional outliers like the artificial intelligence church. Even though the church is closed now, it does pose a critical question: how will religion react to a sentient machine with free will?

Interestingly, some cultures are more open to technology than others. For example, a 400-year-old Buddhist temple in Kyoto, Japan, called Kodaiji caught the public imagination last year when it announced a new clergy member: a robot priest that performs sermons. Called Mindar, the USD 1 million robot was designed to look like Kannon, the Buddhist deity of mercy. The idea behind the robot priest was to rekindle people's faith. Meanwhile, SoftBank's humanoid robot Pepper is available for hire as a Buddhist priest for funerals.

At the World Economic Forum in Davos in 2018, Pope Francis said AI, robotics, and other innovations should be used to serve humanity and protect our common home.

These are examples of religion leveraging modern-day technology. But, what happens when AI hits singularity, or we achieve AGI? What happens when we create something in our own image?

What happens when an AI robot with the same intellectual ability as a human makes its own decisions? Should this machine be considered a human? Does it have a soul?

Religion argues that God has a plan for everybody. Religion also tells people how they should lead their life. So, where does this AI robot fit in? Herein lies the rub: Technology has the power to shake the foundations of religion.

In his 1950 paper "Computing Machinery and Intelligence," Alan Turing, the founding father of AI, summarized the theological objection to machine intelligence, an objection he went on to rebut: "Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think." Plato, for his part, believed that the soul was both the source of life and the mind.

Different faiths react differently to technology. For example, last year, a top religious body in Indonesia forbade cryptocurrencies under Islamic law. Also, it can be argued that faith is very personal, so individuals will respond differently to AI.

AGI is the north star of companies like OpenAI, DeepMind and AI2. While OpenAI's mission is to be the first to build a machine with human-like reasoning abilities, DeepMind's motto is to "solve intelligence."

DeepMind's AlphaGo is one of the biggest success stories in AI. In a six-day challenge in 2016, the computer programme defeated the world's greatest Go player, Lee Sedol. DeepMind's latest model, Gato, is a multi-modal, multi-task, multi-embodiment generalist agent. Google's 2021 model, GLaM, can perform tasks like open-domain question answering, common-sense reasoning, in-context reading comprehension, the SuperGLUE tasks and natural language inference.

OpenAI's DALL·E 2 blew minds just a few months ago with imaginative renderings based on text inputs. Yet all these achievements pale in comparison with the intelligence of a human child.

However, people from the AI/ML community believe that AGI is achievable. Recently, Elon Musk, who has invested a lot in AI over the years, tweeted that he would be surprised if we do not achieve AGI by 2029.

Meanwhile, Gary Marcus, a popular critic of deep learning and AGI, said in an interview that current AI is illiterate. "It can fake its way through, but it doesn't understand what it reads. So the idea that all of those things will change on one day, and on that magical day machines will be smarter than people, is a gross oversimplification," he said.

When Anthony Levandowski established the first church of artificial intelligence, called Way of the Future, it raised a few eyebrows.

If we achieve AGI in the coming years, it will change many things. When technology becomes far superior, and these artificial beings can do things beyond a human being, there are chances that people will associate them with a higher power. For example, AI in its current form is already aiding scientists in drug discovery. Humans are always on the lookout for God, even if on a personal level.

But one of the worst possible outcomes would be that AI emerges as a polarising factor. Hence, it warrants a greater discussion in this context. There is also an increasing need for the larger participation of religious groups in deliberating the ethical implications of AI development.

See the original post:
The role of 'God' in the 'Matrix' - Analytics India Magazine

Read More..

How To Slip Into A Deep Meditation Every Night Using "Conscious Sleep" – mindbodygreen.com

Jacob Teitelbaum, M.D., author of From Fatigued to Fantastic, defines conscious sleep as "the ability to be aware of the self, but not of our body or surroundings, during different stages of sleep." Western scientific studies have focused predominantly on this state during REM sleep and on how a person can tap into their consciousness and experience lucid dreaming, Teitelbaum tells mbg.

However, conscious sleep is possible in non-REM sleep as well. In fact, in Eastern meditation traditions, conscious sleep is taught as a way to maintain self-awareness, but without being aware of the body or environment, during deep non-dream sleep, Teitelbaum explains over email.

According to a review published in the journal Progress in Brain Research, the concept of conscious sleep was highlighted by Elmer and Alyce Green of the Menninger Foundation in Topeka, Kansas. The couple teamed up with Swami Rama, the Indian master of yoga meditation, to further explore how a person could be in their deepest, non-REM sleep and still have a sharp awareness of their surroundings.

From the yogi's perspective, conscious sleep was (and still is) considered to be a form of deep meditation that teaches those who practice how to sustain their meditative state, regardless of what's happening in the world around them.

See the rest here:
How To Slip Into A Deep Meditation Every Night Using "Conscious Sleep" - mindbodygreen.com

Read More..

Google and France’s Engie Team Up to Accelerate Wind Power – ITPro Today

(Bloomberg) -- French utility Engie SA will begin using an experimental technology from Google that aims to boost efficiency and power from wind farms, the companies announced on Wednesday.

Google is selling the service through its cloud division, which is trying to lure clients with tools for managing energy usage and reducing emissions. In 2019, Google said it worked with DeepMind, a sister company of parent Alphabet Inc., to make artificial intelligence software that could predict wind power output thirty-six hours in advance. That would let energy providers schedule inputs into energy grids ahead of time with more accuracy, countering some of the unpredictability of wind generation. Early tests on Google's data centers improved the value of wind energy by 20 percent, according to Google.

Related: From Energy Star to DEEP: Making Data Centers More Efficient

For utility customers, Google's AI service offers forecasts that can sharpen their decisions when they buy and sell in energy markets, said Larry Cochrane, director of global energy solutions for Google Cloud. "The best way to think about it is as a trading recommendations tool," he said.

Engie will be the first customer to use Google's feature, starting with the utility's wind portfolio in Germany. If the pilot program is successful, the companies plan to expand across Europe, Cochrane said. The companies did not share financial terms. According to Cochrane, Google may soon offer similar forecasting services for other renewable markets like solar power and storage.

Related: Google Is Applying AI to Crack Next-Gen Geothermal Energy for Data Centers

The French utility plans to more than double its renewable power generation to 80 gigawatts by 2030, despite a recent uptick in prices for solar panels and wind turbines. Previously, Engie has publicized its work with Amazon Web Services, a Google cloud rival.

See original here:
Google and France's Engie Team Up to Accelerate Wind Power - ITPro Today

Read More..

Can Stimulating the Vagus Nerve Improve Mental Health? – The New York Times

The device may be especially helpful for those with bipolar depression because so few treatments exist for them, said Dr. Scott Aaronson, one of the senior psychiatrists involved in the clinical trial and the chief science officer of the Institute for Advanced Diagnostics and Therapeutics, a center within the Sheppard Pratt psychiatric hospital that aims to help people who have not improved with conventional treatments and medications.

"In general, one of the problems with treating depression is that we've got a lot of medications that pretty much do the same thing," Dr. Aaronson said. "And when patients do not respond to those medications, we don't have a lot of novel stuff."

Implanted vagus nerve stimulation isn't currently accessible for most people, however, because insurers have so far declined to pay for the procedure, with the exception of Medicare recipients participating in the latest clinical trial.

Dr. Tracey's research, which uses internal vagus nerve stimulation to treat inflammation, may also have applications for psychiatric disorders like PTSD, said Dr. Andrew H. Miller, the director of the Behavioral Immunology Program at Emory University, who studies how the brain and the immune system interact, and how those interactions can contribute to stress and depression.

PTSD is characterized by increased measures of inflammation in the blood, he said, which can influence circuits in the brain that are related to anxiety.

In one pilot study at Emory, for example, researchers electronically stimulated the neck skin near the vagus in 16 people, eight of whom received vagus nerve stimulation treatment and eight of whom received a sham treatment. The researchers found that the stimulation treatment reduced inflammatory responses to stress and was associated with a decrease in PTSD symptoms, indicating that such stimulation may be useful for some patients, including those with elevated inflammatory biomarkers.

Meanwhile, Dr. Porges and his colleagues at the University of Florida have patented a method to adjust vagus nerve electrical stimulation based on a patient's physiology. He is now working with the company Evren Technologies, where he is a shareholder, to develop an external medical device that uses this approach for patients with PTSD.

View original post here:
Can Stimulating the Vagus Nerve Improve Mental Health? - The New York Times

Read More..

Graphcore Thinks It Can Get An AI Piece Of The HPC Exascale Pie – The Next Platform

For the last few years, Graphcore has primarily been focused on slinging its IPU chips for training and inference systems of varying sizes, but that is changing now as the six-year-old British chip designer is joining the conversation about the convergence of AI and high-performance computing.

There are now 168 supercomputers in the Top500, and quite a few more outside that list, that use accelerators to power these increasingly converging workloads. Most of these systems use Nvidia's GPUs, but the appearance of seven new systems with AMD's fresh Instinct MI250X GPUs, which include Oak Ridge National Laboratory's Frontier, the United States' first exascale system, shows there is an appetite to consider alternative architectures when they can provide an advantage.

Graphcore hopes it can soon get a slice of this action with its massively parallel processors.

Phil Brown, a Cray veteran who returned to Graphcore in May as vice president of scaled systems after a four-month stint at chip startup NextSilicon, tells The Next Platform that the IPU maker has recently seen significant, sustained interest from organizations considering deploying Graphcore's specialized silicon for these converged AI and HPC needs, including large deployments.

"I think we're now at the point where there is going to be significant interest in doing large-scale deployments with the systems. The technology space and machine learning capability has evolved sufficiently that it can deliver significant value to the scientific organizations, and so I'm expecting those to follow quite rapidly in the future," he says.

Graphcore views three key opportunities around the convergence of HPC and AI: using the IPU's class-leading performance for 32-bit floating-point math to tackle HPC applications, training large foundation models like DeepMind's 280-billion-parameter language model, and using AI to complement and accelerate traditional HPC workloads to create a feedback loop of sorts.

It's the latter area that Brown says is likely the largest opportunity for Graphcore in HPC.

"This may be having surrogate models, elements of a traditional HPC simulation, replaced by a machine learning kernel parameterization in a weather forecast, for example," he says. The simulation components being replaced are computationally expensive, he added, so swapping them for machine learning models that are much cheaper but equally accurate can help reduce the overall cost of running simulations.
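The surrogate idea can be sketched in miniature (the "expensive kernel" below is a hypothetical analytic function standing in for a real parameterization, not anything Graphcore or a weather model actually uses): evaluate the costly function once offline on a grid, then answer later queries with cheap interpolation instead of re-running the kernel.

```python
import math
import bisect

def expensive_kernel(x: float) -> float:
    # Hypothetical stand-in for a costly physics parameterization.
    return math.sin(3 * x) * math.exp(-0.5 * x)

# "Train" the surrogate offline: tabulate the kernel on a fixed grid.
xs = [i / 200 for i in range(0, 2001)]  # grid on [0, 10], spacing 0.005
ys = [expensive_kernel(x) for x in xs]

def surrogate(x: float) -> float:
    # Cheap replacement: piecewise-linear interpolation over the table.
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Check the surrogate against the true kernel at off-grid points.
err = max(abs(surrogate(x) - expensive_kernel(x)) for x in (0.123, 4.56, 7.89))
print(f"max error at test points: {err:.2e}")
```

In practice the lookup table would be a trained neural network and the kernel a multi-dimensional simulation component, but the trade is the same: pay the full cost once during training, then amortize it across many cheap inference calls inside the forecast loop.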

These opportunities are based on exploratory work Graphcore has conducted with partners that has yielded promising results. For instance, the company says its IPUs were used to train a gravity wave drag model for weather forecasting five times faster than Nvidia's V100. In another example, Hewlett Packard Enterprise trained a deep learning model for protein folding using Graphcore's IPU-M2000 system and found that the second-generation IPU was around three times faster than Nvidia's A100.

To help move the conversation forward, several government labs are at different stages of trying out Graphcore's IPUs to see if the processors hold promise for large systems in the future.

Most recently, this includes the US Department of Energy's Sandia National Laboratories and Argonne National Laboratory. Both are adding Graphcore's Bow IPU Pod systems to their AI hardware testbeds, and Argonne is doing so after reporting impressive results with Graphcore's first-generation IPU systems. These Bow Pods will use the chip designer's recently announced Bow IPU, which makes use of Taiwan Semiconductor Manufacturing Co's wafer-on-wafer 3D stacking technology to provide more performance while using less power compared to its second-generation IPU.

Michael Papka, director of the Argonne Leadership Computing Facility, says the addition of Graphcore's Bow IPU Pod supports the testbed's goal of understanding the role AI accelerators can play in advancing data-driven discoveries, and how these systems can be combined with supercomputers to scale to extremely large and complex science problems.

The University of Edinburgh's EPCC supercomputing center is also installing a Bow IPU Pod system, which it will use for a broad range of use cases as part of the multi-industry-supporting Data Driven Innovation Programme funded by the governments of Scotland and the United Kingdom. EPCC has expressed interest in Graphcore's in-development Good computer, which the company has promised will deliver more than 10 exaflops of AI floating-point compute with next-generation IPUs.

If we were to travel 226 miles south of EPCC, we'd find support for Graphcore from England's Hartree Centre, which plans to access IPUs through cloud service provider G-Core Cloud to conduct research on fusion energy as part of a partnership with the UK Atomic Energy Authority.

While Graphcore is building its own exascale supercomputer for AI with the Good system, Brown says he believes the company's IPUs will be well-suited for other exascale supercomputers in the future, ranging from those that are very AI-focused to those running traditional simulation software that could benefit from performing such calculations at a lower precision on IPUs.

This means that, in Brown's mind, an exascale system could consist mostly of Graphcore IPUs, or the processors could be a component of a larger heterogeneous system, a view he says is based on feedback he has heard from people in the HPC community.

"The message that we've been getting from them is that they're very interested in exploring exascale system architectures that include components of different types that give them a good balance of overall capability for their systems, because they recognize that the workloads are going to become more heterogeneous in terms of the space, but also the performance and the value proposition you get from these heterogeneous processors is well worth the investment," he says.


Originally posted here:
Graphcore Thinks It Can Get An AI Piece Of The HPC Exascale Pie - The Next Platform

Read More..

Quantum computer manufacturer Pasqal strengthens position in North American market by opening offices in the US and Canada – EurekAlert

Paris, Boston, Sherbrooke, June 2, 2022 - Pasqal, the global leader in neutral atoms quantum computing, has named seasoned quantum technology executive, Catherine Lefebvre, to lead North American business development for the company. The company also announced office openings in Boston (U.S.) and in Sherbrooke (Canada).

As Vice President, Strategic Business Development North America for Pasqal, Lefebvre will be based in the Boston office to help drive the company's commercial and strategic partnership efforts and serve as the primary point of contact for U.S.-based clients and partners. Pasqal's local U.S. presence will allow the company to further capitalize on the tremendous market opportunity and to expand the adoption of Pasqal's quantum hardware and software solutions by U.S. industries including energy, healthcare, finance and automotive, while deepening Pasqal's relationships with U.S. customers.

Prior to joining Pasqal, Lefebvre served in multiple roles, including as U.S. and Canada Innovation Ambassador for quantum technology company M Squared; advisor in quantum technologies at the Quebec Ministry of Economy and Innovation; and as Science Liaison Officer for Element AI (acquired by ServiceNow), a global developer of AI solutions. Lefebvre has a background in research, with a Ph.D. in molecular physics and quantum chemistry as well as training in science diplomacy.

Pasqal's Canadian office is located in the Quantum Innovation Zone in Sherbrooke, which brings together researchers, startups and investors to cultivate the local quantum ecosystem and accelerate the development and adoption of quantum technologies. Known as Pasqal Canada, the new subsidiary will allow Pasqal to collaborate with both academic institutions and industry to grow its business in Canada and develop new commercial applications in such areas as smart cities, energy and materials science.

"Strengthening our coverage in North America opens up immense new opportunities to leverage our neutral atoms quantum computers for real-world benefit across new regions, markets and industries," said Georges-Olivier Reymond, CEO and founder of Pasqal. "Catherine is the ideal executive to drive this next phase of our growth, and we are honored to welcome her to the team."

Offering a broad range of full stack quantum solutions across different industries, Pasqals customers include Johnson & Johnson, LG, Airbus, BMW Group, EDF, Thales, MBDA and Credit Agricole CIB.

To learn more about Pasqal, please visit: www.pasqal.com.

About Pasqal
Pasqal builds quantum computers from ordered neutral atoms in 2D and 3D arrays with the goal of bringing a practical quantum advantage to its customers in addressing real-world problems, especially in quantum machine learning and predictive modeling. Pasqal was founded in 2019 by Georges-Olivier Reymond, Christophe Jurczak, Professor Dr. Alain Aspect, Dr. Antoine Browaeys and Dr. Thierry Lahaye. Based in Palaiseau and Massy, south of Paris, Pasqal has secured more than €40 million in financing, combining equity and non-dilutive funding from Quantonation, the Defense Innovation Fund, Runa Capital, BPI France, ENI and Daphni.

Website: www.pasqal.com
Twitter: @pasqalio
LinkedIn: www.linkedin.com/company/pasqal/

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Continued here:
Quantum computer manufacturer Pasqal strengthens position in North American market by opening offices in the US and Canada - EurekAlert


The Powerful New AI Hardware of the Future – CDOTrends

As an observer of artificial intelligence over the last few years at DSAITrends, it is fascinating to observe the dichotomy between the sheer amount of research and development in AI and its glacial real-world impact.

No doubt, we do have plenty of jaw-dropping developments, from AI-synthesized faces that are indistinguishable from real faces to AI models that can explain jokes and systems that create original, realistic images and art from text descriptions.

But this has not translated into business benefits for more than a handful of top tech firms. For the most part, businesses are still wrestling with their board about whether to implement AI or struggling to operationalize AI.

In the meantime, ethical quandaries are as yet unresolved, bias is rampant, and at least one regulator has warned banks about the use of AI.

One popular business quote, often attributed to futurist Roy Amara, comes to mind: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

So yes, while immediate AI gains seem lacking, the impact of AI in the long term might yet exceed our wildest expectations. And new, powerful AI hardware could well accelerate AI developments.

More powerful AI hardware

But why the fascination with more powerful hardware? In the groundbreaking "Scaling Laws for Neural Language Models" paper published in 2020, researchers from OpenAI concluded that larger AI models will continue to perform better and be much more sample-efficient than previously appreciated.

While the researchers cautioned that more work is needed to test whether the scaling holds, the current hypothesis is that more powerful AI hardware could train much larger models that will yield capabilities far beyond today's AI models.
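The headline result of that paper can be sketched as a simple power law: predicted test loss falls as the parameter count grows. The constants below are the paper's approximate fitted values for the parameter-count law, used here only as a back-of-envelope illustration.

```python
# Illustrative sketch of the parameter-count scaling law from
# "Scaling Laws for Neural Language Models" (Kaplan et al., 2020):
# test loss L falls as a power law of (non-embedding) parameter count N,
#     L(N) = (N_c / N) ** alpha_N
# N_c and alpha_N below are the paper's approximate fitted constants;
# treat this as a rough model, not a precise prediction.

def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,
                     alpha_n: float = 0.076) -> float:
    """Predicted cross-entropy loss (nats/token) for a model with n_params."""
    return (n_c / n_params) ** alpha_n

# Each 10x increase in parameters shaves a roughly constant factor off the loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"N = {n:.0e}: predicted loss ~ {loss_from_params(n):.2f}")
```

The key property, and the reason for the hardware race, is that the curve keeps declining smoothly: bigger models, trained on enough data, simply do better.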

Leading the charge on the hardware front would be data center-class GPUs from NVIDIA and AMD, as well as specialized AI processors from technology giants such as Google. For example:

Stepping outside the box

There are research fields that could impact the development of AI, too. For example, Intel's Loihi 2 is a second-generation experimental neuromorphic chip, announced last year. A neuromorphic processor mimics the human brain, using programmable components to simulate neurons.

According to its technical brief (PDF), the Loihi 2 has 128 cores and potentially more than a million digital neurons, thanks to its asynchronous design. The human brain has roughly 90 billion interconnected neurons, so there is still some way to go yet.
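To make the "simulated neuron" idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the class of simple spiking-neuron model that neuromorphic chips implement in hardware. The threshold and decay constants are illustrative, not Loihi's actual parameters.

```python
# Minimal leaky integrate-and-fire (LIF) spiking neuron.
# The membrane potential leaks a little each step, integrates the input
# current, and fires (then resets) when it crosses a threshold.
# Constants are illustrative only.

def simulate_lif(inputs, threshold=1.0, decay=0.9):
    """Return the list of time steps at which the neuron spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = v * decay + current   # leak, then integrate the input current
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(t)
            v = 0.0               # ...and reset the potential
    return spikes

# A steady input of 0.3 per step charges the neuron until it fires periodically.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```

Because such neurons are event-driven, hardware only does work when spikes occur, which is where neuromorphic chips get their energy-efficiency advantage over clocked matrix multiplication.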

Chips like the Loihi 2 have another advantage, though. As noted in a report on The Register, high-end AI systems such as DeepMind's AlphaGo require thousands of processing units running in parallel, with each consuming around 200 watts. That's a lot of power, and we haven't even factored in the ancillary systems or cooling equipment yet.

For its part, neuromorphic hardware promises between four and 16 times better energy efficiency than conventional hardware running comparable AI models.

Warp speed ahead with quantum computing

While the Loihi 2 is made of traditional transistors (2.3 billion of them), another race is underway to build a completely different type of machine: the quantum computer.

According to a report on AIMultiple, quantum computing can be used for the rapid training of machine learning models and to create optimized algorithms. Of course, it must be pointed out that quantum computers are far more complex to build due to the special materials and operating environments required to access the requisite quantum states.

Indeed, experts estimate that it could take another two decades to produce a general-purpose quantum computer, though working quantum computers with up to 127 qubits already exist.

In Southeast Asia, Singapore is stepping up its investments in quantum computing with new initiatives to boost talent development and provide access to the technology. This includes a foundry to develop the components and materials needed to build quantum computers.

Whatever the future brings for AI in the decades ahead, it will not be for lack of computing prowess.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [emailprotected].

Image credit: iStockphoto/jiefeng jiang

More:
The Powerful New AI Hardware of the Future - CDOTrends


Arqit Quantum Reports First Quarter Operating Loss of $14.3 Million – Parabolic Arc

Generated $12.3 million of revenue and other operating income in the first half of fiscal year 2022

LONDON (Arqit Quantum PR) Arqit Quantum Inc. (Nasdaq: ARQQ, ARQQW) (Arqit), a global leader in quantum encryption technology, today announced its operational and financial results for the first half of its fiscal year ending (FYE) 30 September 2022.

Recent Operational Highlights

Management Commentary

"Arqit has made significant progress in the commercialisation of our QuantumCloud™ product in the first six months of this fiscal year," said David Williams, Arqit's Founder, Chairman and Chief Executive Officer. "In the period we signed and fulfilled contracts with leading enterprises in our key identified market sectors, including Virgin Orbit and AUCloud. We also began the process of demonstrating our stronger, simpler encryption in demonstration projects with numerous customers. As a result of our commercial sales and other activities, we are pleased to deliver $12.3 million in revenue and other operating income for the six-month period.

"Our contract wins, other announced activity, such as our participation in the UK Ministry of Defence multi-domain integration project and UK 5G Open RAN, and prospective customer dialogues confirm our belief that telecoms, defence, financial institutions and IoT are the early adopter markets that understand the issues with today's public key infrastructure and the future threat posed by quantum computers.

"Our symmetric key agreement service is increasingly being recognised as a solution that meets the moment: it is computationally light, quantum safe, available in the instant needed as a single-use key or in unlimited group sizes, and does not require changes to the existing AES256 encryption infrastructure.

"We are pleased to have hired a significant cohort of new senior sales executives in the first half of the fiscal year to complement our team. All have deep relationships within their respective geographies and industry verticals. As our focus is on driving sales, top sales talent is a must.

"The confidence in Arqit is shared by our investors. Today, we also announced that shareholders holding 105.9 million of the 108.6 million shares currently subject to lock-up agreements that were due to expire in connection with this results announcement were approached to voluntarily extend their lock-up agreements until September. All approached shareholders agreed to participate, which is a strong statement of support.

"We will look to continue the momentum we have created in H1 as we drive toward our fiscal year end in September."

First Half of Fiscal Year 2022 Financial Highlights

Arqit commenced commercialisation and began generating revenue in the second half of the fiscal year ended 30 September 2021. Therefore, comparison of our results for the six months ended 31 March 2022 to prior periods may not be meaningful for all financial metrics.

1. Administrative expenses are equivalent to operating expenses.

2. Adjusted loss before tax is a non-IFRS measure. For a discussion of this measure, how it's calculated and a reconciliation to the most comparable measure calculated in accordance with IFRS, please see "Use of Non-IFRS Financial Measures" below.

About Arqit

Arqit supplies a unique quantum encryption platform-as-a-service which makes the communications links of any networked device secure against current and future forms of attack, even from a quantum computer. Arqit's product, QuantumCloud, enables any device to download a lightweight software agent, which can create encryption keys in partnership with any other device. The keys are computationally secure, optionally one-time use and zero trust. QuantumCloud can create limitless volumes of keys in limitless group sizes and can regulate the secure entrance and exit of a device in a group. The addressable market for QuantumCloud is every connected device.

Media relations enquiries:
Arqit: contactus@arqit.uk
FTI Consulting: scarqit@fticonsulting.com

Investor relations enquiries:
Arqit: investorrelations@arqit.uk
Gateway: arqit@gatewayir.com

Use of Non-IFRS Financial Measures

Arqit presents adjusted loss before tax, which is a financial measure not calculated in accordance with IFRS. Although Arqit's management uses this measure as an aid in monitoring Arqit's ongoing financial performance, investors should consider adjusted loss before tax in addition to, and not as a substitute for, or superior to, financial performance measures prepared in accordance with IFRS. Adjusted loss before tax is defined as loss before tax excluding the change in fair value of warrants, which is a non-cash expense. There are limitations associated with the use of non-IFRS financial measures, including that such measures may not be comparable to similarly titled measures used by other companies due to potential differences among calculation methodologies. There can be no assurance whether (i) items excluded from the non-IFRS financial measures will occur in the future, or (ii) there will be cash costs associated with items excluded from the non-IFRS financial measures. Arqit compensates for these limitations by using adjusted loss before tax as a supplement to IFRS loss before tax and by providing the reconciliation of adjusted loss before tax to IFRS loss before tax, the most comparable IFRS financial measure.

IFRS and Non-IFRS loss before tax

Arqit presents its consolidated statement of comprehensive income according to IFRS and in line with SEC guidance. Consequently, the changes in warrant values are included within that statement in arriving at profit before tax. The changes in warrant values are non-cash expenses. After this adjustment is made to Arqit's IFRS profit before tax of $58.0 million, Arqit's non-IFRS adjusted loss before tax is $14.4 million, as shown in the reconciliation table below.
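The arithmetic behind that adjustment follows directly from the definition above: adjusted loss before tax is IFRS profit before tax with the non-cash warrant fair-value movement stripped out. The sketch below works backwards from the two figures stated in this release to show the implied warrant gain; it is an illustration of the definition, not a reproduction of Arqit's published reconciliation table.

```python
# Back-of-envelope reconciliation of Arqit's non-IFRS "adjusted loss before
# tax" from the figures in this release (USD millions):
#   IFRS profit before tax:        +58.0
#   Adjusted loss before tax:      -14.4
# Since adjusted = IFRS profit before tax - change in fair value of warrants,
# the implied non-cash warrant fair-value gain is the difference.

ifrs_profit_before_tax = 58.0
adjusted_loss_before_tax = -14.4

warrant_fair_value_gain = ifrs_profit_before_tax - adjusted_loss_before_tax
print(f"Implied warrant fair-value gain: ${warrant_fair_value_gain:.1f}M")
```

In other words, a large non-cash gain on the warrant liability turned an operating-level loss into an IFRS profit, which is exactly why the company reports the adjusted figure alongside it.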

The change in fair value of warrants arises because IFRS requires our outstanding warrants to be carried at fair value within liabilities, with the change in value from one reporting date to the next being reflected against profit or loss in the period. It is non-cash and will cease when the warrants are exercised, redeemed or expire.

Other Accounting Information

As of March 31, 2022, we had $87.4 million of total liabilities, $55.6 million of which related to our outstanding warrants, which are classified as liabilities rather than equity according to IFRS and SEC guidance. The warrant liability amount reflected in our consolidated statement of financial position is calculated as the fair value of the warrants as of March 31, 2022. Our liabilities other than warrant liabilities were $31.8 million, and we had total assets of $143.2 million including cash of $82 million.

Arqit Quantum Inc.
Condensed Consolidated Statement of Comprehensive Income
For the period ended 31 March 2022

All of the Group's activities were derived from continuing operations during the above financial periods.

Arqit Quantum Inc.
Condensed Consolidated Statement of Financial Position
As at 31 March 2022

Arqit Quantum Inc.
Condensed Consolidated Statement of Cash Flows
For the period ended 31 March 2022

Caution About Forward-Looking Statements

This communication includes forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. All statements, other than statements of historical facts, may be forward-looking statements. These forward-looking statements are based on Arqits expectations and beliefs concerning future events and involve risks and uncertainties that may cause actual results to differ materially from current expectations. These factors are difficult to predict accurately and may be beyond Arqits control. Forward-looking statements in this communication or elsewhere speak only as of the date made. New uncertainties and risks arise from time to time, and it is impossible for Arqit to predict these events or how they may affect it. Except as required by law, Arqit does not have any duty to, and does not intend to, update or revise the forward-looking statements in this communication or elsewhere after the date this communication is issued. In light of these risks and uncertainties, investors should keep in mind that results, events or developments discussed in any forward-looking statement made in this communication may not occur. 
Uncertainties and risk factors that could affect Arqit's future performance and cause results to differ from the forward-looking statements in this release include, but are not limited to: (i) the outcome of any legal proceedings that may be instituted against Arqit related to the business combination, (ii) the ability to maintain the listing of Arqit's securities on a national securities exchange, (iii) changes in the competitive and regulated industries in which Arqit operates, variations in operating performance across competitors and changes in laws and regulations affecting Arqit's business, (iv) the ability to implement business plans, forecasts, and other expectations, and identify and realise additional opportunities, (v) the potential inability of Arqit to convert its pipeline into contracts or orders in backlog into revenue, (vi) the potential inability of Arqit to successfully deliver its operational technology which is still in development, (vii) the risk of interruption or failure of Arqit's information technology and communications system, (viii) the enforceability of Arqit's intellectual property, and (ix) other risks and uncertainties set forth in the sections entitled "Risk Factors" and "Cautionary Note Regarding Forward-Looking Statements" in Arqit's annual report on Form 20-F (the "Form 20-F"), filed with the U.S. Securities and Exchange Commission (the "SEC") on December 16, 2021, and in subsequent filings with the SEC. While the list of factors discussed above and in the Form 20-F and other SEC filings is considered representative, no such list should be considered to be a complete statement of all potential risks and uncertainties. Unlisted factors may present significant additional obstacles to the realisation of forward-looking statements.

Excerpt from:
Arqit Quantum Reports First Quarter Operating Loss of $14.3 Million – Parabolic Arc
