
It’s artificial intelligence to the rescue (and response and recovery) – GreenBiz

This article is adapted from GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays. Subscribe here.

As global losses rack up from climate change-exacerbated natural disasters, from voracious wildfires to ferocious hurricanes, communities are scrambling to prepare (and to hedge their losses).

While information technologies such as machine learning and predictive analytics may not be able to prevent these catastrophes outright, they could help communities be better prepared to handle the aftermath. That's the spirit behind a unique collaboration between Chicago-based technology services company Exigent and the Schulich School of Business at York University in Toronto, one that aims to create a more cost-effective and efficient marketplace for disaster relief and emergency response services.

The idea is to help state and provincial governments collectively build a more centralized inventory of relief supplies and other humanitarian items based on the data from a particular wildfire or hurricane season.

Rather than buying supplies locally based on the predictions (something many small towns in fire-prone areas can ill afford), a community would buy "options" for these services in the marketplace being developed through this partnership. If the town ultimately doesn't need the items, it could "trade" them to another region that does have a need, either in the same state or another location. In effect, towns across a state, region or even country could arrange for protection without having to make that investment outright.
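
The article gives no technical detail on how such a marketplace would be implemented, so the following is only an illustrative sketch: every class and name below (ReliefOption, Marketplace, the towns, the quantities) is hypothetical, meant just to show how tradeable supply options might be represented.

```python
from dataclasses import dataclass, field

@dataclass
class ReliefOption:
    """A pre-purchased right to draw relief supplies from a shared inventory.
    Names and fields are illustrative; the article does not specify a data model."""
    item: str          # e.g. "first-aid kit" or "temporary housing unit"
    quantity: int
    holder: str        # town or region currently holding the option

@dataclass
class Marketplace:
    options: list = field(default_factory=list)

    def buy(self, holder: str, item: str, quantity: int) -> ReliefOption:
        option = ReliefOption(item=item, quantity=quantity, holder=holder)
        self.options.append(option)
        return option

    def transfer(self, option: ReliefOption, new_holder: str) -> None:
        # A town that no longer needs the supplies "trades" its option to a
        # region that does, instead of letting the stock sit unused.
        option.holder = new_holder

market = Marketplace()
opt = market.buy("Town A", "temporary housing unit", 200)
market.transfer(opt, "Region B")   # reallocate when needs shift
print(opt)
```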

"Why are we not packing those crates in March, because they are going to go somewhere?" asked Exigent CEO David Holme, referring to the current system.

The most obvious reason is that it's expensive: Relief suppliers won't invest in making items unless they have certainty of orders. The intention of the Exigent-Schulich project is to move from a system that is 100 percent reactive, and consequently very slow, to one that is at least 50 percent predictive and can deliver help far more quickly, he said.

To do this, Exigent is working with AI students at Schulich to use information about a community's demographics, geology and topography, and existing infrastructure to predict what affected areas could need: how many first-aid kits to treat local citizens, how many cement bags to rebuild structures, or how many temporary housing units for residents and relief workers. All sorts of data is being consulted, from census information to historical weather data to forward-looking models for wind direction, temperature and humidity, noted Murat Kristal, program director for the Schulich master's program that is involved in the project.
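
Neither Exigent nor Schulich has published the model itself; as a hedged sketch only, the snippet below shows how a simple regression could map community features of the kind listed above to an estimated supply need. The feature columns, demand figures and town data are all invented, and scikit-learn is used purely for illustration.

```python
# Toy sketch only: the actual features and model are not disclosed in the article.
# Hypothetical feature columns: population, homes within 2 miles of wildland,
# historical fires per year. Target: first-aid kits needed after a fire season.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([
    [12_000, 3_000, 4],
    [55_000, 9_500, 7],
    [8_000,  2_200, 2],
    [30_000, 6_000, 5],
])
y = np.array([900, 4_100, 450, 2_300])   # invented historical demand figures

model = LinearRegression().fit(X, y)

# Predict demand for a new community so crates can be packed before the season
# starts, rather than after a fire hits.
new_town = np.array([[20_000, 4_500, 6]])
print(f"Estimated first-aid kits: {model.predict(new_town)[0]:.0f}")
```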

Governments and decision makers are acting in a reactive way right now.

The initial focus of the joint Exigent-Schulich work is on gathering data related to wildfires in Canada and the United States. The prevalence of California's fires captures many headlines: the insurance losses from the Camp, Hill and Woolsey fires in November 2018 have topped $12 billion. Although it gets far less attention, Texas is also highly prone to wildfires, and 80 percent of them occur within two miles of a community. To the north, Canadian provinces such as Alberta and Ontario are also at risk: Canada sees an average of 6,000 fires annually.

Exigent estimates that by deploying supplies to affected regions more quickly, the platform it's developing (a pilot version is due in June) might cut recovery costs by 20 percent and drive down premiums in at-risk regions. "The municipalities and insurers can collaboratively benefit," Holme said. "The more I've studied the idea, the more useful it seems."

More:

It's artificial intelligence to the rescue (and response and recovery) - GreenBiz

Read More..

Ethics And AI: Are We Ready For The Rise Of Artificial Intelligence? – The Roanoke Star

Steven Mintz

No job in the United States has seen more hiring growth in the last five years than artificial-intelligence specialist, a position dedicated to building AI systems and figuring out where to implement them.

But is that career growth happening at a faster rate than our ability to address the ethical issues involved when machines make decisions that impact our lives and possibly invade our privacy?

Maybe so, says Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior.

"Rules of the road are needed to ensure that artificial intelligence systems are designed in an ethical way and operate based on ethical principles," he says. "There are plenty of questions that need to be addressed. What are the right ways to use AI? How can AI be used to foster fairness, justice and transparency? What are the implications of using AI for productivity and performance evaluation?"

Those who take jobs in this growing field will need to play a pivotal role in helping to work out those ethical issues, he says, and already there is something of a global consensus about what the ethical principles for AI should be.

Those principles include:

Mintz points to one recent workplace survey that examined the views of employers and employees in a number of countries with respect to AI ethics policies, potential misuse, liability, and regulation.

"More than half of the employers questioned said their companies do not currently have a written policy on the ethical use of AI or bots," Mintz says. Another 21 percent expressed a concern that companies could use AI in an unethical manner.

Progress is being made on some fronts, though.

In Australia, five major companies are involved in a trial run of eight principles developed as part of the government's AI Ethics Framework. The idea behind the principles is to ensure that AI systems benefit individuals, society and the environment; respect human rights; don't discriminate; and uphold privacy rights and data protection.

Mintz says the next step in the U.S. should be for the business community likewise to work with government agencies to identify ethical AI principles.

Unfortunately, he says, it seems the process is moving slowly and needs a nudge from technology companies, most of which are directly affected by the ethical use of AI.

Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior, has frequently commented on ethical issues in society and business ethics.

Link:

Ethics And AI: Are We Ready For The Rise Of Artificial Intelligence? - The Roanoke Star

Read More..

Artificial intelligence puts final notes on Beethoven’s ’10th Symphony’ – The Japan Times

BERLIN - A few notes scribbled in a notebook are all that German composer Ludwig van Beethoven left of his 10th Symphony before his death in 1827.

Now, a team of musicologists and programmers is racing to complete a version of the piece using artificial intelligence, ahead of the 250th anniversary of his birth next year.

"The progress has been impressive, even if the computer still has a lot to learn," said Christine Siegert, head of archives at Beethoven House in the composer's hometown of Bonn.

Siegert said she was convinced that Beethoven would have approved since he too was an innovator at the time, citing his compositions for the panharmonicon, a type of organ that reproduces the sounds of wind and percussion instruments.

And she insisted the work would not affect his legacy because it would never be regarded as part of his oeuvre.

The final result of the project will be performed by a full orchestra on April 28 next year in Bonn, a centerpiece of celebrations for a composer who defined the romantic era of classical music.

Beethoven, Germany's most famous musical figure, is so loved in his homeland that a duty to prepare for the anniversary was written into the governing coalition's agreement in 2013.

The year of celebrations will begin on Monday, Dec. 16, believed to be his 249th birthday, with the opening of his home in Bonn as a museum after extensive renovation.

Beethoven began working on the Tenth Symphony alongside his Ninth, which includes the world-famous "Ode to Joy."

But he quickly gave up on the Tenth, leaving only a few notes and drafts by the time he died at age 57.

In the project, machine-learning software has been fed all of Beethoven's work and is now composing possible continuations of the symphony in the composer's style.
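
The report does not describe the software's architecture, so the following is only a toy illustration of the general idea of continuing a musical fragment "in a composer's style": a first-order Markov chain over note names, trained on an invented melody rather than on Beethoven's actual sketches.

```python
# Toy illustration only: a first-order Markov chain over note names.
# The real project reportedly trains machine-learning software on Beethoven's
# complete works; the training melody below is invented.
import random
from collections import defaultdict

training_melody = ["E", "E", "F", "G", "G", "F", "E", "D", "C", "C", "D", "E", "E", "D", "D"]

transitions = defaultdict(list)
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions[current].append(nxt)

def continue_fragment(fragment, length=8, seed=0):
    """Extend a fragment by sampling notes that followed the same note in training."""
    random.seed(seed)
    notes = list(fragment)
    for _ in range(length):
        candidates = transitions.get(notes[-1]) or training_melody
        notes.append(random.choice(candidates))
    return notes

print(continue_fragment(["C", "D", "E"]))
```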

Deutsche Telekom, which is sponsoring the project, hopes to use the findings to develop technology such as voice recognition.

The team said the first results, a few months ago, were seen as too mechanical and repetitive, but the latest AI compositions have been more promising.

Barry Cooper, a British composer and musicologist who himself wrote a hypothetical first movement for the Tenth Symphony in 1988, was more doubtful.

"I listened to a short excerpt that has been created. It did not sound remotely like a convincing reconstruction of what Beethoven intended," said Cooper, a professor at the University of Manchester and the author of several works on Beethoven. "There is, however, scope for improvement with further work."

Cooper warned that in any performance of Beethoven's music, there is a risk of distorting his intentions, and this is particularly the case for the Tenth Symphony because the composer had left only fragmentary material.

Similar AI experiments based on works by Bach, Mahler and Schubert have been less than impressive.

A project earlier this year to complete Schubert's Eighth Symphony was seen by some reviewers as being closer to an American film soundtrack than the Austrian composer's work.

Continue reading here:

Artificial intelligence puts final notes on Beethoven's '10th Symphony' - The Japan Times

Read More..

Are We Ready For The First Patent Filed By Artificial Intelligence? – Yahoo News

Patent practitioners and others in the world of intellectual property have expended significant time and money seeking to protect innovation in the field of artificial intelligence (AI). But what happens when an AI tries to patent something itself? Will such an event be possible? If so, who would be named as the inventor? And who would own the rights to the invention?

Given the pace at which machine learning is accelerating, these are the types of questions the patent system will soon have to answer. With computers driving cars, winning Go tournaments, performing surgeries, and much more, it's only a matter of time before an AI is itself capable of inventing patentable subject matter.

It's therefore no surprise that Andrei Iancu, the director of the United States Patent and Trademark Office (USPTO), solicited public comments last summer on the topic of patenting AI inventions. The goal, according to the USPTO, was:

"to engage with the innovation community and experts in AI to determine whether further guidance is needed to promote the predictability and reliability of patenting such inventions and to ensure that appropriate patent protection incentives are in place to encourage further innovation in and around this critical area."

Read the original article.

Visit link:

Are We Ready For The First Patent Filed By Artificial Intelligence? - Yahoo News

Read More..

Researchers Slam Artificial Intelligence Software That Predicts Emotions – NDTV

A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions. The AI Now Institute at New York University said action against such software-driven "affect recognition" was its top priority because science doesn't justify the technology's use and there is still time to stop widespread adoption.

The group of professors and other researchers cited as a problematic example the company HireVue, which sells systems for remote video interviews for employers such as Hilton and Unilever. It offers AI to analyse facial movements, tone of voice and speech patterns, and doesn't disclose scores to the job candidates.

The nonprofit Electronic Privacy Information Center has filed a complaint about HireVue to the US Federal Trade Commission, and AI Now has criticised the company before.

HireVue said it had not seen the AI Now report and did not answer questions on the criticism or the complaint.

"Many job candidates have benefited from HireVue's technology to help remove the very significant human bias in the existing hiring process," said spokeswoman Kim Paone.

AI Now, in its fourth annual report on the effects of artificial intelligence tools, said job screening is one of many ways in which such software is used without accountability, and typically favoured privileged groups.

The report cited a recent academic analysis of studies on how people interpret moods from facial expressions. That paper found that the previous scholarship showed such perceptions are unreliable for multiple reasons.

"How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation," wrote a team at Northeastern University and Massachusetts General Hospital.

Companies including Microsoft are marketing their ability to classify emotions using software, the study said. Microsoft did not respond to a request for comment Wednesday evening.

AI Now also criticised Amazon.com, which offers analysis on expressions of emotion through its Rekognition software. Amazon told Reuters that its technology only makes a determination on the physical appearance of someone's face and does not claim to show what a person is actually feeling.

In a conference call ahead of the report's release, AI Now founders Kate Crawford and Meredith Whittaker said that damaging uses of AI are multiplying despite broad consensus on ethical principles because there are no consequences for violating them.

Thomson Reuters 2019

Continued here:

Researchers Slam Artificial Intelligence Software That Predicts Emotions - NDTV

Read More..

Researchers call for harnessing, regulation of AI – INQUIRER.net

Image: iStock.com/metamorworks via AFP Relaxnews

Artificial intelligence (AI) appears to be widening inequality, and its deployment should be subject to tough regulations and limits, especially for sensitive technologies such as facial recognition, a research report said Thursday.

The AI Now Institute, a New York University center studying the social implications of AI, said that as these technologies become widely deployed, the negative impacts are starting to emerge.

The 93-page report examined concerns being raised from AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and underrepresented communities.

"What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don't," the researchers noted.

The researchers said AI systems are being deployed in areas such as healthcare, education, employment and criminal justice without appropriate safeguards or accountability structures in place.

The report said governments and businesses should halt use of facial recognition in sensitive social and political contexts until the risks are better understood, and that one subset, affect recognition (the reading of emotions by computer technology), should be banned in light of doubts about whether it works.

"Emotion recognition should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school," the report stated.

It also called for tech workers to have the right to know what they are building and to contest unethical or harmful uses of their work.

The AI Now report said medical organizations using advanced technologies need to implement data protection policies and allow people affirmative approval opportunities to withdraw from the study or treatment, and from research using their medical information.

More broadly, the researchers said the AI industry needs to make structural changes to ensure that algorithms are not reinforcing racism, prejudice or lack of diversity.

"The AI industry is strikingly homogeneous, due in large part to its treatment of women, people of color, gender minorities, and other underrepresented groups," the report said.

Efforts to regulate AI systems are underway, but are being outpaced by government adoption of AI systems to surveil and control, according to the report.

"Despite growing public concern and regulatory action, the rollout of facial recognition and other risky AI technologies has barely slowed down," the researchers said.

So-called "smart city" projects around the world are consolidating power over civic life in the hands of for-profit technology companies, putting them in charge of managing critical resources and information.

See the original post:

Researchers call for harnessing, regulation of AI - INQUIRER.net

Read More..

The Ethical Dimension of Artificial Intelligence – The McGill International Review

Artificial intelligence (AI) is a rising field in the technology world that aims to teach machines how to learn, or think, for themselves. Often, when we think of AI, we imagine the voice-automated system JARVIS from the Iron Man movies or the 2001 Steven Spielberg film A.I. Artificial Intelligence. In reality, AI looks quite different, and chances are you have already seen it.

Canada, surprisingly, is a global leader in AI. Montreal has the highest concentration of researchers and students studying AI in the world, while Toronto has the highest concentration of AI start-ups. According to Ashley Casovan, the executive director of non-profit AI Global and former Director of Data Architecture and Innovation for the Government of Canada, Canada frequently uses AI for everyday tasks. For instance, let's say you're trying to figure out how to file your taxes while you're on a train home. When you visit a webpage, a chatbot may pop up that will explain the process for you, while your Canadian Pacific Railway train uses its sensors to detect potential blockages on tracks and responds accordingly. Both of these technologies employ machine learning, a technique that trains computers using manually labelled data to respond to new information. The more the program responds to new information that is not labelled, the more it learns. The Canadian government is using this in increasingly innovative ways: using predictive analytics, Canadian scientists were able to identify Zika virus patterns to help mitigate the spread of the virus. In addition, Canadian health services are employing predictive analytics to aid in suicide prevention.
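
As a minimal sketch of the labelled-data idea described above (the real chatbot and rail systems are, of course, far more sophisticated), the snippet below trains a toy intent classifier on a few hand-labelled questions and then classifies an unseen one. All phrases and labels are invented, and scikit-learn is used purely for illustration.

```python
# Minimal sketch of supervised machine learning: manually labelled examples
# train a model, which then responds to new, unlabelled input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

questions = [
    "how do I file my taxes online",
    "when is the tax filing deadline",
    "how do I check my train schedule",
    "is my train running on time",
]
labels = ["taxes", "taxes", "rail", "rail"]   # the manual labels

chatbot = make_pipeline(CountVectorizer(), MultinomialNB())
chatbot.fit(questions, labels)

# New information the model was never labelled on:
print(chatbot.predict(["what forms do I need to file taxes"]))   # -> ['taxes']
```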

However, the frequency with which the Canadian government employs AI is worrying for some. Fears of governments using AI to infringe on private freedoms are very real, as some countries, such as China, have begun to use facial recognition software for police surveillance. Furthermore, people are rapidly losing confidence in social media platforms and Internet security, often citing the absence of human intervention in the decisions that algorithms make as the cause. In addition, 54% of North Americans express concern for their online privacy, and the non-consensual use of personal data by social media companies and federal governments does little to ease these fears. While many Canadians are concerned about their online security because of threats posed by internet companies, at least 59% fear their personal information being used by their own government.

More and more people are worried about how their online information is used, perhaps in light of the Cambridge Analytica scandal that implicated Facebook in selling the personal data of millions of users. Furthermore, Russian interference in the 2016 US Presidential Election undoubtedly had an effect on the confidence many users have in their social media platforms of choice. Therefore, it is understandable that many people are hesitant to embrace AI and the idea of inhuman machines processing their information. However, there is little risk that computers will enslave us all. Rather, the prevalence of AI may serve to damage society in other ways, such as the propagation of increased bias.

As AI systems are created by humans, there is often the possibility of an inherent bias in the program itself, either through the data on which it is trained or the application of the program. As most computer programmers are white men, a lack of diversity in AI may serve to reaffirm the presence of gender and racial bias in places where it's prevalent. Various organizations have been formed to address this problem, such as AI4ALL, which aims to encourage underrepresented demographics, such as people of colour and women, to pursue careers in AI.
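
Neither the article nor AI4ALL prescribes a particular audit, but one simple, common check for the kind of bias described here is to compare a model's favourable-outcome rate across demographic groups. Here is a hedged sketch with entirely invented data and group names.

```python
# Hedged sketch: compare a model's positive-decision rate across groups
# (sometimes called demographic parity). Data and groups are invented.
from collections import defaultdict

decisions = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

counts = defaultdict(lambda: {"positive": 0, "total": 0})
for d in decisions:
    counts[d["group"]]["total"] += 1
    counts[d["group"]]["positive"] += int(d["hired"])

for group, c in counts.items():
    rate = c["positive"] / c["total"]
    print(f"group {group}: positive rate {rate:.2f}")
# A large gap between groups is one warning sign that the training data or the
# way the model is applied may be reproducing existing bias.
```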

As AI quickly integrates itself into society, the necessity for a comprehensive ethical code arises. According to Casovan, while Canada does have a government policy on responsible AI, it is difficult to enforce as the implementation of the policy often needs to be case-specific. Furthermore, there is little authority that restricts what companies can and cannot do, and evidently even less so for the government itself. Casovan thus proposes a solution: the creation of ethical models that are agile, inclusive, collaborative, and open-sourced to best provide companies with the resources to create ethical AI.

Montreal recently hosted the RE-WORK Deep Learning and Responsible AI Summit from October 24 to 25, a conference for AI-related industry professionals, from computer scientists to journalists to policymakers. During the summit, our team had the opportunity to interview AI professionals and organizations, such as McGill AI, about their points of view on potential ethical concerns. This student association started in 2017 and aims to bridge the gap between undergraduate students and the field of AI. To do this, McGill AI offers students opportunities to learn about AI and machine learning. With annual workshops, bootcamps, courses, lectures and more, students have the opportunity to work in groups on an idea related to AI in order to build functional prototypes.

"This is also a way for first-year undergraduate students to learn about machine learning and network with companies," Jenny Long, a representative of McGill AI, tells the MIR. Thanks to its company crawls, McGill AI also makes contacts between students and companies that do research on AI, or professors who give their advice on specific topics. McGill AI also organizes workshops that target a general audience with its initiative Machine Learning 101, which aims to give a general feel for what machine learning is and to demystify it. Likewise, McGill AI is already reaching out to students for potential initiatives, like reading groups for ethics in AI.

"Ethical issues are clearly one of the trending topics at the moment," Long affirmed. "As a society, I don't think we see any concerns specifically for the bootcamps, but we do hope to make them more accessible in general."

Students interested in artificial intelligence, machine learning, and data science should also consider attending the Centre for Social and Cultural Data Science Expo on January 21, 2020. Hosted by McGill University in New Residence Hall, the Expo will host a variety of talks about the uses of data science in computer science, politics, and other fields.

With many opportunities for students to get involved in AI and machine learning, Canada is evidently working to maintain its status as a leader in AI. However, those interested in the exciting prospects that AI proposes must also consider its ethical dimensions. AI is serving to reconcile the technological world with the political and social spheres, and can therefore not be chiefly concerned with technological progression. AI researchers must evidently be concerned with the applications of such technology, and what it means for future generations.

Photo credit to Drew Graham, courtesy of Unsplash.

Edited by Alec Regino.

Link:

The Ethical Dimension of Artificial Intelligence - The McGill International Review

Read More..

Conversing with chatbots: Artificial Intelligence research keeps it more 'human' – SFU News – Simon Fraser University News

The rapid advance of artificial intelligence (AI) raises a daunting question: will we ever achieve human-like behavior in computational systems? SFU professor Steve DiPaola and his research team are developing a solution called the AI Empathic Painter, using natural interaction methods to enable users to converse efficiently while highlighting two major human qualities: empathy and creativity.

DiPaola's team showcased its work at a major AI conference, NeurIPS 2019, in Vancouver this past week. Their demo enables visitors to approach and converse with a 3D avatar chatbot, which creates an artistic portrait of the visitors, inspired by their emotions and personality, via the team's Empathy-based Affective Portrait Painter.

To achieve this, the researchers have combined their research in empathy-based modeling for AI character agents with machine learning models from the team's artistic creativity system.

With a host of gestural, motion and bio-sensor systems, the team's AI systems are designed to give coherent, empathy-based conversational answers via speech, expression and gesture.

"Using our special system, the AI avatar can, through conversation, evaluate the user's words, facial expression and voice stress to make an empathetic evaluation, just as a human would be able to about someone they are talking to," says DiPaola, a professor in the School of Interactive Arts and Technology (SIAT), whose team includes post-doctoral researcher Nilay Yalcin and PhD student Nouf Abukhodair.
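
The SFU system itself is not described at the code level; the toy sketch below only illustrates the general idea of fusing separate signals from words, facial expression and voice stress into one empathetic estimate. The weights, score ranges and replies are all invented.

```python
# Toy illustration of multimodal fusion: combine separate estimates from words,
# facial expression and voice stress into one valence score in [-1, 1].
# All weights and inputs are invented; the real system is far richer than this.
def empathic_estimate(text_sentiment: float, face_valence: float, voice_stress: float) -> float:
    """Each input is assumed to come from its own sensor or model, scaled to [-1, 1]
    (voice_stress: higher means more stressed)."""
    weights = {"text": 0.5, "face": 0.3, "voice": 0.2}
    return (weights["text"] * text_sentiment
            + weights["face"] * face_valence
            - weights["voice"] * voice_stress)

score = empathic_estimate(text_sentiment=0.2, face_valence=-0.4, voice_stress=0.7)
response = "That sounds difficult. Do you want to talk about it?" if score < 0 else "Glad to hear it!"
print(round(score, 2), response)
```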

Then the researchers take it a step further, using the Empathy-based Affective Portrait Painter to paint a unique portrait of the user based on the empathetic evaluation. DiPaola's AI artwork has been showcased globally in such museums as New York's Museum of Modern Art and the Whitney Museum of American Art.

"The growing success of dialogue systems research makes conversational agents a perfect candidate for becoming a standard in human-computer interaction," explains Yalcin. "The naturalness of communicative acts provides a comfortable ground for the users to interact with. There have been many advances in using multiple communication channels in dialogue systems, simulating humaneness in an artificial agent."

DiPaola's and Yalcin's extensive research on empathy in AI is also addressing issues in a variety of industries, including e-health. In a collaborative project with the national AGE-WELL initiative, a helper AI conversational bot is being developed to assist the elderly in staying independent at home. Other applications are geared to the entertainment industry.

After premiering at the NeurIPS conference, the AI Empathic Painter system will travel to Europe to be showcased in Florence in May 2020.

Formerly from Stanford University, DiPaola leads SFU's Interactive Visualization Lab (iVizLab), which strives to make computational systems bend more to the human experience by incorporating biological, cognitive and behavior knowledge models. The lab creates computational models of human ideals such as expression, emotion, behavior and creativity, typically for gaming, science, arts and health fields.

Follow this link:

Conversing with chatbots: Artificial Intelligence research keeps it more 'human' - SFU News - Simon Fraser University News

Read More..

Could quantum computing be the key to cracking congestion? – SmartCitiesWorld

The technology has helped to improve congestion by 73 per cent in scenario-testing

Ford and Microsoft are using quantum-inspired computing technology to reduce traffic congestion. Through a joint research pilot, scientists have used the technology to simulate thousands of vehicles and their impact on congestion in the US city of Seattle.

Ford said it is still early in the project but encouraging progress has been made and it is further expanding its partnership with the tech giant.

The companies teamed up in 2018 to develop new quantum approaches running on classical computers already available to help reduce Seattle's traffic congestion.

Writing in a blog post on Medium.com, Dr Ken Washington, chief technology officer at Ford Motor Company, explained that during rush hour, numerous drivers request the shortest possible routes at the same time, but current navigation services handle these requests "in a vacuum": they do not take into consideration the number of similar incoming requests, including areas where other drivers are all planning to share the same route segments, when delivering results.

What is required is a more balanced routing system that could manage all the various route requests from drivers and provide optimised route suggestions, reducing the number of vehicles on a particular road.
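
Ford and Microsoft have not published their algorithm, so the following is only a hedged sketch of the balanced-routing idea, not the actual quantum-inspired solver: a toy heuristic that assigns simulated vehicles to routes while accounting for the load other requests have already placed on each route. The route names, base times and congestion factors are invented.

```python
# Hedged sketch of "balanced" routing on a toy network: each route's travel time
# grows with the number of vehicles using it, so each new request is answered
# with the current loads in mind rather than "in a vacuum".
ROUTE_BASE_TIME = {"I-5": 10.0, "SR-99": 12.0, "surface streets": 15.0}
CONGESTION_PER_CAR = {"I-5": 2.0, "SR-99": 1.5, "surface streets": 1.0}

def travel_time(route: str, load: int) -> float:
    return ROUTE_BASE_TIME[route] + CONGESTION_PER_CAR[route] * load

def balanced_assignment(num_vehicles: int) -> dict:
    loads = {route: 0 for route in ROUTE_BASE_TIME}
    for _ in range(num_vehicles):
        # Send each vehicle to the route with the lowest time *given current loads*.
        best = min(loads, key=lambda r: travel_time(r, loads[r]))
        loads[best] += 1
    return loads

loads = balanced_assignment(30)
for route, n in loads.items():
    print(f"{route}: {n} vehicles, ~{travel_time(route, n):.0f} min")
```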

Traditional computers don't have the computational power to do this but, as Washington explained, in a quantum computer, information is processed by a quantum bit (or a qubit) and can simultaneously exist "in two different states" before it gets measured.

"This ultimately enables a quantum computer to process information with a faster speed," he wrote. "Attempts to simulate some specific features of a quantum computer on non-quantum hardware have led to quantum-inspired technology: powerful algorithms that mimic certain quantum behaviours and run on specialised conventional hardware. That enables organisations to start realising some benefits before fully scaled quantum hardware becomes available."

Working with Microsoft, Ford tested several different possibilities, including a scenario involving as many as 5,000 vehicles, each with 10 different route choices available to them, simultaneously requesting routes across Metro Seattle. It reports that in 20 seconds, balanced routing suggestions were delivered to the vehicles, resulting in a 73 per cent improvement in total congestion when compared with selfish routing.

The average commute time, meanwhile, was also cut by eight per cent, representing an annual reduction of more than 55,000 hours across this simulated fleet.

Based on these results, Ford is expanding its partnership with Microsoft to further improve the algorithm and understand its effectiveness in more real-world scenarios.

"For example, will this method still deliver similar results when some streets are known to be closed, if route options aren't equal for all drivers, or if some drivers decide to not follow suggested routes?" wrote Washington. "These and more are all variables we'll need to test for to ensure balanced routing can truly deliver tangible improvements for cities."

You might also like:

Follow this link:
Could quantum computing be the key to cracking congestion? - SmartCitiesWorld

Read More..

ProBeat: AWS and Azure are generating uneasy excitement in quantum computing – VentureBeat

Quantum is having a moment. In October, Google claimed to have achieved a quantum supremacy milestone. In November, Microsoft announced Azure Quantum, a cloud service that lets you tap into quantum hardware providers Honeywell, IonQ, or QCI. Last week, AWS announced Amazon Braket, a cloud service that lets you tap into quantum hardware providers D-Wave, IonQ, and Rigetti. At the Q2B 2019 quantum computing conference this week, I took the pulse of how the nascent industry is feeling.

Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) make up quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. Potential applications are so vast and wide (from basic optimization problems to machine learning to all sorts of modeling) that interested industries span finance, chemistry, aerospace, cryptography, and more. But it's still so early that the industry is nowhere close to reaching consensus on what the transistor for qubits should look like.
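
As a minimal illustration of the bit-versus-qubit distinction (using plain NumPy rather than any particular quantum SDK), the sketch below represents a single qubit as a two-amplitude state vector and shows the measurement probabilities of an equal superposition.

```python
# Minimal sketch: a single qubit as a 2-component complex state vector.
# |0> and |1> are the basis states; a Hadamard gate puts the qubit into an
# equal superposition, measured as 0 or 1 with probability 0.5 each.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # the state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                    # superposition of |0> and |1>
probabilities = np.abs(state) ** 2  # Born rule: |amplitude|^2

print("amplitudes:", state)
print("P(measure 0) =", probabilities[0], " P(measure 1) =", probabilities[1])
```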

Currently, your cloud quantum computing options are limited to single hardware providers, such as those from D-Wave and IBM. Amazon and Microsoft want to change that.

Enterprises and researchers interested in testing and experimenting with quantum are excited because they will be able to use different quantum processors via the same service, at least in theory. They're uneasy, however, because the quantum processors are so fundamentally different that it's not clear how easy it will be to switch between them. D-Wave uses quantum annealing, Honeywell and IonQ use ion trap devices, and Rigetti and QCI use superconducting chips. Even the technologies that are the same have completely different architectures.

Entrepreneurs and enthusiasts are hopeful that Amazon and Microsoft will make it easier to interface with the various quantum hardware technologies. They're uneasy, however, because Amazon and Microsoft have not shared pricing and technical details. Plus, some of the quantum providers offer their own cloud services, so it will be difficult to suss out when it makes more sense to work with them directly.

The hardware providers themselves are excited because they get exposure to massive customer bases. Amazon and Microsoft are the world's biggest and second-biggest cloud providers, respectively. They're uneasy, however, because the tech giants are really just middlemen, which of course poses its own problems of costs and reliance.

At least right now, it looks like this will be the new normal. Even hardware providers that haven't announced they are partnering with Amazon and/or Microsoft, like Xanadu, are in talks to do just that.

Overall at the event, excitement trumped uneasiness. If you're participating in a domain as nascent as quantum, you must be optimistic. The news this quarter all happened very quickly, but there is still a long road ahead. After all, these cloud services have only been announced. They still have to become available, gain exposure, pick up traction, become practical, prove useful, and so on.

The devil is in the details. How much are these cloud services for quantum going to cost? Amazon and Microsoft haven't said. When exactly will they be available in preview or in beta? Amazon and Microsoft haven't said. How will switching between different quantum processors work in practice? Amazon and Microsoft haven't said.

One thing is clear. Everyone at the event was talking about the impact of the two biggest cloud providers offering quantum hardware from different companies. The clear winners? Amazon and Microsoft.

ProBeat is a column in which Emil rants about whatever crosses him that week.

Read the original here:
ProBeat: AWS and Azure are generating uneasy excitement in quantum computing - VentureBeat

Read More..