
Variational quantum and quantum-inspired clustering | Scientific … – Nature.com

Let us start by assuming that we have N datapoints, each being described by m features. The goal is to classify these datapoints into k clusters. Without loss of generality, datapoints are described by m-dimensional vectors \(\overrightarrow{x}_i\), with \(i = 1, 2, \ldots, N\). To implement a clustering of the data we could, for instance, use classical bit variables \(q_i^a = 0, 1\), with \(i = 1, 2, \ldots, N\) and \(a = 1, 2, \ldots, k\), so that \(q_i^a = 0\) if datapoint i is not in cluster a, and \(q_i^a = 1\) if it is in the cluster. Let us also call \(d(\overrightarrow{x}_i, \overrightarrow{x}_j)\) some distance measure between datapoints \(\overrightarrow{x}_i\) and \(\overrightarrow{x}_j\). With this notation we build a classical cost function H such that points very far away tend to fall into different clusters4:

$$ H = \frac{1}{2}\sum\limits_{i,j = 1}^{N} d(\overrightarrow{x}_{i}, \overrightarrow{x}_{j}) \sum\limits_{a = 1}^{k} q_{i}^{a} q_{j}^{a}. $$

(1)

Additionally, one must impose the constraint that every point falls into one and only one cluster, i.e.,

$$\begin{aligned} \sum _{a=1}^k q^a_i = 1 ~~ \forall i. \end{aligned}$$

(2)
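
To make Eqs. (1) and (2) concrete, here is a minimal sketch (our own toy example, not code from the paper) that evaluates the classical cost H for a one-hot assignment matrix q of shape N × k, assuming a Euclidean distance measure:

```python
import numpy as np

# Toy data: N = 6 points with m = 2 features, to be split into k = 2 clusters.
x = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])

# Pairwise Euclidean distances d(x_i, x_j).
d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

def cost(q):
    """Eq. (1): H = 1/2 * sum_{i,j} d(x_i, x_j) * sum_a q_i^a q_j^a."""
    assert np.all(q.sum(axis=1) == 1), "Eq. (2): each point in exactly one cluster"
    return 0.5 * np.einsum("ij,ia,ja->", d, q, q)

# A "good" assignment (points grouped by proximity) vs. a "bad" one.
good = np.array([[1, 0]] * 3 + [[0, 1]] * 3)
bad = np.array([[1, 0], [0, 1]] * 3)
print(cost(good), cost(bad))   # grouping nearby points gives the lower cost
```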

The bit configuration optimizing Eq. (1) under the above constraint provides a solution to the clustering of the data. As explained in Ref.4, this can be rephrased naturally as a Quadratic Unconstrained Binary Optimization (QUBO) problem of \(k \times N\) bit variables, so that it can be solved by a quantum annealer. However, on a gate-based quantum computer, we can use a Variational Quantum Eigensolver (VQE)7 with fewer qubits as follows. Let us call \(f^a_i \equiv |\langle \psi_i | \psi^a \rangle|^2\) the fidelity between a variational quantum state \(\vert \psi_i \rangle\) for datapoint \(\overrightarrow{x}_i\) and a reference state \(\vert \psi^a \rangle\) for cluster a. In a VQE algorithm, we could just sample terms \(h_{ij}^a\),

$$\begin{aligned} h_{ij}^a = d(\overrightarrow{x}_i,\overrightarrow{x}_j)\, f_i^a f_j^a, \end{aligned}$$

(3)

for all datapoints i, j and clusters a, together with penalty terms \(c_i\),

$$\begin{aligned} c_{i} = \left( \sum _{a=1}^k f_i^a - 1\right) ^2, \end{aligned}$$

(4)

which are taken into account via Lagrange multipliers for all datapoints i. This last term must only be taken into account if several configurations of the qubits forming the VQE circuit allow for multiple clusters a simultaneously for the same datapoint, e.g., if we encoded one qubit per cluster as in Eq. (1).
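
Purely as an illustration of how these sampled quantities could be combined, the following sketch assembles the terms of Eq. (3) and the penalties of Eq. (4) from a fidelity matrix F with entries f_i^a; the distance matrix and the single Lagrange multiplier are assumptions of the example, not prescriptions from the paper:

```python
import numpy as np

def vqe_cost(F, d, lagrange=1.0):
    """F: (N, k) fidelities f_i^a; d: (N, N) distances between datapoints.
    Returns the sum of Eq. (3) terms plus Lagrange-weighted Eq. (4) penalties."""
    # h_{ij}^a = d(x_i, x_j) * f_i^a * f_j^a, summed over i, j, a   (Eq. 3)
    h = np.einsum("ij,ia,ja->", d, F, F)
    # c_i = (sum_a f_i^a - 1)^2, summed over i                      (Eq. 4)
    c = np.sum((F.sum(axis=1) - 1.0) ** 2)
    return h + lagrange * c
```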

Our approach here, though, is not to relate the number of qubits to the number of clusters. Instead, we work with some set of predefined states \(\vert \psi^a \rangle \in \mathcal{H}\), not necessarily orthogonal, where \(\mathcal{H}\) is whichever Hilbert space is being used for the VQE. This provides us with enormous flexibility when designing the algorithm. For instance, we could choose the states \(\vert \psi^a \rangle\) to be a set of maximally mutually-orthogonal states2 in \(\mathcal{H}\). In the particular case of only one qubit, we would then have \(\mathcal{H} = \mathbb{C}^2\) and the set of maximally-orthogonal states would correspond to the k vertices of a Platonic solid inscribed within the Bloch sphere. The corresponding VQE approach would then amount to a simple quantum circuit of just one qubit involving the fine-tuning of a single one-qubit rotation, and no sampling of the constraints in Eq. (4) would be needed at all, since they would be satisfied by construction. And for more qubits, the corresponding generalization would involve interesting entangled states in \(\mathcal{H}\).
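
As a small illustration of the one-qubit case (our own sketch, with k = 4 tetrahedral reference states chosen as one example of maximally mutually-orthogonal states on the Bloch sphere):

```python
import numpy as np

def bloch_state(theta, phi):
    """Single-qubit state with Bloch-sphere angles (theta, phi)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# k = 4 reference states |psi^a>: vertices of a tetrahedron inside the Bloch sphere.
theta_t = np.arccos(-1.0 / 3.0)
refs = [bloch_state(0.0, 0.0)] + [bloch_state(theta_t, 2 * np.pi * a / 3) for a in range(3)]

def fidelities(theta, phi):
    """f^a = |<psi(theta, phi)|psi^a>|^2 for all clusters a."""
    psi = bloch_state(theta, phi)
    return np.array([np.abs(np.vdot(psi, r)) ** 2 for r in refs])

print(fidelities(0.0, 0.0))   # largest fidelity with the first reference state
```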

In addition to this, the terms to be sampled can be further refined to improve algorithmic performance. One can for instance introduce modified cost functions, such as

$$\begin{aligned} h_{ij}^a = d(\overrightarrow{x}_i,\overrightarrow{x}_j)^{-1} \left( 1 - f_i^a f_j^a \right) \end{aligned}$$

(5)

$$\begin{aligned} h_{ij}^a = \left( d(\overrightarrow{x}_i,\overrightarrow{x}_j)^\alpha + \lambda \, d(\overrightarrow{x}_i,\overrightarrow{c}_i)\right) f_i^a f_j^a \end{aligned}$$

(6)

$$\begin{aligned} h_{ij}^a = \left( d(\overrightarrow{x}_i,\overrightarrow{x}_j)^\alpha + \lambda \, d(\overrightarrow{x}_i,\overrightarrow{c}_i)\right) \left( 1-f_i^a\right) \left( 1- f_j^a \right) . \end{aligned}$$

(7)

In the above cost functions, the first one tends to aggregate into the same cluster those datapoints that are separated by a short distance, which is the complementary view to the original cost function in Eq. (3). The second one includes two regularization hyperparameters \(\alpha\) and \(\lambda\), where \(\alpha\) allows for modified penalizations of the distances between points, and \(\lambda\) accounts for the relative importance of the distance between datapoint \(\overrightarrow{x}_i\) and the centroid formed by the elements belonging to the same cluster as point i, which we call \(\overrightarrow{c}_i\). This centroid can be re-calculated self-consistently throughout the running of the algorithm. Additionally, one can consider cost functions with a different philosophy, such as the third one, where datapoints with a large separation distance tend to fall into different clusters, without ruling out the chance of their being in the same cluster. On top of all these possibilities, one could also combine them in a suitable way to build even more plausible cost functions. Eventually, the goodness of a cost function depends on the actual dataset, so for each particular case it is worth trying several of them.
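
A rough sketch of how the centroid term in Eq. (6) might be evaluated classically is given below; the hard assignment of each point to its highest-fidelity cluster and the default values of alpha and lambda are assumptions of the example:

```python
import numpy as np

def centroids_per_point(x, F):
    """Assign each point to its highest-fidelity cluster, then return, for each
    point i, the centroid c_i of the points currently sharing its cluster."""
    labels = F.argmax(axis=1)
    centroids = np.array([x[labels == a].mean(axis=0) if np.any(labels == a)
                          else x.mean(axis=0) for a in range(F.shape[1])])
    return centroids[labels]                      # shape (N, m): c_i for every i

def cost_eq6(x, F, alpha=1.0, lam=0.5):
    """Sum over i, j, a of Eq. (6): (d_ij^alpha + lam * d(x_i, c_i)) f_i^a f_j^a."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** alpha
    dc = np.linalg.norm(x - centroids_per_point(x, F), axis=-1)   # d(x_i, c_i)
    weight = d + lam * dc[:, None]                                # (N, N)
    return np.einsum("ij,ia,ja->", weight, F, F)
```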

The rest of the algorithm follows the standards in unsupervised learning. After preprocessing the data (e.g., normalization), we define a suitable set of states \(\vert \psi^a \rangle\) and set the characteristics of the variational quantum circuit, including the parameters to be optimized. We then set the classical optimizer for the VQE loop (e.g., the Adam optimizer) and its main features (learning rate, batch size, etc.). After initialization, and if needed, we compute the centroids \(\overrightarrow{c}_i\) and distances \(d(\overrightarrow{x}_i, \overrightarrow{x}_j)\), \(d(\overrightarrow{x}_i, \overrightarrow{c}_i)\). We then perform the VQE optimization loop for a fixed number of epochs, where new parameters of the variational quantum circuit are computed at each epoch. To accelerate the VQE loop, one can include in the sampling only those terms that have a non-negligible contribution. The final step involves estimating, for a given datapoint, the cluster to which it belongs. This can be done by implementing state tomography (either classical or quantum), so that we can read out the final state \(\vert \psi_i \rangle\) for a given datapoint \(\overrightarrow{x}_i\) and determine to which cluster it belongs by looking for the maximum of the fidelities \(f_i^a\) over all clusters a.
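
Putting the pieces together, a heavily simplified, quantum-inspired version of this loop might look as follows. This is a classical simulation of the one-qubit encoding, with a generic black-box optimizer standing in for the VQE parameter update; the function names, the choice of COBYLA, and the hyperparameters are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def cluster(x, refs, epochs=50):
    """x: (N, m) normalized data; refs: list of k single-qubit reference states.
    Each datapoint i gets its own Bloch angles (theta_i, phi_i) as variational
    parameters; the cost is Eq. (3) summed over points and clusters."""
    N = len(x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

    def fid(params):
        theta, phi = params[:N], params[N:]
        psi = np.stack([np.cos(theta / 2),
                        np.exp(1j * phi) * np.sin(theta / 2)], axis=1)   # (N, 2)
        return np.abs(psi.conj() @ np.stack(refs, axis=1)) ** 2          # (N, k)

    def cost(params):
        F = fid(params)
        return np.einsum("ij,ia,ja->", d, F, F)

    params0 = np.random.default_rng(0).uniform(0, np.pi, size=2 * N)
    res = minimize(cost, params0, method="COBYLA",
                   options={"maxiter": epochs * N})
    # Readout: each point joins the cluster with the largest fidelity f_i^a.
    return fid(res.x).argmax(axis=1)
```

Calling cluster(x, refs) with, for instance, the tetrahedral reference states from the earlier sketch returns one cluster label per datapoint, read out as the index of the largest fidelity.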

View original post here:
Variational quantum and quantum-inspired clustering | Scientific ... - Nature.com


Rigetti Computing: Advancements in Quantum Computing and … – Clayton County Register

Rigetti Computing is a company that specializes in developing and providing quantum computing solutions. Founded in 2013 by former IBM quantum researcher Chad Rigetti, the company designs and fabricates its own quantum chips, which are superconducting circuits that operate at extremely low temperatures.

Rigetti's quantum chips are integrated with a classical computing architecture and a software platform called Quantum Cloud Services. This platform allows users to access the quantum processors online and run quantum algorithms. The company has deployed quantum processors with up to 80 qubits and has achieved a net energy gain in a nuclear fusion experiment using its quantum technology.

The company's ultimate goal is to build the first quantum computer that can outperform classical computers on real-world problems, known as achieving quantum advantage or quantum supremacy.

Rigetti Computing has made significant strides in the field of quantum computing. The company holds a portfolio of 165 patents, including both issued and pending patents. This strong intellectual property base solidifies Rigetti's position in the market and paves the way for the practical application of quantum computers.

In the second quarter, Rigetti recorded total revenue of $3.3 million, which marked an increase of over 56% compared to the same quarter in the previous year. As a young company in a rapidly growing market, investors should focus on Rigetti's technological milestones, collaborations with partners, and the overall prospects of the quantum computing industry to evaluate the future potential of the company.

During the second quarter, Rigetti achieved significant milestones and partnerships. The company sold its first Quantum Processing Unit (QPU) to a renowned national laboratory. This QPU consists of nine qubits with a unique square lattice design, enhanced with adjustable couplers. Additionally, Rigetti partnered with ADIA Lab to develop a quantum computing solution.

Looking ahead, Rigetti has a clear plan for enhancing its quantum systems. The company is refining the Ankaa-1 system to achieve a 98% accuracy level in its two-qubit operations, which will lay the foundation for the upcoming Ankaa-2 84-qubit system. Rigetti also aims to achieve a 99% accuracy level with the Ankaa-2 system, projected to be launched in 2024. The company's long-term plan includes the creation of the Lyra system, a 336-qubit quantum computer.

The market potential for quantum computing is immense. Quantum computers have the ability to address challenges beyond the reach of classical computers, making them highly appealing to industries and researchers. Rigetti's vision aligns with this potential as the company aims to develop quantum computers that can meet the requirements of practical workloads, spanning domains such as cryptography and drug discovery.

In order to fulfill these requirements, Rigetti is focused on reducing error rates in its quantum systems. With a commitment to achieving less than 0.5% error rates for next-generation systems, the company aims to make quantum computers reliable and consistent tools for complex problem-solving.

Rigetti has not only made advancements in theory and algorithms but has also addressed engineering challenges. With clock speeds exceeding 1 MHz and manufacturability as tangible achievements, the company is bridging the gap between theoretical promise and the physical realization of the power of quantum computing.

The development of a co-processor that integrates with classical computers indicates Rigetti's belief in hybrid computing as the future. This integration has the potential to accelerate the adoption of quantum computing in various industries.

With a current market value of $120 billion for cloud hardware and $40 billion for high-performance computing, Rigetti Computing has significant market potential for its quantum computing solutions. However, investing in the companys stock carries high risk and is suitable for investors with a high-risk appetite.

As the quantum computing sector continues to evolve, Rigetti remains committed to its long-term goals. The company plans to introduce a 1,000-qubit multichip system by 2025 and a 4,000-qubit multichip system by 2027. Investors will have more data on the company's progress in the coming quarters, allowing for a better assessment of its potential and performance.

Here is the original post:
Rigetti Computing: Advancements in Quantum Computing and ... - Clayton County Register


Quantum Avalanche A Phenomenon That May Revolutionize … – SciTechDaily

Unraveling the mystery of insulator-to-metal transitions, new research into the quantum avalanche uncovers new insights into resistive switching and offers potential breakthroughs in microelectronics.

New Study Solves Mystery on Insulator-to-Metal Transition

A study explored insulator-to-metal transitions, uncovering discrepancies in the traditional Landau-Zener formula and offering new insights into resistive switching. By using computer simulations, the research highlights the quantum mechanics involved and suggests that electronic and thermal switching can arise simultaneously, with potential applications in microelectronics and neuromorphic computing.

Looking only at their subatomic particles, most materials can be placed into one of two categories.

Metals like copper and iron have free-flowing electrons that allow them to conduct electricity, while insulators like glass and rubber keep their electrons tightly bound and therefore do not conduct electricity.

Insulators can turn into metals when hit with an intense electric field, offering tantalizing possibilities for microelectronics and supercomputing, but the physics behind this phenomenon called resistive switching is not well understood.

Questions, like how large an electric field is needed, are fiercely debated by scientists, like University at Buffalo condensed matter theorist Jong Han.

"I have been obsessed by that," he says.

Han, PhD, professor of physics in the College of Arts and Sciences, is the lead author on a study that takes a new approach to answer a long-standing mystery about insulator-to-metal transitions. The study, "Correlated insulator collapse due to quantum avalanche via in-gap ladder states," was published in May in Nature Communications.

University at Buffalo physics professor Jong Han is the lead author on a new study that helps solve a longstanding physics mystery on how insulators transition into metals via an electric field, a process known as resistive switching. Credit: Douglas Levere, University at Buffalo

"The difference between metals and insulators lies in quantum mechanical principles, which dictate that electrons are quantum particles and their energy levels come in bands that have forbidden gaps," Han says.

Since the 1930s, the Landau-Zener formula has served as a blueprint for determining the size of the electric field needed to push an insulator's electrons from its lower bands to its upper bands. But experiments in the decades since have shown materials require a much smaller electric field, approximately 1,000 times smaller than the Landau-Zener formula estimated.

"So, there is a huge discrepancy, and we need to have a better theory," Han says.

To solve this, Han decided to consider a different question: What happens when electrons already in the upper band of an insulator are pushed?

Han ran a computer simulation of resistive switching that accounted for the presence of electrons in the upper band. It showed that a relatively small electric field could trigger a collapse of the gap between the lower and upper bands, creating a quantum path for the electrons to go up and down between the bands.

To make an analogy, Han says, "Imagine some electrons are moving on a second floor. When the floor is tilted by an electric field, electrons not only begin to move but previously forbidden quantum transitions open up and the very stability of the floor abruptly falls apart, making the electrons on different floors flow up and down."

"Then, the question is no longer how the electrons on the bottom floor jump up, but the stability of higher floors under an electric field."

This idea helps solve some of the discrepancies in the Landau-Zener formula, Han says. It also provides some clarity to the debate over whether insulator-to-metal transitions are caused by the electrons themselves or by extreme heat. Han's simulation suggests the quantum avalanche is not triggered by heat. However, the full insulator-to-metal transition doesn't happen until the separate temperatures of the electrons and phonons (quantum vibrations of the crystal's atoms) equilibrate. This shows that the mechanisms for electronic and thermal switching are "not exclusive of each other," Han says, "but can instead arise simultaneously."

"So, we have found a way to understand some corner of this whole resistive switching phenomenon," Han says. "But I think it's a good starting point."

The study was co-authored by Jonathan Bird, PhD, professor and chair of electrical engineering in UB's School of Engineering and Applied Sciences, who provided experimental context. His team has been studying the electrical properties of emergent nanomaterials that exhibit novel states at low temperatures, which can teach researchers a lot about the complex physics that govern electrical behavior.

"While our studies are focused on resolving fundamental questions about the physics of new materials, the electrical phenomena that we reveal in these materials could ultimately provide the basis of new microelectronic technologies, such as compact memories for use in data-intensive applications like artificial intelligence," Bird says.

The research could also be crucial for areas like neuromorphic computing, which tries to emulate the electrical stimulation of the human nervous system. "Our focus, however, is primarily on understanding the fundamental phenomenology," Bird says.

Since publishing the paper, Han has devised an analytic theory that matches the computer's calculation well. Still, there's more for him to investigate, like the exact conditions needed for a quantum avalanche to happen.

"Somebody, an experimentalist, is going to ask me, 'Why didn't I see that before?'" Han says. "Some might have seen it, some might not have. We have a lot of work ahead of us to sort it out."

Reference: "Correlated insulator collapse due to quantum avalanche via in-gap ladder states" by Jong E. Han, Camille Aron, Xi Chen, Ishiaka Mansaray, Jae-Ho Han, Ki-Seok Kim, Michael Randle and Jonathan P. Bird, 22 May 2023, Nature Communications. DOI: 10.1038/s41467-023-38557-8

Other authors include UB physics PhD student Xi Chen; Ishiaka Mansaray, who received a PhD in physics and is now a postdoc at the National Institute of Standards and Technology; and Michael Randle, who received a PhD in electrical engineering and is now a postdoc at the RIKEN research institute in Japan. Other authors include international researchers representing École Normale Supérieure, French National Centre for Scientific Research (CNRS) in Paris; Pohang University of Science and Technology; and the Center for Theoretical Physics of Complex Systems, Institute for Basic Science.

Original post:
Quantum Avalanche A Phenomenon That May Revolutionize ... - SciTechDaily


10 Scary AI Predictions From Movies And TV Shows – Hollywood Reporter

Artificial intelligence has gained new technological and cultural relevance in the past year, to the excitement (we assume) of some and the fear of pretty much anyone who's ever seen a sci-fi movie. Indeed, one of the major reasons for the dual writers' and actors' strike is concern that studios will use AI to replace them without fair compensation.

But since long before AI became a threat to anyone in the real world, Hollywood has been grappling with how it could help us, harm us or destroy our entire race. A common thread joining these films and TV shows is that they all explore the implications of what it means that artificial intelligence, by definition, has a mind of its own. No matter what purpose it was created for, self-aware AI (at least at the level envisioned by sci-fi writers, which, granted, is miles beyond what exists in the real world) is going to make its own unpredictable decisions, and that may or may not be in the best interests of humanity.

So, while the workers of Hollywood might be worried that AI is coming for their jobs, at least they can be relieved that it's not yet anywhere near as dangerous as the writers' wildest dreams.

Here, we round up some of the most memorable AI in film and television, ranging from friendly helper robots to murderous destroyers of mankind.

First appearing on Star Trek: The Next Generation in 1987, the android Data has long been played by Brent Spiner alongside Patrick Stewart as Jean-Luc Picard. Though he was built, not born, in the likeness of his creator, Data is an officer of the U.S.S. Enterprise and is an essential member of the crew. He's able to compute with efficiency but struggles to understand human emotion and idiosyncrasies. On a never-ending quest for self-improvement, he is constantly striving to become more human, including by adopting a pet cat and eventually implanting an emotion chip. In the end, he proves capable of self-sacrifice to save his friends. He was even inducted into Carnegie Mellon's prestigious Robot Hall of Fame (yes, really).

The Abbott & Costello of a galaxy far, far away, C-3PO and R2-D2 are always helping their owner Luke Skywalker and the Rebel forces in their fight against the Empire. R2-D2 (originally portrayed by Kenny Baker) can co-pilot a small fighter ship, convey holographic messages and shoot electricity in self-defense. Meanwhile, C-3PO (Anthony Daniels), a protocol droid, provides helpful calculations such as the odds of surviving a flight through an asteroid field (whether or not Han wants to hear it) and can translate 6 million forms of communication. Just don't ask him to do anything particularly brave, unless the fearless R2-D2 is leading the way. These droids, and fellow Robot Hall of Famers, can also be sent on missions that might be dangerous for humans, such as presenting a list of demands to Jabba the Hutt. Honorable Mention: BB-8

If AI television news presenters ever become mainstream, they'll owe a lot to the original: Max Headroom. Matt Frewer starred in several incarnations, including a 1985 British TV movie (Max Headroom: 20 Minutes Into the Future) and a 1987-88 ABC series, as a computer-generated TV journalist (with help from some prosthetics and fancy film editing) whose technological nature is highlighted to comic effect with lots of stuttering glitches (it was the '80s, remember?). Created in the likeness of human journalist Edison Carter after he's almost killed, Max investigates with his three-dimensional counterpart and colleagues to uncover the truth in a dystopian future. Even if the show only ran for two seasons, it made its mark on the culture: Headroom was interviewed by David Letterman and starred in Ridley Scott-directed New Coke commercials.

Though lesser known than some of the other androids on this list, the nameless robot in 2012's Robot & Frank evokes an interesting argument: Maybe artificial intelligence is neither good nor evil, but it can mirror the morality of the person using it. In the not-too-distant future, retired jewel thief Frank (Frank Langella) lives alone and has been experiencing memory problems when his son (James Marsden) buys him a medical helper robot (voiced by Peter Sarsgaard). When Frank realizes the robot doesn't have laws integrated into its core programming, he uses it to help execute a couple of high-value, white-collar heists. ("The only people who get hurt are the insurance companies," it says at one point, repeating Frank's mantra.) The robot is adamant that it doesn't have feelings about its own existence, but the one thing it does seem to care about is Frank's welfare, whether that means cooking him healthy meals or keeping him out of jail. Are they friends, or is that all just programming?

First appearing on the big screen in 2008's Iron Man as voiced by Paul Bettany, JARVIS (Just a Rather Very Intelligent System) begins its existence as a somewhat lowly disembodied AI created by Tony Stark to help run computations and act as a kind of electronic butler or smart home device, which also augments the Iron Man suit.

But the much more dangerous potential of AI is explored in Avengers: Age of Ultron (2015) when Tony's new (accidental) creation, Ultron, decides the best way to achieve world peace is the obliteration of mankind. In that film, JARVIS is nearly destroyed, but is saved by his own quick thinking with help from Tony, Bruce Banner and Thor, who give him android form as Vision. His powers grow with the addition of an Infinity Stone, and he shows how human he has become when he falls in love with the witch Wanda Maximoff.

In a future where AI robots have become ubiquitous, bound by Three Laws meant to keep them from harming humans, only Will Smith (as Chicago police detective Del Spooner) recognizes that they could, in fact, be deadly. His beef with the machines may be noble (he doesn't trust them after a robot chose to save him from drowning instead of a child based on their odds of survival), but his personal grudge is so well known that he gets blamed when a battalion of rogue robots swarm his vehicle and cause a car accident. (At least, that's the mild description given by a robot that has just punched a hole through Del's windshield.) The lesson of this film seems to be that no matter how many safeguards you have in place, never trust AI to choose wisely when making life-and-death decisions.

In season four of the WB series, Buffy (Sarah Michelle Gellar) goes to college and discovers the morally dubious military unit The Initiative, of which her boyfriend Riley (Marc Blucas) is an agent. The clandestine group is run by Dr. Maggie Walsh (Lindsay Crouse), and it soon comes to light that she has been playing Dr. Frankenstein, building a creature known as Adam (George Hertzberg). Made from a jumble of robotic and monster parts The Initiative has gathered, the first thing Adam does is kill his creator. He has a philosophical side, too, and is interested in discovering the reason behind his existence. But he comes to the wrong conclusions, and eventually attempts to create an army of human-demon-machine hybrids like himself with dreams of forging a new, superior race.

"As soon as we started thinking for you, it really became our civilization," Agent Smith (played by Hugo Weaving) taunts a captive Morpheus in one of the most haunting monologues of The Matrix (1999). An AI super-soldier disguised as a guy in a black suit and sunglasses, Agent Smith and his ilk can dodge bullets, land punches at nearly the speed of light and take over the bodies of unsuspecting humans trapped in the Matrix's program.

While some of the evil AI on this list tries to destroy the world, the agents instead tend the garden that is the Matrix. Their goal is to keep the humans inside docile and powerless so that they can be used as a fuel source for the robots out in the real (extremely dystopian) world and, to extend the metaphor, pull weeds like Neo and his friends.

The James Cameron/Gale Anne Hurd franchise birthed a couple of the most iconic catchphrases of the '80s and '90s (Arnold Schwarzenegger recently told THR about the origins of "I'll be back"), and it explored the possibility of multiple timelines long before the MCU was conceived of as a big-screen phenomenon. Schwarzenegger stars in most of the film iterations as various incarnations of the Terminator, but even though he can rip out a street thug's heart with his bare hands (why does he need guns, anyway?), this cyborg can also be programmed to protect Sarah Connor and her savior son (see: Judgment Day, Rise of the Machines, Genisys) just as easily as kill them. The true brains of the operation is Skynet, the self-aware military tech that, in the future, comes to see humanity as a threat and launches a nuclear war.

Stanley Kubrick's 1968 masterpiece starts out innocently enough: After a prehistoric vignette that hints at the existence of alien life, a time jump occurs and a group of astronauts set off on a deep-space mission. Their ship is equipped with the latest in technology, H.A.L. 9000, which is renowned because it has never made a mistake.

On the trip, two crewmembers, Dave and Frank, suspect H.A.L. has made an error regarding a malfunctioning piece of equipment, and for the mission's safety they hatch a plan to deactivate the AI program. This leads H.A.L., which thinks the humans are the ones compromising the mission, to become murderous, killing the helpless members of the crew who have been kept in suspended animation during the long journey, and turning the ship against Dave and Frank.

After a power struggle, Dave successfully deactivates H.A.L., which regresses intellectually as it's being unplugged and endearingly sings "Daisy Bell (Bicycle Built for Two)" before it loses function entirely. That's not the end of the film, which has philosophical aspirations way beyond artificial intelligence, but it still offers the iconic cinematic warning on the subject.

Link:

10 Scary AI Predictions From Movies And TV Shows - Hollywood Reporter


AI isn't great at decoding human emotions. So why are regulators targeting the tech? – MIT Technology Review

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Recently, I took myself to one of my favorite places in New York City, the public library, to look at some of the hundreds of original letters, writings, and musings of Charles Darwin. The famous English scientist loved to write, and his curiosity and skill at observation come alive on the pages.

In addition to proposing the theory of evolution, Darwin studied the expressions and emotions of people and animals. He debated in his writing just how scientific, universal, and predictable emotions actually are, and he sketched characters with exaggerated expressions, which the library had on display.


The subject rang a bell for me.

Lately, as everyone has been up in arms about ChatGPT, AI general intelligence, and the prospect of robots taking people's jobs, I've noticed that regulators have been ramping up warnings against AI and emotion recognition.

Emotion recognition, in this far-from-Darwin context, is the attempt to identify a person's feelings or state of mind using AI analysis of video, facial images, or audio recordings.

The idea isn't super complicated: the AI model may see an open mouth, squinted eyes, and contracted cheeks with a thrown-back head, for instance, and register it as a laugh, concluding that the subject is happy.

But in practice, this is incredibly complex, and, some argue, a dangerous and invasive example of the sort of pseudoscience that artificial intelligence often produces.

Certain privacy and human rights advocates, such as European Digital Rights and Access Now, are calling for a blanket ban on emotion recognition. And while the version of the EU AI Act that was approved by the European Parliament in June isn't a total ban, it bars the use of emotion recognition in policing, border management, workplaces, and schools.

Meanwhile, some US legislators have called out this particular field, and it appears to be a likely contender in any eventual AI regulation; Senator Ron Wyden, who is one of the lawmakers leading the regulatory push, recently praised the EU for tackling it and warned, "Your facial expressions, eye movements, tone of voice, and the way you walk are terrible ways to judge who you are or what you'll do in the future. Yet millions and millions of dollars are being funneled into developing emotion-detection AI based on bunk science."

But why is this a top concern? How well founded are fears about emotion recognition, and could strict regulation here actually hurt positive innovation?

A handful of companies are already selling this technology for a wide variety of uses, though it's not yet widely deployed. Affectiva, for one, has been exploring how AI that analyzes people's facial expressions might be used to determine whether a car driver is tired and to evaluate how people are reacting to a movie trailer. Others, like HireVue, have sold emotion recognition as a way to screen for the most promising job candidates (a practice that has been met with heavy criticism; you can listen to our investigative audio series on the company here).

"I'm generally in favor of allowing the private sector to develop this technology. There are important applications, such as enabling people who are blind or have low vision to better understand the emotions of people around them," Daniel Castro, vice president of the Information Technology and Innovation Foundation, a DC-based think tank, told me in an email.

But other applications of the tech are more alarming. Several companies are selling software to law enforcement that tries to ascertain if someone is lying or that can flag supposedly suspicious behavior.

A pilot project called iBorderCtrl, sponsored by the European Union, offers a version of emotion recognition as part of its technology stack that manages border crossings. According to its website, the Automatic Deception Detection System quantifies the probability of deceit in interviews by analyzing interviewees' non-verbal micro-gestures (though it acknowledges scientific controversy around its efficacy).

But the most high-profile use (or abuse, in this case) of emotion recognition tech is from China, and this is undoubtedly on legislators' radars.

The country has repeatedly used emotion AI for surveillance, notably to monitor Uyghurs in Xinjiang, according to a software engineer who claimed to have installed the systems in police stations. Emotion recognition was intended to identify a nervous or anxious state of mind, like a lie detector. As one human rights advocate warned the BBC, "It's people who are in highly coercive circumstances, under enormous pressure, being understandably nervous, and that's taken as an indication of guilt." Some schools in the country have also used the tech on students to measure comprehension and performance.

Ella Jakubowska, a senior policy advisor at the Brussels-based organization European Digital Rights, tells me she has yet to hear of any credible use case for emotion recognition: "Both [facial recognition and emotion recognition] are about social control; about who watches and who gets watched; about where we see a concentration of power."

What's more, there's evidence that emotion recognition models just can't be accurate. Emotions are complicated, and even human beings are often quite poor at identifying them in others. Even as the technology has improved in recent years, thanks to the availability of more and better data as well as increased computing power, the accuracy varies widely depending on what outcomes the system is aiming for and how good the data is going into it.

"The technology is not perfect, although that probably has less to do with the limits of computer vision and more to do with the fact that human emotions are complex, vary based on culture and context, and are imprecise," Castro told me.

Which brings me back to Darwin. A fundamental tension in this field is whether science can ever determine emotions. We might see advances in affective computing as the underlying science of emotion continues to progress, or we might not.

It's a bit of a parable for this broader moment in AI. The technology is in a period of extreme hype, and the idea that artificial intelligence can make the world significantly more knowable and predictable can be appealing. That said, as AI expert Meredith Broussard has asked, can everything be distilled into a math problem?

A new study from researchers in Switzerland finds that news is highly valuable to Google Search and accounts for the majority of its revenue. The findings offer some optimism about the economics of news and publishing, especially if you, like me, care deeply about the future of journalism. Courtney Radsch wrote about the study in one of my favorite publications, Tech Policy Press. (On a related note, you should also read this sharp piece on how to fix local news from Steven Waldman in the Atlantic.)

Read the original post:

AI isn't great at decoding human emotions. So why are regulators targeting the tech? - MIT Technology Review


Texas A&M Professor Receives NSF Grant To Study AI-Powered … – Texas A&M University Today

With the unprecedented tools now available through artificial intelligence, Dr. Zixiang Xiong will work to create new parameters for the evolving process for data compression.

Getty Images

Each day, an estimated 330,000 billion bytes of data are generated in various forms. This data is shared in many ways: videos, images, music, gaming, streaming content and video calls. This immense amount of data requires compression to save on storage capacity, speed up file transfer and decrease costs for storage hardware and network bandwidth.

Dr. Zixiang Xiong, professor and associate department head in the Department of Electrical and Computer Engineering at Texas A&M University, recently received a National Science Foundation grant to research the fundamental limits of learned source coding, or data compression that uses machine learning, now that new machine learning methods have permeated the scene.

The project is a culmination of over 30 years of research conducted by Xiong. Since the late 1980s, he has studied the area of data compression and has seen the evolution of the process.

In the 1990s, successfully sharing an image file required converting the file into text and back into an image. The flip side is now possible; machine learning generative models such as ChatGPT create new content and images based on input text into the model. With the unprecedented tools now available through artificial intelligence, Xiong will work to create new parameters for the evolving process.

"We always ask ourselves before we begin any engineering project, 'What's the theoretical limit?'" said Xiong. "That's very fundamental now because AI is completely different. There's no current theory because we don't know the theoretical limit."

This project aims to understand what types of machine learning algorithms can compress data well and how many samples are needed to learn compression well. While gaining a fundamental understanding of data compression that utilizes machine learning, Xiong hopes to develop more powerful compression methods, leading to more efficient use of wireless communication and less energy consumption by mobile devices.

Traditional compression methods include the well-known JPEG compression for smartphone images; this is a lossy compression method, which means that some image quality is lost. Lossless compression, meaning no quality is lost, is typically used for compressing computer files, such as with Zip, and for music streaming. This project aims to develop boundaries for the performance of machine learning for both compression methods.
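
To make the distinction concrete, here is a small Python sketch (our own example, unrelated to the project) showing the defining property of a lossless codec such as zlib: the original bytes are recovered exactly after a compress/decompress round trip, which is precisely what a lossy codec like JPEG trades away in exchange for smaller files:

```python
import zlib

data = b"an example byte string, repeated so the codec has redundancy to exploit " * 100

compressed = zlib.compress(data, level=9)
restored = zlib.decompress(compressed)

print(len(data), "->", len(compressed), "bytes")   # much smaller thanks to repetition
assert restored == data                            # lossless: bit-exact recovery
```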

In 2020, Xiong worked on a project titled Deep Learning based Scalable Compression Framework with Amazon Prime Video, which was preliminary work that led to this new project.

Collaborators for this project include Dr. Anders Høst-Madsen and Dr. Narayana Santhanam, both professors at the University of Hawaii at Mānoa.

View post:

Texas A&M Professor Receives NSF Grant To Study AI-Powered ... - Texas A&M University Today


Achieving the Singularity is ‘All About Progress’: AI Executive – Decrypt

As artificial intelligence continues its rapid advance, one word has computer scientists and science fiction fans waiting with bated breath: singularity. The word defines a pivotal moment in the future where technological growth becomes uncontrollable and irreversible, and disrupts civilization.

Whether that moment is tantalizing or terrifying, one firm working to bring it about is aptly named AI and blockchain developer SingularityNET.

"Our vision is to drive towards a positive, beneficial, benevolent singularity for the benefit of all humankind," SingularityNET COO Janet Adams told Decrypt in an interview.

Founded in 2017 by Ben Goertzel and David Hanson, SingularityNET is a decentralized marketplace for artificial intelligence programs. The company says it wants to make advanced AI available to everyone through blockchain technology. Hanson holds a Ph.D. in Interactive Arts and Engineering from The University of Texas and a Master of Science in Applied Neuroscience from King's College London, while Goertzel earned his Ph.D. in Mathematics from Temple University.

A major step towards singularity is bridging the gap between artificial intelligence and robotics, Adams explained, another focus of the company.

In computer science, singularity is achieved when artificial intelligence surpasses human intelligence, resulting in rapid, unpredictable technological advancements and societal changes. Why, Decrypt asked, would anyone want to create a robot or entity that could one day outsmart humans?

The answer, according to Adams, is progress.

"Progress just happens all by itself," Adams said. "Technological progress is a forward way: artificial intelligence and the programming of statistics into computer programs, it's been happening for decades."

While many in the fields of science and science fiction have helped develop the idea of the singularity, the term was coined by Hungarian-American mathematician John von Neumann in the late 1950s. In his book, The Singularity Is Near, computer scientist, author, and futurist Ray Kurzweil predicted singularity would occur by 2045.

Adams says we are running ahead of schedule.

"We acknowledge that there are a number of research breakthroughs to happen before we get to human-level AGI (artificial general intelligence)," she said. "But we have built the technology stack for that AGI, and they could even emerge sooner than three to seven years."

While AI and AGI may sound similar, they are years apart in scope. AI (Artificial Intelligence) is like a calculator that's good at a specific task. AGI (Artificial General Intelligence), on the other hand, is like a human brain that can learn and perform any intellectual task that a human can.

In 2021, SingularityNET co-founder and CEO of Hanson Robotics David Hanson released Sophia, a robot that, in collaboration with artist Andrea Bonaceto, launched a series of AI and neural network-powered NFT artwork on Nifty Gateway. That same year, SingularityNET launched the Sophia DAO, a decentralized autonomous organization dedicated to Sophia's growth, well-being, and development.

SingularityNET's latest AI project is an AI Diva named Desdemona, or "Desi," created during the COVID pandemic. The plan for Desdemona, Adams said, includes becoming an AI popstar, celebrity, and influencer.

Adams said people form strong connections with humanoid robots, like Desdemona and Sophia, because of their highly expressive faces.

"Desdemona has 36 motors in her face, and they can move in any emotion you can think of, and more emotions than you can think of," Adams said. She can perceive and mirror human emotions using facial recognition, voice tone, and word analysis.

Image: Desdemona/SingularityNET

Adams said that because of its rich suite of inputs, Desdemona can understand how a person is feeling and respond appropriately, for example dropping her tone of voice to match that of the person to whom she is speaking.

While SingularityNET is optimistic about human/robot relations, including for young people, psychologists and experts are sounding the alarm about what this bonding could mean, especially for children.

Last week, the Center on Countering Digital Hate released a report titled "AI and Eating Disorders" that accused AI chatbots like OpenAI's ChatGPT and Google Bard of promoting eating disorders and unhealthy and unrealistic body images, and not doing enough to safeguard users.

Other AI-focused Web3 projects include The Graph, Fetch.AI, Numeraire, and Ocean Protocol. These projects and their associated tokens received substantial attention following the launch of OpenAI's GPT-4 in March, with the price of their respective tokens hitting double digits.

"What we live by and breathe by at SingularityNET is that every algorithm we develop and every action we take across our decentralized community is for good," Adams said.

She asserted that decentralizing the development of AI technology is a crucial step in creating artificial intelligence that benefits all humanity and not a small group of developers.

"We're really pushing the boundary with our decentralization program," Adams said. "We're looking to outsource our decisions, the oversight of our AI, to a great decentralized group globally."

Cybersecurity is essential in safely developing these models. Adams said SingularityNET has put considerable effort into protecting user privacy and data. Adams pointed to blockchain technology as a means to ensure privacy, so that data is used with permission and users benefit from allowing companies to use their data.

In order for AI to develop responsibly, Adams said, it has to be programmed, overseen, regulated, and developed by a wide range of people to ensure the best outcome.

"Humans will progress," she said. "The way, from our perspective, is to massively reduce human suffering and inequality and transform our existence on the planet, eradicate diseases, resolve supply chains, finding all new fixes and solutions for global warming."

"It's the upside. The utopic upside of artificial intelligence is almost unimaginable," she concluded.

Visit link:

Achieving the Singularity is 'All About Progress': AI Executive - Decrypt


Chances are you haven't used A.I. to plan a vacation. That's about to change – CNBC

Travelers are still skeptical about AI, but most major travel companies aren't.

Nuthawut Somsuk | Istock | Getty Images

According to a global survey of more than 5,700 travelers commissioned by Expedia Group, the average traveler spends more than five hours researching a trip and reviews 141 pages of content; for Americans, it's a whopping 277 pages.

And that's just in the final 45 days before departing.

Enter generative artificial intelligence, a technology set to simplify that process and allow companies to better tailor recommendations to travelers' specific interests.

What could that look like? The hope is that AI will not only plan itineraries, but communicate with hotels, draft travel budgets, even function as a personal travel assistant, and in the process fundamentally alter the way companies approach travelers.

A typical home search on Airbnb, for example, produces results that don't take past searches into account. You may have a decade of booking upscale, contemporary homes under your belt, but you'll likely still be offered rustic, salt-of-the-earth rentals if they match the filters you've set.

But that could soon change.

During an earnings call in May, CEO Brian Chesky discussed how AI could alter Airbnb's approach. He said: "Instead of asking you questions like: 'Where are you going, and when are you going?' I want us to build a robust profile about you, learn more about you and ask you two bigger and more fundamental questions: Who are you, and what do you want?"

While AI that provides the ever-elusive goal of "personalization at scale" isn't here yet, it's the ability to search massive amounts of data, respond to questions asked using natural language and "remember" past questions to build on a conversation the way humans do that has the travel industry (and many others) sold.

In a survey conducted in April by the market research firm National Research Group, 61% of respondents said they're open to using conversational AI to plan trips but only 6% said they actually had.

Furthermore, more than half of respondents (51%) said that they didn't trust the tech to protect their personal data, while 33% said they feared it may provide inaccurate results.

Yet while travelers are still debating the safety and merits of using AI for trip planning, many major travel companies are already diving headfirst into the technology.

Just look at the names on this list.

Then the summer of 2023 saw a burst of AI travel tech announcements.

In June:

HomeToGo's new "AI Mode" allows travelers to find vacation rental homes using natural language requests.

Source: HomeToGo

In July:

Now, more travel companies have ChatGPT plugins, including GetYourGuide, Klook, Turo and Etihad Airways. And a slew of AI-powered trip planners from Roam Around (for general travel), AdventureGenie (for recreational vehicles), Curiosio (for road trips) added more options to the growing AI travel planning market.

Travel planning is the most visible use of AI in the travel industry right now, but companies are already planning new features.

Trip.com's Senior Product Director Amy Wei said the company is considering developing a virtual travel guide for its latest AI product, TripGenie.

"It can help provide information, such as an introduction to historical buildings and objects in a museum," she told CNBC. "The vision is to create a digital travel companion that can understand and converse with the traveler and provide assistance at every step of the journey."

The travel news site Skift points out AI may be used to predict flight delays and help travel companies respond to negative online reviews.

The company estimates chatbots could bring $1.9 billion in value to the travel industry by allowing companies to operate with leaner customer service staff, freeing up time for humans to focus on complex issues. Chatbots needn't be hired or trained, can speak multiple languages, and "have no learning curve," as Skift points out in a report titled "Generative AI's Impact on Travel."

Overall, Skift's report predicts generative AI could be a $28.5 billion opportunity for the travel industry, an estimate that if the tools are used to "their full potential ... will look conservative in hindsight."

Read this article:

Chances are you haven't used A.I. to plan a vacation. That's about to change - CNBC


Humane will share more about its mysterious Ai Pin the same day … – The Verge

Humane, a startup founded by ex-Apple employees, plans to share more about its mysterious AI-powered wearable on the same day as a solar eclipse in October, co-founder Imran Chaudhri said in a video on the company's Discord (via Inverse). The solar eclipse is set to happen on October 14th.

The device, officially called the Humane Ai Pin (in the Discord video, Chaudhri pronounces that middle word like you would say the word AI), is being promoted as something that can replace your smartphone. In a wild demo at this year's TED conference, Chaudhri uses the device, which is somehow attached to his jacket at chest height, to do things like:

"There's an incredible celestial event that's happening in October: an eclipse," Chaudhri said in the Discord video. "An eclipse is an important symbol for us. It's a new beginning spiritually, that's what it means. It's something that the whole world notices and comes together. We are certainly looking forward to being able to have a special moment on that day."

"We can't wait, all of us, to be able to walk down the street and see people using what we've built," Bongiorno said.

If you'd like to hear their comments for yourself, we've embedded the video from the Humane Discord below.

Read more:

Humane will share more about its mysterious Ai Pin the same day ... - The Verge


2 AI Stocks That Could Help You Build Generational Wealth – The Motley Fool

Generational wealth is a common objective of stock investors. With the market's ability to generate long-term returns, it's an excellent place to preserve and grow wealth that can eventually be passed down to the next generation.

Generating significant long-term returns may have become a bit easier in the past year with the rise of artificial intelligence (AI) and its potential to grow businesses. The benefit of AI-driven applications is spreading into many diverse industries and exciting investors about the possibilities it can create. As a result, many AI-related stocks saw their prices rise significantly, especially when news came out about advances made possible by OpenAI's ChatGPT.

Two AI-related stocks that got fresh attention are Alphabet (GOOGL 1.37%) (GOOG 1.27%) and Broadcom (AVGO 2.93%). Both of these stocks positioned themselves to drive wealth creation through the AI initiatives they are associated with. Let's take a closer look at what these two AI stocks are doing to build generational wealth for their investors.

Alphabet is a quintessential AI stock. Since declaring itself an "AI-first" company in 2016, it has integrated the technology into products ranging from YouTube to the cameras in its Pixel phone. The most profound AI-related efforts may come from Google DeepMind, the merger of Google Research and the AI research company DeepMind. Their efforts enhanced Google's search engine through its Search Generative Experience (SGE). It has also capitalized on the technology by developing and improving Bard, Alphabet's alternative to ChatGPT.

Additionally, Alphabet uses AI to optimize ads. Although Alphabet has worked to diversify its revenue sources, advertising accounted for 78% of the company's revenue in Q2. That means AI technology is driving a critical part of Alphabet's business.

So far this year, Alphabet has generated $144 billion in revenue, 5% more than the same period last year. And even though it significantly increased research and development spending, it grew net income over that timeframe by 3% to $33 billion.

Admittedly, the stock is not cheap at a 27 P/E ratio, especially with this year's growth. But with revenue rising 41% in 2021 and 10% in 2022, growth could return as AI boosts its advertising and cloud products. Finally, with $118 billion in liquidity and $39 billion in free cash flow generated so far this year, Alphabet can not only preserve generational wealth, but also grow it as conditions improve.

Broadcom's potential to thrive thanks to AI comes from how the technology is being used in both its semiconductor solutions and infrastructure software segments. Its chip segment works closely with clients to develop specialized semiconductors for their needs. This segment recently released Jericho3-AI, an accelerator chip that runs massive machine learning (ML) workloads. The company claims it will balance workloads and operate congestion-free as it enables high-performance AI.

Its infrastructure software segment also offers its AIOps solution. This applies automation and data science to deliver actionable insights powered by AI and ML. Additionally, Broadcom's AI should experience a considerable boost when the company completes its takeover of VMware at the end of October 2023. VMware provides cloud computing and virtualization software, positioning it to support the workloads that power AI and ML.

Even without VMWare in the fold, Broadcom generated $18 billion in revenue in the first half of 2023, rising 12% from year-ago levels. With the company reducing the cost of revenue and keeping expense growth in check, the net income for the first six months of 2023 of $7.3 billion surged higher by 43%.

Broadcom's power to build generational wealth also comes from its dividend. The payout of $18.40 per year works out to a dividend yield of 2.2%, well above the S&P 500 average of 1.5%. Moreover, that payout cost Broadcom $3.8 billion so far this year. But with Broadcom generating $8.3 billion in free cash flow this year, it should be able to cover the payout costs and continue to raise the payout, which has risen at least once yearly since 2010.

Indeed, new investors will have to pay about 26 times its earnings to benefit from that income stream. But with this tech stock trading up 50% so far in 2023 and nearly 2,200% over the last 10 years, Broadcom has proven its ability to generate rising income and long-term wealth for its shareholders.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Will Healy has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool recommends Broadcom and VMware. The Motley Fool has a disclosure policy.

Excerpt from:

2 AI Stocks That Could Help You Build Generational Wealth - The Motley Fool
