
From theory to reality: Google claims to have created physics-defying ‘time crystal’ inside its quantum computer – Silicon Canals

Image credits: Google Quantum AI

As the quantum computing race heats up, companies across many countries are spending billions on different qubit technologies to stabilise and commercialise the technology. While it is too early to declare a winner in quantum computing, Google's quantum computing lab may have created something truly remarkable.

In the latest development, researchers at Google, in collaboration with physicists at Princeton, Stanford, and other universities, have created the world's first time crystal inside a quantum computer.


The time crystal developed by Google could be a landmark accomplishment for fundamental physics and quantum computing. Dreamt up by the Nobel Prize-winning physicist Frank Wilczek in 2012, the notion of time crystals is now moving from theory to reality.

In a recently published study, 'Observation of Time-Crystalline Eigenstate Order on a Quantum Processor', the researchers claim that the time crystal is a new phase of matter that appears to sidestep the second law of thermodynamics.

A time crystal may sound like a complicated component of a time machine, but it is not. So, what exactly are time crystals? According to the researchers, a time crystal is a new phase of matter that alternates between two configurations without ever losing energy in the process.

To put it simply, regular crystals are arrangements of molecules or atoms that form a regular, repeated pattern in space. A time crystal, on the other hand, is an arrangement of molecules or atoms that forms a regular, repeated pattern in time: it sits in one pattern for a while, then flips to another, and cycles back and forth indefinitely.
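To make the 'pattern in time' idea concrete, here is a deliberately crude, purely classical sketch (an assumption-laden cartoon, not the experiment itself): an array of spins is flipped once per drive period, so the observable pattern repeats only every two periods, the kind of subharmonic 'tick-tock' response a discrete time crystal shows. The real effect depends on quantum many-body physics and stays stable even when the flips are imperfect, which this toy does not capture.

```python
# Toy cartoon of the period-doubled "tick-tock" response described above.
# Purely classical and illustrative: real discrete time crystals rely on quantum
# many-body effects (and many-body localisation) that are not modelled here.

def drive(spins):
    """One drive period: flip every spin (an idealised, error-free pi-pulse)."""
    return [-s for s in spins]

spins = [1] * 8  # start with all spins "up" (pattern A)

for period in range(6):
    spins = drive(spins)
    pattern = "A" if spins[0] == 1 else "B"
    print(f"after drive {period + 1}: pattern {pattern}")

# The output alternates B, A, B, A, ...: the system returns to the same pattern
# only every *two* drive periods, the subharmonic signature of a time crystal.
```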

Explaining time crystals in layman's terms to Silicon Canals, Loïc Henriet, head of Applications and Quantum Software at Pasqal, says: 'Some phases of matter are known to spontaneously break symmetries. A crystal breaks spatial translation: one finds atoms only at well-defined positions. Magnets break discrete spin symmetry: the magnetisation points in a well-defined direction. However, no known physical system was known to break one of the simplest symmetries: translation in time. Google's DTC result is the most convincing experimental evidence of the existence of non-equilibrium states of matter that break time-translation symmetry.'

Further, time crystals can cycle endlessly within an isolated system without increasing entropy or expending any fuel or energy.

'Our work employs a time-reversal protocol that discriminates external decoherence from intrinsic thermalisation, and leverages quantum typicality to circumvent the exponential cost of densely sampling the eigenspectrum,' say the researchers. 'In addition, we locate the phase transition out of the DTC with experimental finite-size analysis. These results establish a scalable approach to study non-equilibrium phases of matter on current quantum processors.'

For the demonstration, the researchers used a chip with 20 qubits to serve as the time crystal. It's worth mentioning that the experiments were performed on Google's Sycamore device, which in 2019 solved a task in 200 seconds that the company claimed would take a conventional computer 10,000 years.

According to the researchers, their experiment offers preliminary evidence that their system can create time crystals. If it is confirmed, this discovery could have profound implications for the world of quantum computing.

Henriet adds: 'This result is most interesting from a fundamental physics standpoint, as an identification of a novel quantum phase of matter. In itself, it will not directly impact our day-to-day life, but it illustrates the richness of many-body quantum physics out of equilibrium. It also proves that quantum processors are now powerful enough to discover new interesting regimes for quantum matter with disruptive properties.'

'The consequence is amazing: you evade the second law of thermodynamics,' says Roderich Moessner, director of the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, and a co-author on the Google paper.

'This is just this completely new and exciting space that we're working in now,' says Vedika Khemani, a condensed matter physicist now at Stanford, who co-discovered the novel phase while she was a graduate student and co-authored the new paper with the Google team.

In 2012, Frank Wilczek came up with the idea of time crystals while teaching a class about ordinary (spatial) crystals.

'If you think about crystals in space, it's very natural also to think about the classification of crystalline behaviour in time,' he told Quanta.

Google's quantum computer has arguably achieved what many thought was impossible. Having said that, the experiment is at a preliminary stage and requires much more work. Moreover, the pre-print version of the research has yet to be peer reviewed and awaits validation from the scientific community.

'There are good reasons to think that none of those experiments completely succeeded, and a quantum computer like [Google's] would be particularly well placed to do much better than those earlier experiments,' University of Oxford physicist John Chalker, who wasn't involved in the research, told Quanta.


See more here:
From theory to reality: Google claims to have created physics-defying 'time crystal' inside its quantum computer - Silicon Canals

Read More..

T-Hub, HCL to collaborate on Quantum Computing and Deep Tech. – The Hindu

Startup ecosystem enabler T-Hub and HCL Technologies have announced a collaboration to explore emerging technologies like Quantum Computing and DeepTech.

As part of the collaboration, T-Hub will connect HCL's Open Innovation Program eSTiP with select startups. 'This partnership will enable HCL to leverage T-Hub's innovation expertise and ecosystem of start-ups, corporates and investors to accelerate its open innovation initiatives,' T-Hub said in a release.

Additionally, HCL will look to curate the startups' solutions for its clients and for focused programme statements, while gaining access to T-Hub's events and demo days.

T-Hub CEO Ravi Narayan said, 'With this partnership, we are focusing on aiding HCL in its vision of strengthening its approach to creating value for its customers and partners through disruptive startups, while also providing our startups with growth opportunities.'

'Our partnership with T-Hub cements our ecosystem innovation journey with additional investments in Quantum Computing experiments as the technology continues to evolve,' said Kalyan Kumar, Chief Technology Officer and Head of Ecosystems at HCL Technologies.

'As Quantum Computing continues to mature and become commercially viable, we hope our continued engagement will bring insights into relevant startups, academia, business collaborators and other innovation ecosystem players,' he added.

Follow this link:
T-Hub, HCL to collaborate on Quantum Computing and Deep Tech. - The Hindu

Read More..

Google announces that it may have created a "time crystal" that breaks physics – Texasnewstoday.com

Researchers in Google's quantum computing division have just published a study on the preprint server arXiv claiming to have used the company's Sycamore quantum computer to create a time crystal that seems to defy physics, and it is honestly hard to say yet how big a deal this could turn out to be.

As Quanta Magazine explains, time crystals are stable yet ever-changing: their states repeat at predictable intervals without ever melting into complete randomness.

Without getting bogged down in the up-spins and down-spins of qubits (the quantum units that can represent both 1s and 0s and are the basis of quantum computing), what Google claims to have done is, figuratively, like taking a checkers board with all the red pieces on one side and all the black pieces on the other and striking the table so that the two sides swap completely, back and forth, without consuming any energy.

The second law of thermodynamics says this simply can't happen, but the time crystal doesn't seem to care about entropy, and Google now claims to have seen one actually working. What's more, it says the process that created it is scalable, so its impact could be enormous.

What the Google researchers have done will continue to be scrutinised: the results have not been peer reviewed and will need to be reproduced before any firm conclusions can be drawn.

That said, if what Google's quantum computer has done can be recreated, time crystals aren't just real, they could actually be put to use. The potential impact of such technology on computer memory alone, let alone on computer processing itself, is difficult to grasp.

After all, it is very difficult to say what will come of a system that defies entropy: nature as we know it does not work that way, and assumptions about entropy are built into every system we have created or observed so far. Assuming these results hold up, it is hard to predict what could be done with such a system, because nothing like it has been seen before, but that is exactly what makes the mystery so exciting.


Follow this link:
Google announces that it may have created a "time crystal" that breaks physics - Texasnewstoday.com

Read More..

Why it's time to wake up to the quantum threat – Finextra

Quantum computing is proving to be enormously exciting for financial institutions. Already, Goldman Sachs and Deutsche Börse are exploring quantum algorithms to calculate risk model simulations 1,000 times faster than currently possible, while BBVA is looking to quantum to optimise investment portfolio management.

But a more sinister aspect to the technology also lurks just around the corner. Because of their computing power, quantum machines will be able to smash through the mathematical algorithms underpinning all modern encryption - posing an unparalleled cybersecurity risk.

It would take a traditional computer years to break the public-key encryption relied on today by just about every financial services company, but a fully-scalable quantum computer could achieve the same in a matter of hours.

According to roadmaps laid out by major players in the field, we will have a quantum computer capable of doing this within the next decade.
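The reason public-key encryption is exposed is that schemes such as RSA rest on the difficulty of factoring large integers, a task Shor's algorithm could perform efficiently on a sufficiently large quantum computer. As a rough illustration only, here is a toy RSA example with deliberately tiny, insecure numbers (Python 3.8+ is assumed for the modular-inverse form of pow); it shows that whoever can factor the public modulus can reconstruct the private key.

```python
# Toy RSA with tiny primes, only to show that the private key falls out of the
# factorisation of the public modulus n. Real keys use 2048-bit or larger moduli,
# which classical computers cannot factor in practice but Shor's algorithm could.

p, q = 61, 53                  # secret primes (an attacker would not know these)
n = p * q                      # public modulus: 3233
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent, derivable only by factoring n

msg = 42
cipher = pow(msg, e, n)        # encrypt with the public key
plain = pow(cipher, d, n)      # decrypt with the private key
assert plain == msg

# "Breaking" the key is exactly a factoring problem:
recovered_p = next(f for f in range(2, n) if n % f == 0)
recovered_q = n // recovered_p
assert {recovered_p, recovered_q} == {p, q}
```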

Mapping the vulnerabilities

Banks and financial institutions use a range of cryptographic algorithms to ensure the security of transactions, including symmetric key cryptography (e.g. 3DES) and public key cryptography. Although public key cryptography is most exposed to the quantum threat, some types of symmetric key cryptography are also vulnerable to attack.

Core to these operations are hardware security modules (HSMs). These form a key part of the physical infrastructure that stores and generates secure keys using cryptographic asymmetric algorithms to authenticate and validate transaction information.

A chain is only as strong as its weakest link, so unless up-to-date, quantum-secure HSMs are in place, there's a risk of quantum attackers exploiting a single vulnerability to expose all data within the payments ecosystem.

What complicates the issue is that quantum decryption can be applied retrospectively.

Bad actors could begin collecting encrypted data from institutions today, with the intent to 'harvest now, decrypt later'. Financial services companies could unknowingly fall victim to an attack today and only suffer the consequences in the future, once quantum computers become available.

Thankfully, some institutions are already paying attention, with early movers like Scotiabank, JP Morgan and Visa all taking the threat seriously.

Beginning the fight back

The world began to take note of the quantum threat when, in 2016, the US National Security Agency issued an official warning to industry. Shortly thereafter, the US National Institute of Standards and Technology (NIST) launched a post-quantum cryptography standardisation project to lay out the path to a quantum-secure future.

NIST is running the process as a competition. The project is now in its final stages, with seven finalist algorithms left after 80 submissions from six continents. The final algorithms will be chosen in 2021, with draft standards to be published thereafter.

It's anticipated that the US government will require contractors to incorporate the new NIST standards in order to conduct business with its agencies. As critical infrastructure, financial institutions are also likely to find that quantum-secure cryptography soon becomes a technical necessity.

The path to quantum security

The migration to new cryptography standards will be a massive undertaking - one of the biggest cybersecurity shifts in decades.

The transition will be complicated for banks, too. Each institution will be starting from its own unique position, with its own legacy systems and infrastructure, and each will be vulnerable to the quantum threat in a different way.

Financial institutions can save time in the long run by taking steps to plan their own transition before NIST's new standards are even announced.

The first step is to conduct an audit, pinpointing each and every place where encryption is being used within the organisation. This will help to identify weak spots, find areas in need of rationalisation, and so on.
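In practice, part of that audit is simply building an inventory of every place cryptographic algorithms are referenced in code and configuration. The sketch below is a hypothetical starting point, with an illustrative (not authoritative) keyword list and set of file extensions; a real audit would also cover certificates, HSM configuration, protocol settings and third-party dependencies.

```python
# Rough sketch of a crypto-usage inventory: walk a source tree and flag lines
# that mention common cryptographic algorithms or key-handling terms.
# Keyword list and extensions are illustrative only.
import os

KEYWORDS = ["rsa", "ecdsa", "dsa", "diffie", "3des", "aes", "sha1", "md5",
            "private_key", "x509", "tls"]
EXTENSIONS = (".py", ".java", ".go", ".cs", ".conf", ".yaml", ".yml")

def scan(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    for lineno, line in enumerate(handle, 1):
                        lowered = line.lower()
                        if any(word in lowered for word in KEYWORDS):
                            findings.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file; a real audit would log this
    return findings

if __name__ == "__main__":
    for path, lineno, text in scan("."):
        print(f"{path}:{lineno}: {text}")
```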

NIST agrees that companies should start preparing for the transition today: 'It is critical to begin planning for the replacement of hardware, software, and services that use public-key algorithms now, so the information is protected from future attacks.'

Looking ahead

Institutions have invested huge amounts of time and effort building customer trust in digital banking, and cryptography was the main mathematical tool that allowed this to happen.

Now that quantum computers threaten to break it, it's time for the sector to fight back.

The security of all sensitive data, past and present, relies on it.

Continue reading here:
Why it's time to wake up to the quantum threat - Finextra

Read More..

Quantum Computing Market 2021-2025: Top Trends, Business Opportunity, and Growth Strategy – The Manomet Current

Quantum Computing Market by Component, Application, Services, and Region- Forecast to 2025

The Global Quantum Computing Market Research Report 2021-2025 is a significant source of insight for business specialists. Published by Market Insights Reports, it is an overall investigation of, and provides thorough information regarding, the market size and market dynamics of quantum computing. It furnishes a business overview with growth, historical and futuristic cost analysis, and revenue, demand and supply data (upcoming identifiers). The research analysts give a detailed depiction of the value chain and its distributor analysis. The quantum computing market study gives extensive information that improves the understanding, scope, and application of this report. This is a recent report covering the COVID-19 impact on the market.

The Quantum Computing market is expected to grow from USD 472 million in 2021 to USD 1,765 million by 2025, at a CAGR of 30.2%.
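For readers who want to sanity-check such projections, compound annual growth rate (CAGR) is computed from the start value, end value and number of years, as in the small sketch below. This is illustrative only: the report's own 30.2% figure depends on the base year and period it assumes, which are not stated here.

```python
# CAGR = (end / start) ** (1 / years) - 1
# Illustrative only: the report's quoted CAGR depends on its own base year and period.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(472, 1765, 4):.1%}")  # 2021 -> 2025 treated as a four-year span
```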

Click the link to get a Free Sample Copy of the Report-

https://www.marketinsightsreports.com/reports/06211315651/global-quantum-computing-market-size-status-and-forecast-2019-2025/inquiry?Source=Manomet&Mode=AK

Top Companies in the Global Quantum Computing Market are Sterimar, LABORATOIRE DE LA MER, Gerolymatos International, Humer (Laboratoire URGO), Gifrer, GSK, Nacur Healthcare

Quantum computing is gaining traction in the banking and financial services industry, which is focused on increasing the speed of trade activities, transactions, and data processing manifold. One of the most significant potential applications of quantum computing is simulation. Quantum computing helps in identifying improved and more efficient ways to manage financial risk. The processing time and the cost of high-quality solutions can increase exponentially if classical computers are used in financial institutions, while quantum computers can carry out speedy operations at optimised cost, resulting in cost savings and new opportunities for revenue generation.

Rise in investments in quantum computing technology-

Various government agencies related to the global space and defense sector are investing increasingly in the development of quantum computing technology so that different optimization and simulation strategies can be implemented with quantum computers. Governments of various countries across the world are making significant investments to support their research institutes for the development of quantum computing technology. China is investing significantly in research and development activities related to quantum computing technology. Governments of the US and Australia, as well as of the countries of the European Union, are forging ahead with quantum computing initiatives. For instance, in August 2017, the Commonwealth Bank (CBA) joined telco firm Telstra, the Federal Government, the New South Wales Government, and the University of New South Wales (UNSW) in a USD 83 million venture to establish the first quantum computing company of Australia.

Quantum Computing Market Segmentation:

This report segments the Global Quantum Computing Market based on the following types-

Simulation

Optimization

Sampling

Based on Application, the Global Quantum Computing Market is divided into-

Defense

Banking & Finance

Energy & Power

Chemicals

Healthcare & Pharmaceuticals

Prime Takeaways on Quantum Computing Market Report:

Explore Full Report With Detailed TOC Here:

https://www.marketinsightsreports.com/reports/06211315651/global-quantum-computing-market-size-status-and-forecast-2019-2025?Source=Manomet&Mode=AK

Geographic Coverage of Quantum Computing Market Report-

The Quantum Computing report contains detailed country-level analysis, market revenue, market value and forecast analysis (2016-2027) for the regions covered.

Reasons for purchasing this Report-

Dear Reader,

As the quantum computing industry enters a new and different chapter in its history after the second wave of the pandemic, a subscription to the world's leading B2B quantum computing market report will make sure that you are kept up to date with the latest industry trends and have access to the latest market data, covering both the qualitative and quantitative aspects along with key company information.

SUBSCRIBE TODAY

Subscribing today means you will have access to the following:

In addition to your PDF copy, you will also benefit from access to 1 free press release with the chance to broadcast the content of your choice on the internet.


About Us:

MarketInsightsReports provides syndicated market research on industry verticals including Healthcare, Information, and Communication Technology (ICT), Technology and Media, Chemicals, Materials, Energy, Heavy Industry, etc. MarketInsightsReports provides global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.

Contact Us:

Irfan Tamboli (Head of Sales) Market Insights Reports

Phone: + 1704 266 3234 | +91-750-707-8687

sales@marketinsightsreports.com | irfan@marketinsightsreports.com

More here:
Quantum Computing Market 2021-2025: Top Trends, Business Opportunity, and Growth Strategy - The Manomet Current

Read More..

Superconductivity Research: Researchers Develop New Material that Enables Quantum Information-Based Technology – Science Times

After receiving a cash grant of $25 million back in 2019 to establish the first-ever National Science Foundation Quantum Foundry, UC Santa Barbara researchers have begun developing a new material that would enable quantum information-based technologies like quantum computing, sensing, communications, and simulation.

Today, researchers have succeeded in designing a new superconductive material, a breakthrough in materials science.


In a study published in the journal Nature Materials titled "Unconventional Chiral Charge Order in Kagome Superconductor KV3Sb5", Stephen Wilson, Foundry co-director and UCSB materials professor, highlights how the new material was developed into a prime candidate as a superconductor. A superconductor is a material in which electrical resistance vanishes and magnetic fields are expelled. It can also be indispensable in future quantum physics applications.

Previously, a study described a new material known as cesium vanadium antimonide (CsV3Sb5) that was observed to exhibit a mixture of characteristics involving a patterning of self-organized charges intertwined with a superconducting state. As it turns out, the same unusual characteristics are exhibited by the related materials KV3Sb5 and RbV3Sb5, which are the subject of the recent paper, as reported by the Current.


Wilson noted that materials from this group of compounds are expected to host a wide variety of charge density wave physics; their peculiar, self-organized patterning of electrons is the focus of the recent work.

The predicted charge density wave state, along with the other exotic physics discussed, arises from the network of vanadium ions in the new material, as reported by Phys.org. They form a corner-sharing network of triangles known as a kagome lattice. KV3Sb5 was found to be a rare metal built from these kagome lattice planes that, surprisingly, also superconducts. Some of the material's other properties have led the researchers to speculate that the charges may form small loops of current, creating a localized magnetic field.

For years, materials scientists and physicists have predicted that a material would one day be made to exhibit the form of charge density wave that breaks the time-reversal symmetry.

Wilson explains that this means time-reversal symmetry is broken by certain patterns on the kagome lattice in which the charge moves around a tiny loop. The loop is similar to a current loop, which generates a magnetic field. This state would be a new electronic state of matter with significant consequences for the underlying unconventional superconductivity.

This is the kind of scientific work for which the Quantum Foundry was established. It plays a lead role in developing new materials, with its researchers discovering new superconductors and finding signatures of charge density waves in newly developed materials. The materials are now studied worldwide because many of their aspects interest various research communities.

If KV3Sb5 turns out to be what it is suspected of being, it could be used to create a topological qubit, which would be useful and necessary in quantum information applications such as quantum computing and sensing.


Check out more news and information on Quantum Physics in Science Times.

See the rest here:
Superconductivity Research: Researchers Develop New Material that Enables Quantum Information-Based Technology - Science Times

Read More..

What is Artificial Intelligence (AI)? | Oracle

Despite AI's promise, many companies are not realizing the full potential of machine learning and other AI functions. Why? Ironically, it turns out that the issue is, in large part...people. Inefficient workflows can hold companies back from getting the full value of their AI implementations.

For example, data scientists can face challenges getting the resources and data they need to build machine learning models. They may have trouble collaborating with their teammates. And they have many different open source tools to manage, while application developers sometimes need to entirely recode models that data scientists develop before they can embed them into their applications.

With a growing list of open source AI tools, IT ends up spending more time supporting the data science teams by continuously updating their work environments. This issue is compounded by limited standardization across how data science teams like to work.

Finally, senior executives might not be able to visualize the full potential of their company's AI investments. Consequently, they don't lend enough sponsorship and resources to creating the collaborative and integrated ecosystem required for AI to be successful.

Original post:
What is Artificial Intelligence (AI)? | Oracle

Read More..

8 Examples of Artificial Intelligence in our Everyday Lives

Main Examples of Artificial Intelligence Takeaways:

The words artificial intelligence may seem like a far-off concept that has nothing to do with us. But the truth is that we encounter several examples of artificial intelligence in our daily lives.

From Netflix's movie recommendations to Amazon's Alexa, we now rely on various AI models without knowing it. In this post, we'll consider eight examples of how we're already using artificial intelligence.

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on techniques such as deep learning to function.

American computer scientist John McCarthy coined the term artificial intelligence back in 1956. At the time, McCarthy only created the term to distinguish the AI field from cybernetics.

However, AI is more popular than ever today due to:

Hollywood movies tend to depict artificial intelligence as a villainous technology that is destined to take over the world.

One example is the artificial superintelligence system Skynet from the film franchise Terminator. There's also VIKI, an AI supercomputer from the movie I, Robot, which deemed that humans can't be trusted with their own survival.

Hollywood has also depicted AI as superintelligent robots, as in the movies I Am Mother and Ex Machina.

However, the current AI technologies are not as sinister or quite as advanced. With that said, these depictions raise an essential question:

No, not exactly. Artificial intelligence and robotics are two entirely separate fields. Robotics is a technology branch that deals with physical robots: programmable machines designed to perform a series of tasks. On the other hand, AI involves developing programs to complete tasks that would otherwise require human intelligence. However, the two fields can overlap to create artificially intelligent robots.

Most robots are not artificially intelligent. For example, industrial robots are usually programmed to perform the same repetitive tasks. As a result, they typically have limited functionality.

However, introducing an AI algorithm to an industrial robot can enable it to perform more complex tasks. For instance, it can use a path-finding algorithm to navigate around a warehouse autonomously.
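As a concrete illustration of that warehouse example, the sketch below runs a breadth-first search over a toy grid map (an assumption made purely for illustration, not how any particular vendor's robots work); production systems use costed planners such as A*, live sensor updates and dynamic obstacle handling.

```python
# Minimal breadth-first search on a toy warehouse grid (0 = open floor, 1 = shelf).
from collections import deque

GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def shortest_path(start, goal):
    queue = deque([start])
    came_from = {start: None}          # remembers how each cell was reached
    while queue:
        current = queue.popleft()
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) \
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    if goal not in came_from:
        return None                    # goal unreachable
    path, node = [], goal
    while node is not None:            # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

print(shortest_path((0, 0), (4, 4)))   # list of grid cells from start to goal
```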

To understand how thats possible, we must address another question:

The four artificial intelligence types are reactive machines, limited memory, Theory of Mind, and self-aware. These AI types exist as a kind of hierarchy, where the simplest level requires basic functioning and the most advanced level is, well, all-knowing. Related subsets and enablers of AI include big data, machine learning, and natural language processing.

The simplest types of AI systems are reactive. They can neither learn from experiences nor form memories. Instead, reactive machines react to some inputs with some output.

Examples of artificial intelligence machines in this category include Google's AlphaGo and IBM's chess-playing supercomputer, Deep Blue.

Deep Blue can identify chess pieces and knows how each of them moves. While the machine can choose the most optimal move from several possibilities, it can't predict the opponent's moves.

A reactive machine doesn't rely on an internal concept of the world. Instead, it perceives the world directly and acts on what it sees.

Limited memory refers to an AI's ability to store previous data and use it to make better predictions. In other words, these types of artificial intelligence can look at the recent past to make immediate decisions.

Note that limited memory is required to create every machine learning model. However, the model can get deployed as a reactive machine type.

The three significant examples of artificial intelligence in this category are:

Self-driving cars are limited memory AIs that make immediate decisions using data from the recent past.

For example, self-driving cars use sensors to identify steep roads, traffic signals, and civilians crossing the streets. The vehicles can then use this information to make better driving decisions and avoid accidents.

In psychology, theory of mind refers to the ability to attribute mental states (beliefs, intent, desires, emotion, knowledge) to oneself and others. It's the fundamental reason we can have social interactions.

Unfortunately, we're yet to reach the Theory of Mind artificial intelligence type. Although voice assistants exhibit some such capabilities, it's still a one-way relationship.

For example, you could yell angrily at Google Maps to take you in another direction. However, it'll neither show concern for your distress nor offer emotional support. Instead, the map application will return the same traffic report and ETA.

An AI system with Theory of Mind would understand that humans have thoughts, feelings, and expectations for how to be treated. That way, it can adjust its response accordingly.

The final step of AI development is to build self-aware machines that can form representations of themselves. It's an extension and advancement of the Theory of Mind AI.

A self-aware machine has human-level consciousness, with the ability to think, desire, and understand its feelings. At the moment, these types of artificial intelligence only exist in movies and comic book pages. Self-aware machines do not exist.

Although self-aware machines are still decades away, several artificial intelligence examples already exist in our everyday lives.

Several examples of artificial intelligence impact our lives today. These include FaceID on iPhones, the search algorithm on Google, and the recommendation algorithm on Netflix. You'll also find other examples of how AI is in use today on social media, in digital assistants like Alexa, and in ride-hailing apps such as Uber.

Virtual filters on Snapchat and the FaceID unlock on iPhones are two examples of AI applications today. While the former uses face detection technology to identify any face, the latter relies on face recognition.

So, how does it work?

The TrueDepth camera on Apple devices projects over 30,000 invisible dots to create a depth map of your face. It also captures an infrared image of the user's face.

After that, a machine learning algorithm compares the scan of your face with the previously enrolled facial data. That way, it can determine whether to unlock the device or not.

According to Apple, FaceID automatically adapts to changes in the user's appearance. These include wearing cosmetic makeup, growing facial hair, or wearing hats, glasses, or contact lenses.

The Cupertino-based tech giant also stated that the chance of fooling FaceID is one in a million.
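Conceptually, the matching step behaves like the sketch below: the enrolled face and the fresh scan are each reduced to a numeric embedding, and the phone unlocks only if the two are similar enough. The vectors, the cosine-similarity metric and the threshold are all illustrative assumptions; Apple's actual FaceID models and thresholds are not public and run inside the Secure Enclave.

```python
# Illustrative only: compare a stored face embedding with a fresh one using cosine
# similarity and a threshold. The numbers are invented for demonstration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

ENROLLED = [0.12, 0.80, 0.35, 0.44]   # embedding captured at enrolment (made up)
THRESHOLD = 0.95                      # tuned so false accepts are extremely rare

def unlock(new_scan):
    return cosine_similarity(ENROLLED, new_scan) >= THRESHOLD

print(unlock([0.10, 0.82, 0.33, 0.45]))  # same person, slightly different scan -> True
print(unlock([0.90, 0.05, 0.70, 0.10]))  # different face -> False
```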

Several text editors today rely on artificial intelligence to provide the best writing experience.

For example, document editors use an NLP algorithm to identify incorrect grammar usage and suggest corrections. Besides auto-correction, some writing tools also provide readability and plagiarism grades.

However, editors such as INK took AI usage a bit further to provide specialized functions. It uses artificial intelligence to offer smart web content optimization recommendations.

Just recently, INK has released a study showing how its AI-powered writing platform can improve content relevance and help drive traffic to sites. You can read their full study here.

Social media platforms such as Facebook, Twitter, and Instagram rely heavily on artificial intelligence for various tasks.

Currently, these social media platforms use AI to personalize what you see in your feeds. The model identifies users' interests and recommends similar content to keep them engaged.

Also, researchers trained AI models to recognize hate keywords, phrases, and symbols in different languages. That way, the algorithm can swiftly take down social media posts that contain hate speech.

Other examples of artificial intelligence in social media include:

Plans for social media platforms involve using artificial intelligence to identify mental health problems. For example, an algorithm could analyze content posted and consumed to detect suicidal tendencies.

Getting answers to queries from a human customer representative can be very time-consuming. That's where artificial intelligence comes in.

Computer scientists train chat robots or chatbots to impersonate the conversational styles of customer representatives using natural language processing.

Chatbots can now answer questions that require a detailed response in place of a specific yes or no answer. What's more, the bots can learn from previous bad ratings to ensure maximum customer satisfaction.

As a result, machines now perform basic tasks such as answering FAQs or taking and tracking orders.
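A heavily simplified sketch of the idea is shown below: the bot matches an incoming question to the closest canned FAQ answer by word overlap and falls back to a human when nothing matches. Real chatbots use trained intent classifiers and dialogue management rather than this bag-of-words shortcut, and the FAQ entries here are invented.

```python
# Toy FAQ matcher: picks the canned answer whose question shares the most words
# with the user's message. Real chatbots use trained NLP intent classifiers.
FAQ = {
    "what are your opening hours": "We are open 9am-6pm, Monday to Friday.",
    "how do i track my order": "You can track your order from the 'My Orders' page.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(user_message):
    words = set(user_message.lower().split())
    best_question = max(FAQ, key=lambda q: len(words & set(q.split())))
    if not words & set(best_question.split()):
        return "Let me connect you with a human representative."
    return FAQ[best_question]

print(answer("Can I track my order somewhere?"))  # -> order-tracking answer
print(answer("Tell me a joke"))                   # -> hand-off to a human
```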

Media streaming platforms such as Netflix, YouTube, and Spotify rely on smart recommendation systems that are powered by AI.

First, the system collects data on users' interests and behavior from various online activities. After that, machine learning and deep learning algorithms analyze the data to predict preferences.

That's why you'll always find movies that you're likely to watch in Netflix's recommendations. And you won't have to search any further.
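Underneath, many recommenders start from something like the user-similarity sketch below: find the user whose past ratings look most like yours and suggest titles they liked that you have not seen. The ratings and the similarity measure are made up for illustration; real systems combine far richer signals (watch time, time of day, thumbnails clicked) and learned models.

```python
# Toy user-based collaborative filtering: recommend titles liked by the most
# similar user. Ratings are invented; real recommenders use learned models.
RATINGS = {
    "alice": {"Stranger Things": 5, "The Crown": 2, "Dark": 5},
    "bob":   {"Stranger Things": 4, "Dark": 5, "Black Mirror": 5},
    "carol": {"The Crown": 5, "Bridgerton": 4},
}

def similarity(a, b):
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    # closer ratings on shared titles -> higher similarity (1 = identical)
    return sum(1 / (1 + abs(a[t] - b[t])) for t in shared) / len(shared)

def recommend(user):
    others = {name: r for name, r in RATINGS.items() if name != user}
    closest = max(others, key=lambda name: similarity(RATINGS[user], others[name]))
    seen = set(RATINGS[user])
    return [title for title, score in RATINGS[closest].items()
            if title not in seen and score >= 4]

print(recommend("alice"))  # bob is most similar -> suggests "Black Mirror"
```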

Search algorithms ensure that the top results on the search engine result page (SERP) have the answers to our queries. But how does this happen?

Search companies usually include some type of quality control algorithm to recognize high-quality content. It then provides a list of search results that best answer the query and offers the best user experience.

Since search engines are made entirely of codes, they rely on natural language processing (NLP) technology to understand queries.

Last year, Google announced Bidirectional Encoder Representations from Transformers (BERT), an NLP pre-training technique. Now, the technology powers almost every English-language query on Google Search.
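At its simplest, relevance ranking can be sketched as scoring each document by how well its words match the query, as in the small TF-IDF-style example below. This is a toy illustration only; engines like Google layer learned language models such as BERT, link analysis and hundreds of other signals on top of ideas like this.

```python
# Tiny relevance ranker: score documents against a query with TF-IDF-weighted
# word overlap. Real search engines use far richer signals and learned models.
import math
from collections import Counter

DOCS = {
    "doc1": "quantum computing uses qubits to perform computation",
    "doc2": "classical computing uses bits and transistors",
    "doc3": "qubits enable quantum algorithms such as shor and grover",
}

def tf_idf_score(query, doc_text):
    words = doc_text.split()
    counts = Counter(words)
    score = 0.0
    for term in query.lower().split():
        if counts[term] == 0:
            continue
        docs_with_term = sum(1 for text in DOCS.values() if term in text.split())
        idf = math.log(len(DOCS) / docs_with_term)   # rarer terms weigh more
        score += (counts[term] / len(words)) * idf   # term frequency * idf
    return score

def search(query):
    return sorted(DOCS, key=lambda name: tf_idf_score(query, DOCS[name]), reverse=True)

print(search("quantum qubits"))  # documents about qubits outrank the classical one
```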

In October 2011, Apple's Siri became the first digital assistant to be standard on a smartphone. However, voice assistants have come a long way since then.

Today, Google Assistant incorporates advanced NLP and ML to become well-versed in human language. Not only does it understand complex commands, but it also provides satisfactory outputs.

Also, digital assistants now have adaptive capabilities for analyzing user preferences, habits, and schedules. That way, they can organize and plan actions such as reminders, prompts, and schedules.

Various smart home devices now use AI applications to conserve energy.

For example, smart thermostats such as Nest use our daily habits and heating/cooling preferences to adjust home temperatures. Likewise, smart refrigerators can create shopping lists based on what's absent from the fridge's shelves.

The way we use artificial intelligence at home is still evolving. More AI solutions now analyze human behavior and function accordingly.

We encounter AI daily, whether we're surfing the internet or listening to music on Spotify.

Other examples of artificial intelligence are visible in smart email apps, e-commerce, smart keyboard apps, as well as banking and finance. Artificial intelligence now plays a significant role in our decisions and lifestyle.

The media may have portrayed AI as a competitor to human workers or a concept that'll eventually take over the world. But that's not the case.

Instead, artificial intelligence is helping humans become more productive and helping us live a better life.

More here:
8 Examples of Artificial Intelligence in our Everyday Lives

Read More..

Top 10 Artificial Intelligence Books for Beginner in 2021 …

In 2021, artificial intelligence is one of the hottest and most in-demand fields; many engineers want to build their careers in AI, data science and data analytics. Working through reliable resources is the best way to learn, so here is a list of the best AI books.

Artificial Intelligence is the field of study that simulates the processes of human intelligence on computer systems. These processes include the acquisition of information, using it, and approximating conclusions. Research topics in AI include problem-solving, reasoning, planning, natural language processing, and machine learning. Automation, robotics and sophisticated computer software and programs characterize a career in Artificial Intelligence. Basic foundations in maths, technology, logic, and engineering can go a long way in kick-starting a career in Artificial Intelligence.

Here we have listed a few basic and advanced Artificial Intelligence books, which will help you find your way around AI.

By Stuart Russell and Peter Norvig

This edition covers the changes and developments in Artificial Intelligence since those covered in the previous edition of this book in 2003. It covers the latest developments in AI in the fields of practical speech recognition, machine translation, autonomous vehicles, and household robotics. It also covers progress in areas such as probabilistic reasoning, machine learning, and computer vision.

You can buy it here.

By James V Stone

In this book, key neural network learning algorithms are explained, followed by detailed mathematical analyses. Online computer programs collated from open source repositories give hands-on experience of neural networks. It is an ideal introduction to the algorithmic engines of modern-day artificial intelligence.

You can buy it here.

By Denis Rothman

This book serves as a starting point for understanding how Artificial Intelligence works with the help of real-life scenarios. You will be able to understand the most advanced machine learning models, understand how to apply AI to blockchain and IoT, and develop emotional quotient in chatbots using neural networks. By the end of this book, you will have understood the fundamentals of AI and worked through a number of case studies that will help you develop your business vision. This book will help you develop the adaptive thinking needed to solve real-life AI cases. Prior experience with Python and statistical knowledge is essential to make the most out of this book.

You can buy it here.

By Chandra S.S.V

This book is primarily intended for undergraduate and postgraduate students of computer science and engineering. This textbook bridges the gap between the difficult contexts of Artificial Intelligence and Machine Learning. It provides a large number of case studies and worked-out examples. In addition to Artificial Intelligence and Machine Learning, it also covers various types of learning like reinforcement, supervised, unsupervised and statistical learning. It features well-explained algorithms and pseudo-code for each topic, which makes this book very useful for students.

You can buy it here.

By Tom Taulli

This book equips you with a fundamental grasp of Artificial Intelligence and its impact. It provides a non-technical introduction to important concepts such as Machine Learning, Deep Learning, Natural Language Processing, Robotics and more. Further the author expands on the questions surrounding the future impact of AI on aspects that include societal trends, ethics, governments, company structures and daily life.

You can buy it here.

By Neil Wilkins

This book gives you a glimpse into Artificial Intelligence and a hypothetical simulation of a living brain inside a computer. This book features the following topics:

You can buy it here.

By Deepak Khemani

This book follows a bottom-up approach, exploring the basic strategies needed for problem-solving, mainly on the intelligence side. Its main features include an introductory course on Artificial Intelligence, a knowledge-based approach using agents throughout, and detailed, well-structured algorithms with proofs.

You can buy it here.

By Mariya Yao, Adelyn Zhou, Marlene Jia

Applied Artificial Intelligence is a practical guide for business leaders who are passionate about leveraging machine intelligence to enhance the productivity of their organizations and the quality of life in their communities. This book focuses on driving concrete business decisions through applications of artificial intelligence and machine learning. It is one of the best practical guides for business leaders looking to get true value from the adoption of machine learning technology.

You can buy it here.

By Mahajan MD, Parag Suresh

This book explores the role of Artificial Intelligence in Healthcare, how it is revolutionizing all aspects of healthcare and guides you through the current state and future applications of AI in healthcare, including those under development. It also discusses the ethical concerns related to the use of AI in healthcare, principles of AI & how it works, the vital role of AI in all major medical specialties, & the role of start-ups and corporate players in AI in healthcare.

You can buy it here.

By Max Tegmark

This book takes its readers to the heart of the latest AI thinking to explore the next phase of human existence. The author explores the burning questions of how to prosper through automation without leaving people jobless, how to ensure that future AI systems work as intended without malfunctioning or getting hacked, and how life can flourish with AI without humanity eventually getting outsmarted by lethal autonomous machines.

You can buy it here.

By Dr. Dheeraj Mehrotra

This book delivers an understanding of Artificial Intelligence and Machine Learning with a better framework of technology.

You can buy it here.

By Peter Norvig

This book teaches advanced Common Lisp techniques in the context of building major AI systems. It reconstructs authentic, complex AI programs using state-of-the-art Common Lisp, builds and debugs robust practical programs while demonstrating superior programming style and important AI concepts. It is a useful supplement for general AI courses and an indispensable reference for a professional programmer.

You can buy it here.

By Rahul Kumar, Ankit Dixit, Denis Rothman, Amir Ziai, Mathew Lamons

This book helps you gain real-world contextualization through deep learning problems concerning research and application, and to design and implement machine intelligence using real-world AI-based examples. It offers knowledge of machine learning, deep learning, data analysis, TensorFlow, Python and the fundamentals of AI, and you will be able to apply your skills in real-world projects.

You can buy it here.

By Giuseppe Bonaccorso, Armando Fandango, Rajalingappaa Shanmugamani

This book is a complete guide to learning popular machine learning algorithms. You will learn how to extract features from your dataset and perform dimensionality reduction by using Python-based libraries. Then you will be learning the advanced features of Tensorflow and implement different techniques related to object classification, object detection, image segmentation and more. By the end of this book, you will have an in-depth knowledge of Tensorflow and will be the go-to person for solving AI problems.

You can buy it here.

By Chris Baker

This book explores the potential consequences of Artificial Intelligence and how it will shape the world in the coming years. It familiarizes how AI aims to aid human cognitive limitations. It covers:

You can buy it here.

By John Mueller and Luca Massaron

This book offers a much-needed entry point for anyone looking to use machine learning to accomplish practical tasks. It makes it easy to understand and implement machine learning seamlessly. It explains how

You can buy it here.

By Ethem Alpaydin

It is a concise overview of machine learning which underlies applications that include recommendation systems, face recognition, and driverless cars. The author offers a concise overview of the subject for the general reader, describing its evolution, explaining important learning algorithms, and presenting example applications.

You can buy it here.

By John D. Kelleher, Brian Mac Namee

It is a comprehensive introduction to the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. Technical and mathematical material is augmented with explanatory worked examples, and case studies illustrate the application of these models in the broader business context. Finally, the book considers techniques for evaluating prediction models and offers two case studies that describe specific data analytics projects through each phase of development, from formulating the business problem to implementation of the analytics solution.

You can buy it here.

By Chris Sebastian

This book traces the development of Machine Learning from the early days of computer learning to machines being able to beat human experts. It explains the importance of data and how massive amounts of it provide ML programmers with the information they need to develop learning algorithms. This book explores the relationship between Artificial Intelligence and Machine Learning.

You can buy it here.

By Deepti Gupta

It is a data science book offering an effective understanding of ML algorithms in R and SAS. This book provides real-time industrial data sets. It covers the role of analytics in various industries, with case studies in Banking, Retail, Telecommunications, Healthcare, Airlines and FMCG, along with analytical solutions.

You can buy it here.

By Lopez de Prado, Marcos

This book teaches readers how to structure Big Data in a way that is amenable to machine learning algorithms, how to conduct research on that data with ML algorithms, how to use supercomputing methods, and how to backtest discoveries while avoiding false positives. The book addresses real-life problems faced by practitioners on a daily basis and explains scientifically sound solutions using math, supported by code and examples.

You can buy it here.

By Stuart Russel

In this book, the author explores the idea of intelligence in humans and machines. He describes the near-term benefits that can be expected, from intelligent personal assistants to vastly accelerated scientific research. The author suggests that AI can be built on a new foundation in which machines are designed to be uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic and committed to pursuing human objectives.

You can buy it here.

A career in Artificial Intelligence can be realized in a variety of spheres which include private organizations, public undertakings, education, arts, health care, government services, and military. The extent of artificial intelligence continues to advance every day. Hence, those with the ability to translate those digital bits of data into meaningful human conclusions will be able to sustain a much rewarding career in this field. You can check out a lot many courses and certifications provided online in this field. If your intent is promising, the courses will definitely be promising and a whole lot of opportunities will show up on your way.


View original post here:
Top 10 Artificial Intelligence Books for Beginner in 2021 ...

Read More..

7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning: 'Mark my words,' he said, billionaire-casual in a furry-collared bomber jacket and days-old scruff, 'AI is far more dangerous than nukes.'

No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

'I am really quite close to the cutting edge in AI, and it scares the hell out of me,' he told his SXSW audience. 'It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential.'

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

'Unless we learn how to prepare for, and avoid, the potential risks,' he explained, 'AI could be the worst event in the history of our civilization.'

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong from the Future of Life Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

'If AI went bad, and 95 percent of humans were killed,' he said, 'then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.'

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. The smarter machines become, he wrote, the more their goals could shift.

Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias issues stemming from outdated information sources or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we're still in the very early stages.

The tech community has long-debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been proposed as a few of the biggest dangers posed by AI.

Destructive superintelligence, aka artificial general intelligence that is created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less a risk than a hypothetical threat and ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with high exposure to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

'The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,' renowned futurist Martin Ford told Built In. 'I don't think that's going to continue.'

As AI robots become smarter and more dextrous, he added, the same tasks will require fewer humans. And while its true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to less educationally advanced members of the displaced workforce.

'If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?' Ford said. 'Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have? Because those are the things that, at least so far, computers are not very good at.'

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. 'From a purely shareholder-centric, single bottom-line perspective,' Havens said, 'there is no legal reason that he shouldn't fire all the humans.' Would he feel bad about it? Of course. But that's beside the point.

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a massive shakeup.

'Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure,' he said. 'It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys.'

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation', 26 researchers from 14 institutions (academic, civil and industry) enumerated a host of other dangers that could cause serious harm, or at minimum sow minor chaos, in less than five years.

Malicious use of AI, they wrote in their 100-page report, could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools and other venues. But that's just one country. A whole ecosphere of companies specialize in similar tech and sell it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-real-seeming social media personalities that are very difficult to differentiate from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes, created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's involved in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

From that point on, he noted, no one knows what's real and what's not. "So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence ... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work (the predictable, repetitive kind that's prone to AI takeover), research has shown that those who find themselves out in the cold are much less apt to get or seek retraining compared to those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans, and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself "some of the most dangerous people in the world, because we have this illusion of objectivity." The scientific field, she noted, "has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level."

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled "The Common Good in the Digital Age," Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, Ford included. But what if AI decides to launch nukes, or, say, biological weapons, sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."

(The U.S. military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world's computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012, the Obama administration's Department of Defense issued a directive regarding "Autonomy in Weapon Systems" that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how and whether to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street. You might want to take notice. Algorithmic trading could be responsible for our next major financial crisis in the markets.

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that can cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially when a computer places thousands of orders at blistering speed with the goal of selling a few seconds later for small profits. Thousands of these trades every second can add up to a pretty hefty chunk of change. The issue with HFT is that it doesn't take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in them.
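To make "pre-programmed instructions" concrete, here is a minimal, hypothetical sketch of the kind of rule such a system follows. The moving-average logic, thresholds and sample prices are invented for illustration only and do not describe any real trading system.

```python
# Toy illustration of a pre-programmed trading rule (not a real system):
# buy when the latest price dips a fixed percentage below a short moving
# average of recent prices, sell when it rises the same amount above it.
from collections import deque

def moving_average_rule(prices, window=5, threshold=0.01):
    """Yield (price, action) pairs for a naive mean-reversion rule."""
    recent = deque(maxlen=window)
    for price in prices:
        action = "hold"
        if len(recent) == window:
            avg = sum(recent) / window
            if price < avg * (1 - threshold):
                action = "buy"    # price dipped below the recent average
            elif price > avg * (1 + threshold):
                action = "sell"   # price rose above the recent average
        recent.append(price)      # update the window after deciding
        yield price, action

if __name__ == "__main__":
    sample = [100, 101, 99, 100, 102, 96, 97, 104, 103, 99]  # made-up prices
    for price, action in moving_average_rule(sample):
        print(f"price={price:>5}  action={action}")
```

A real HFT system differs mainly in scale and speed: the same kind of rule is evaluated thousands of times per second against live order-book data, which is exactly why a bad rule can compound losses so quickly.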

A sell-off of millions of shares in the airline market could potentially scare humans into selling off their shares in the hotel industry, which in turn could snowball into sell-offs of other travel-related companies, which could then affect logistics companies, food supply companies and so on.

Take the Flash Crash of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding towards normal levels just 36 minutes later. What caused the crash? A London-based trader named Navinder Singh Sarao triggered it, and HFT computers then exacerbated it. Sarao allegedly used a spoofing algorithm that placed orders for thousands of stock index futures contracts, betting that the market would fall. Instead of going through with the bet, Sarao planned to cancel the orders at the last second and buy the lower-priced stocks that were being sold off in response to his original bet. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.

Financial HFT algorithms aren't always correct, either. We view computers as the end-all, be-all when it comes to being correct, but AI is still only as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put it on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE market, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error led to Knight losing $460 million overnight and having to be acquired by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.
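For a sense of scale, here is a rough, illustrative calculation using only the figures quoted above; the derived rates are approximations, not reported numbers.

```python
# Rough arithmetic on the Knight Capital figures quoted above
# (4 million trades, 397 million shares, 45 minutes, $460 million lost).
trades = 4_000_000
shares = 397_000_000
minutes = 45
loss_usd = 460_000_000

seconds = minutes * 60
print(f"trades per second (avg): {trades / seconds:,.0f}")    # ~1,481
print(f"shares per trade (avg):  {shares / trades:,.1f}")     # ~99.3
print(f"loss per minute (avg):   ${loss_usd / minutes:,.0f}") # ~$10.2 million
```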

Many believe the only way to prevent the most malicious AI from wreaking havoc, or at least to temper it, is some sort of regulation.

"I am not normally an advocate of regulation and oversight (I think one should generally err on the side of minimizing those things), but this is a case where you have a very serious danger to the public," Musk said at SXSW.

"It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important."

Ford agrees, with a caveat: regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

After all, any country that lags in AI development is at a distinct disadvantage militarily, socially and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state of the art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for "a robust international conversation that can embrace what this technology is." If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, on which he expounds in his 2016 book, is this: how will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving people from an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social and computer scientists to work side by side with people from multiple backgrounds, a significant shift from today's computer science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction, and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical that it will actually come to pass. Government regulation, he said, isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones.

"At the moment," he said, "I don't think the onus is there for that to happen."

As Messina sees things, it's going to take some sort of catalyst to arrive at that point, and more specifically a catastrophic one, like war or economic collapse. Whether such an event would prove big enough to actually effect meaningful long-term change is probably open for debate.

For his part, Ford remains a long-run optimist, even if he is far from bullish on AI in the meantime.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."

Read the original:
7 Risks Of Artificial Intelligence You Should Know | Built In
