
Using Artificial Intelligence to Connect Vehicles and Traffic Infrastructure – Chattanooga Pulse

The University of Tennessee at Chattanooga, together with the University of Pittsburgh, the Georgia Institute of Technology, Oak Ridge National Laboratory, and the City of Chattanooga, has been awarded $1.89 million from the U.S. Department of Energy to create a new model for traffic intersections that reduces energy consumption.

UTC's Center for Urban Informatics and Progress (CUIP) will leverage its existing smart corridor to accommodate the new research.

"This project is a huge opportunity for us," said CUIP Director and principal investigator Mina Sartipi. "Collaborating with the City of Chattanooga and working with Georgia Tech, Pitt, and ORNL on a project that is future-oriented, novel, and full of potential is exciting. This work will contribute to the existing body of literature and lead the way for future research. Our existing infrastructure, the MLK Smart Corridor, will be the cornerstone for this work, as it gives us a precedent for applied research: research with real-world nuance."

In the DOE proposal, the research team noted that the U.S. transportation sector alone accounted for more than 69 percent of petroleum consumption and more than 37 percent of the country's CO2 emissions. An earlier 2012 National Traffic Signal Report Card found that inefficient traffic signals contribute to 295 million vehicle-hours of traffic delay, accounting for 5-10 percent of all traffic-related delays.

The project will leverage the capabilities of connected vehicles and infrastructure to optimize and manage traffic flow. The researchers note that while adaptive traffic control systems (ATCS) have been in use for a half century to improve mobility and traffic efficiency, they weren't designed to address fuel consumption and emissions.

Likewise, while automobile and vehicle standards have improved significantly, their potential for greater improvement is hampered by inefficient traffic systems that increase idling time and stop-and-go traffic. Finding a solution is paramount, since the National Transportation Operations Coalition graded the state of the nation's traffic signals as D+.

"Our vehicles and phones have combined to make driving safer, while nascent ITS has improved traffic congestion in some cities. The next step in their evolution is the merging of these systems through AI," noted Aleksandar Stevanovic, associate professor of civil and environmental engineering at Pitt's Swanson School of Engineering and director of the Pittsburgh Intelligent Transportation Systems (PITTS) Lab.

"Creation of such a system, especially for dense urban corridors and sprawling exurbs, can greatly improve energy and sustainability impacts. This is critical as our transportation portfolio will continue to have a heavy reliance on gasoline-powered vehicles for some time.

The goal of the 3+ year project is to develop a dynamic feedback Ecological ATCS (Eco-ATCS) which reduces fuel consumption and greenhouse gases while maintaining a highly operable and safe transportation environment. The integration of AI will allow additional infrastructure enhancements including emergency vehicle preemption, transit signal priority, and pedestrian safety. The ultimate goal is to reduce corridor-level fuel consumption by 20 percent.

"Chattanooga is a city focused on embracing technology and innovation to create a safer and more efficient environment," said Chattanooga Smart City Director Kevin Comstock. "Being supported and affirmed by the Department of Energy is an enormous vote of confidence in the direction we're heading."

Georgia Tech team member Michael Hunter echoes that sentiment. "Through this project we have the potential to develop a pilot deployment that may be replicated throughout the country, helping realize the vast potential of these technologies," he said.

The team consists of Mina Sartipi, Osama Osman, Dalei Wu, and Yu Liang from UTC; Michael Hunter from the Georgia Institute of Technology; Aleksandar Stevanovic from the University of Pittsburgh; Kevin Comstock from the City of Chattanooga; and Derek Deter and Adian Cook from ORNL.

The Center for Urban Informatics and Progress is a smart city research center at the University of Tennessee at Chattanooga. CUIP is committed to applied smart city research that betters the lives of citizens every day. For more on the work we're doing and our mission, visit http://www.utc.edu/cuip.



Artificial Intelligence for Medical Evacuation in Great-Power Conflict – War on the Rocks

It is 4:45 a.m. in southern Afghanistan on a hot September day. A roadside improvised explosive device has just gone off, followed by the call: "Medic!" Spc. Chazray Clark stepped right on the bomb, losing both of his feet and his left forearm. Clark's fellow soldiers immediately provided medical care, hoping he might survive. After all, the unit's forward operating base was only 1.5 miles away, and it had a trained medical evacuation (medevac) team waiting to respond to an event of this nature.

A 9-line medevac request was submitted just moments after the explosion occurred, and Clark's commanding officer, Lt. Col. Mike Katona, had been assured that a medevac helicopter was en route to the secured pickup location. Unfortunately, that was not the case; the medevac team was still awaiting orders 34 minutes after the call for help was transmitted.

Although the casualty collection point was secure, the policy in place at the time required an armed gunship to escort the medevac helicopter, but none were available. It wasn't until 5:24 a.m. that the medevac helicopter started to fly toward the pickup location, but it was too late. Clark arrived at Kandahar Air Field medical center at 5:49 a.m. and was pronounced dead just moments later.

No one knows if Clark would have survived his wounds if he had received advanced surgical care earlier, but most people would agree that his chances of survival would have been much higher. What went wrong? Why wasn't an armed escort available during this dire time? Are the current medevac policies outdated? If so, can artificial intelligence improve upon current practices?

With limited resources available, the U.S. military ought to carefully plan how medevac assets will be utilized prior to and during large-scale combat operations. How should resources be positioned now to maximize medevac effectiveness and efficiency? How can ground and air ambulances be dynamically repositioned throughout the course of an operation based on evolving, anticipated locations and intensities for medevac demand (i.e., casualties)? Moreover, how should those decisions be informed by operational restrictions and (natural and enemy-induced) risks to the use of ground and aerial routes as well as evacuation procedures at the casualty collection points? Finally, whenever a medevac request is received, which of the available assets should be dispatched, considering the anticipated future demands of a given region?

The military medevac enterprise is complex. As a result, any automation of location and dispatching decision-making requires accurate data, valid analytical techniques, and the deliberate integration and ethical use of both. Artificial intelligence and, more specifically, machine-learning techniques combined with traditional analytic methods from the field of operations research provide valuable tools to automate and optimize medevac location and dispatching procedures.

The U.S. military utilizes both ground and aerial assets to perform medevac missions. Rotary-wing air ambulances (i.e., HH-60M helicopters) are typically reserved for the most critically sick and/or wounded, for whom speed of evacuation and flexibility for routing directly to highly capable medical treatment facilities are essential to maximizing survivability. Ground ambulances cannot travel as far or as fast as air ambulances, but this limitation is offset by their greater proliferation throughout the force.

Machine Learning to Predict Medevac Demand

More than 4,500 U.S. military medevac requests were transmitted between 2001 and 2014 for casualties occurring in Afghanistan. The location, threat level, and severity of casualty events resulting in requests for medevac influence the demand for medevac assets. Indeed, it is likely that some regions may have higher demand than others, requiring more medevac assets when combat operations commence. A machine-learning model (e.g., neural networks, support vector regression, and/or random forest) can accurately predict demand for each combat region by considering relevant information, such as current mission plans, projected enemy locations, and previous casualty event data.
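To make the modeling step concrete, here is a minimal sketch of region-level demand prediction with a random forest, one of the model families named above. It uses scikit-learn, and every feature name and number in it is invented for illustration rather than drawn from the studies the authors cite.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features per region-day:
# [mission_tempo, projected_enemy_strength, past_casualty_rate]
X_train = rng.random((500, 3))
# Hypothetical target: medevac requests observed for that region-day
y_train = 10 * X_train[:, 0] + 5 * X_train[:, 1] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted medevac demand for two notional combat regions
X_new = np.array([[0.8, 0.9, 0.4], [0.2, 0.1, 0.1]])
print(model.predict(X_new))
```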

Effective machine-learning models require historical data that is representative of future events. Historical data for recent medevac operations can be obtained from significant activity reports from previous conflicts and the Medical Evacuation Proponency Division. For example, one study utilizes Operation Iraqi Freedom flight logs obtained from the Medical Evacuation Proponency Division to approximate the number of casualties at a given location to help identify the best allocation(s) of medical assets during steady-state combat operations. Open-source, unclassified data also exist (e.g., International Council on Security and Development, Defense Casualty Analysis System, and Data on Armed Conflict). Although historical data may not exist for every potential future operating environment, it can still be utilized to generalize casualty event characteristics. For example, one study models the spatial distribution of casualty cluster centers based on their proximity to main supply routes and/or rivers, where large populations are present. It utilizes Monte Carlo simulation to synthetically generate realistic data, which, in turn, can be leveraged by machine-learning practitioners to predict future demand.
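A sketch of the Monte Carlo idea described above: scatter synthetic casualty cluster centers around a notional supply route and draw cluster sizes at random. The route coordinates and distributions are assumptions for illustration, not the cited study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
# Notional main supply route, as waypoints in km
route_points = np.array([[0.0, 0.0], [5.0, 2.0], [10.0, 3.0]])

events = []
for _ in range(1000):
    # Cluster center: Gaussian scatter around a random route waypoint
    anchor = route_points[rng.integers(len(route_points))]
    center = anchor + rng.normal(0.0, 1.0, size=2)
    # Cluster size: at least one casualty per event
    n_casualties = 1 + rng.poisson(2)
    events.append((tuple(center), n_casualties))

print(events[:3])  # synthetic (location, casualty-count) pairs
```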

Demand prediction via a machine-learning model is essential, but it is not enough to optimize medevac procedures. For example, consider a scenario wherein the majority of demand is projected to occur in two combat regions located on opposite sides of the area of operations. If there are not enough medevac resources to provide a timely response for all anticipated medevac demands in both of those regions, where should medevac assets be positioned? Alternatively, consider a scenario wherein one region needs the majority of medevac support at the beginning of an operation, but the anticipated demand shifts to another region (or multiple regions) later. Should assets be positioned to respond to demand from the first region even if it makes it impossible to reposition assets to respond to future demand from the other regions in a timely manner? How do these decisions impact combat operations in the long run?

Optimization Methods to Locate, Dynamically Relocate, and Dispatch Medevac Assets

How do current decisions impact future decisions? The decisions implemented throughout a combat operation are interdependent and should be made in conjunction with each other. More specifically, to create a feasible, realistic plan, it is necessary to make the initial medevac asset positioning decisions while considering the likely decisions to dynamically reposition assets over the duration of an operation. Moreover, every decision should account for total anticipated demand over all combat regions to ensure the limited resources are managed appropriately.

How many possible asset location options are there for a decision-maker to consider? As an example, suppose there are 20 dedicated ground and aerial medevac assets that need to be positioned across six different forward operating bases. Moreover, suppose decisions regarding the repositioning of these assets occur every day for a 14-day combat operation. For any day of the two-week combat operation, any of the 20 assets can be repositioned to one of six operating bases. Without taking into consideration distances, availability, demand constraints, or multiple asset types, the approximate number of options to consider is over 10,000! It is practically impossible for an individual (or even a team of people) to identify the optimal positioning policy without the benefit of insight provided by quantitative analyses.
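As a rough illustration of that combinatorial growth, the count below assumes assets of the same type are interchangeable, so a single day's options are the ways to distribute 20 assets across six bases. This is one plausible reading of the figure above, not the authors' actual calculation.

```python
import math

# Single-day positioning options, assuming the 20 assets are
# interchangeable: ways to distribute 20 assets across 6 bases
# (a stars-and-bars count), ignoring distances, availability,
# demand constraints, and asset types.
single_day = math.comb(20 + 6 - 1, 6 - 1)
print(single_day)  # 53,130 -- already over 10,000

# Treating each of the 14 days as a fresh repositioning decision
# multiplies the plan space far beyond human enumeration.
full_horizon = single_day ** 14
print(f"~10^{len(str(full_horizon)) - 1} candidate 14-day plans")
```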

Whereas a machine-learning model can predict when and where demand is likely to occur, it does not inform decision-makers where to position limited resources. To overcome this, operations research techniques, more specifically the development and analysis of optimization models, can efficiently identify an optimal policy for dynamic asset location strategies for the area of operations over the entire planning horizon. The objectives of an optimization model define the quantitatively measured goal that decision-makers seek to maximize and/or minimize. For example, decision-makers may seek to maximize demand coverage, minimize response time, minimize the cost of repositioning assets, and/or maximize the safety and security of medevac personnel. The decisions correspond to when, where, and how many of each type of asset are to be positioned across the forward operating bases for the planned combat operation, as well as how assets are dispatched in response to medevac requests. It is necessary to have information about unit capabilities and dispositions to accurately inform an optimization model. This information includes the number, type, and initial positioning of medevac assets as well as the projected demand locations, threat levels, and injury severity levels. An optimization model also considers operational constraints to ensure a feasible solution is generated. These constraints include travel distances and time, fuel capacity, forward operating base capacity, and political considerations.
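The toy model below sketches the coverage-maximizing flavor of such an optimization model using the PuLP library. The bases, regions, demand figures, and coverage pairs are hypothetical; a real model would add the response-time, relocation-cost, and safety objectives (with trade-off weights, as discussed below) plus the operational constraints listed above.

```python
import pulp

bases = ["B1", "B2", "B3"]
regions = ["R1", "R2", "R3", "R4"]
demand = {"R1": 10, "R2": 4, "R3": 7, "R4": 2}   # predicted medevac requests
covers = {"B1": ["R1", "R2"], "B2": ["R2", "R3"], "B3": ["R3", "R4"]}
assets_available = 2

prob = pulp.LpProblem("medevac_positioning", pulp.LpMaximize)
x = pulp.LpVariable.dicts("asset_at", bases, cat="Binary")    # asset at base b?
y = pulp.LpVariable.dicts("covered", regions, cat="Binary")   # region r covered?

# Objective: maximize the predicted demand that falls within coverage
prob += pulp.lpSum(demand[r] * y[r] for r in regions)
# Only a limited number of assets can be positioned
prob += pulp.lpSum(x[b] for b in bases) <= assets_available
# A region counts as covered only if some selected base covers it
for r in regions:
    prob += y[r] <= pulp.lpSum(x[b] for b in bases if r in covers[b])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: int(x[b].value()) for b in bases})  # here: assets at B1 and B3
```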

Medevac assets may need to be dynamically repositioned (i.e., relocated) across different staging facilities, especially as disposition and intensity of demand changes, despite the long-term and strategic nature of combat operations. For example, it may be necessary to reposition assets from forward operating bases near combat regions with lower projected demand to bases near regions with higher projected demand. Moreover, it is important to consider projected threat and severity levels when determining which type of assets to position. For example, it may be beneficial to position armed escorts closer to combat regions with higher projected threat levels. Similarly, air ambulances should be positioned closer to combat regions with higher projected severity levels (i.e., life-threatening events). Inappropriate positioning of assets may result in delayed response times, increased risks, and decreased casualty survivability rates. One way to determine the location of medevac assets is to develop an optimization model that simultaneously considers the following objectives: maximize demand coverage, minimize response time, and minimize the number of relocations subject to force projection, logistical, and resource constraints. Trade-off analysis can be performed by assigning different weights (i.e., importance levels) to each objective considered. Given an optimal layout of medevac assets, another important decision that should be considered is how air ambulances will be dispatched in response to requests for service.

The U.S. military currently utilizes a closest-available dispatching policy to respond to incoming requests for service, which, as the name suggests, tasks the closest-available medevac unit to rapidly evacuate battlefield casualties from point of injury to a nearby trauma facility. In small-scale and/or low-intensity conflicts, this policy may be optimal. Unfortunately, this is not always the case, especially in large-scale, high-intensity conflicts. For example, suppose a non-life-threatening medevac request is submitted and only one air ambulance is available. Moreover, assume high-intensity operations are ongoing and life-threatening medevac requests are expected to occur in the near future. Is it better to task the air ambulance to service the current, non-life-threatening request, or should the air ambulance be reserved for a life-threatening request that is both expected and likely to occur in the near future?

Many researchers have explored scenarios in which the closest-available dispatching policy can be greatly improved upon by leveraging operations research techniques such as Markov decision processes and approximate dynamic programming. Dispatching decision-makers (i.e., dispatching authorities) should take into account a large number of uncertainties when deciding which medevac assets to utilize in response to requests for service. Utilizing approximate dynamic programming, military analysts can model large-scale, realistic scenarios and develop high-quality dispatching policies that take into account inherent uncertainties and important system characteristics. For example, one study shows that dispatching policies based on approximate dynamic programming can improve upon the closest-available dispatching policy by over 30 percent with regard to a lifesaving performance metric based on response time for a notional scenario in Syria.
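As a minimal sketch of why reserving an asset can beat the closest-available rule, the one-step lookahead below compares dispatching the last ambulance now against holding it for an anticipated life-threatening request. The rewards and probability are invented; actual approximate-dynamic-programming policies estimate such future values across many system states.

```python
def dispatch_or_reserve(reward_now, p_urgent_soon, reward_urgent, reward_backup):
    # Expected value of sending the only ambulance to the current
    # non-life-threatening request: we earn reward_now, but any imminent
    # urgent request must fall back to a slower backup response.
    send = reward_now + p_urgent_soon * reward_backup
    # Expected value of reserving the ambulance for the anticipated
    # life-threatening request.
    hold = p_urgent_soon * reward_urgent
    return "dispatch" if send >= hold else "reserve"

# With a high chance of an imminent urgent request, the myopic
# closest-available policy (always dispatch) is beaten by reserving.
print(dispatch_or_reserve(reward_now=1.0, p_urgent_soon=0.8,
                          reward_urgent=10.0, reward_backup=2.0))  # -> reserve
```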

Ethical Application Requires a Decision-Maker in the Loop

Optimization models may offer valuable insights and actionable policies, but what should decision-makers do when unexpected events occur (e.g., air ambulances become non-mission capable) or new information is obtained (e.g., an unmanned aerial vehicle captures enemy activity in a new location)? It is not enough to create and implement optimization models. Rather, it is necessary to create and deliver a readily understood dashboard that presents information and recommended decisions, the latter of which are informed by both machine learning and operations research techniques. To yield greater value, such a dashboard should allow its users (i.e., decision-makers) to conduct what-if analysis to test, visualize, and understand the results and consequences of different policies for different scenarios. Such a dashboard is not a be-all and end-all tool. Rather, it is a means for humans to effectively leverage information and analyses to make better decisions.

The future of decision-making involves both artificial intelligence and human judgment. Whereas humans lack the power and speed that artificial intelligence can provide for data processing tasks, artificial intelligence lacks the emotional intelligence needed when making tough and ethical decisions. For example, a machine-learning model may be able to diagnose complex combat operations and recommend decisions to improve medevac system performance, but the judgment of a human being is necessary to address intangible criteria that may elude quantification and input as data.

Although the U.S. military medevac system has been highly effective and efficient in recent operations in Afghanistan, Iraq, and Syria, future operating environments may be vastly different from where the United States has been fighting over the past 20 years. Artificial intelligence and operations research techniques can combine to create effective decision-making tools that, in conjunction with human judgment, improve the medevac enterprise for large-scale combat operations, ultimately saving more lives.

The Way Forward

The Air Force Institute of Technology is currently examining a variety of medevac scenarios with different problem features to determine both the viability and benefit of incorporating the aforementioned artificial intelligence and operations research techniques within active medevac operations. Once a viable approach is developed, the next step is to obtain buy-in from senior military leaders. With a parallel, macroscopic-level focus, the Joint Artificial Intelligence Center, the Department of Defense's Artificial Intelligence Center of Excellence, is currently seeking new artificial intelligence initiatives to demonstrate value and spur momentum to accelerate the adoption of artificial intelligence and create a force fit for this era.

Capt. Phillip R. Jenkins, PhD, is an assistant professor of operations research at the Air Force Institute of Technology. His academic research involves problems relating to military defense, such as the location, allocation, and dispatch of medical evacuation assets in a deployed environment. He is an active-duty Air Force officer with nearly eight years of experience as an operations research analyst.

Brian J. Lunday, PhD, is a professor of operations research at the Air Force Institute of Technology who researches optimal resource location and allocation modeling. He served for 24 years as an active-duty Army officer, both as an operations research analyst and a combat engineer.

Matthew J. Robbins, PhD, is an associate professor of operations research at the Air Force Institute of Technology. His academic research involves the development and application of computational stochastic optimization methods for defense-oriented problems. Robbins served for 20 years as an active-duty Air Force officer, holding a variety of intelligence and operations research analyst positions.

The views expressed in this article are those of the authors and do not reflect the official policy or position of the U.S. Air Force, the Department of Defense, or the U.S. government.

Image: Sgt. 1st Class Thomas Wheeler


Qualtrics Announces Delighted AI, a Machine Learning Engine to Automate Every Step of the Customer Feedback Process – PRNewswire

SALT LAKE CITY, SEATTLE, and PALO ALTO, Calif., Sept. 23, 2020 /PRNewswire/ -- Qualtrics, the leader in customer experience and creator of the experience management category, today announced Delighted AI, an artificial intelligence and machine learning engine built directly into Delighted's customer experience platform. Delighted, a Qualtrics company, developed its AI technology to intelligently automate every aspect of the customer feedback process, from scheduling to analysis and reporting, so that companies can focus on closing feedback loops faster than ever. Delighted AI is complementary to Qualtrics' existing Text iQ enterprise technology for CustomerXM, optimized for Delighted customers.

Today, the most successful customer experience programs are no longer measurement or metrics-based. Over the past few months, Net Promoter Scores have significantly declined in response to COVID-19, exposing customer experience gaps that companies have failed to address or identify. The companies who have emerged as customer experience leaders in the crisis have continuously listened to their customers, and more importantly, responded quickly to their preferences and expectations.

Delighted AI was created based on semantics and themes in the millions of customer feedback responses that Delighted and its customers have analyzed over several years to drive customer experience success.

"Delighted AI helped the right teams at our company understand customer feedback with more precision than ever before, which has been critical in the middle of a pandemic where we need to adapt and respond even more quickly to our customers' needs and expectations," said Roxana Turcanu, Growth Director for Adore Me, a New York-based e-commerce company. "We just recently launched a new try-at-home brand called Outlines, and we were able to do so with the help of Delighted AI by capturing and applying feedback early - this enabled us to pivot, at a rate we've never been able to do, towards what our customers actually wanted from our brand."

Benefits of Delighted AI include:

"Customer experience programs are rapidly evolving as companies have realized that relying on traditional metrics alone does not determine customer success. Instead, the customer experience leaders are winning based on gathering in-the-moment feedback that is immediately actionable and building a culture of continuous listening," said Caleb Elston, co-founder of Delighted. "We created Delighted AI to empower companies to spend less time configuring, implementing, and analyzing so they can focus on acting on insights faster than any other technology or human could before."

Acquired by Qualtrics in 2018, Delighted is one of the fastest and easiest ways to take action on customer feedback, which enables innovative brands and organizations of any size to quickly implement a customer experience program across every channel.


About Qualtrics

Qualtrics, the leader in customer experience and creator of the Experience Management (XM) category, is changing the way organizations manage and improve the four core experiences of business: customer, employee, product, and brand. Over 11,000 organizations around the world are using Qualtrics to listen, understand, and take action on experience data (X-data): the beliefs, emotions, and intentions that tell you why things are happening, and what to do about it. The Qualtrics XM Platform is a system of action that helps businesses attract customers who stay longer and buy more, engage employees who build a positive culture, develop breakthrough products people love, and build a brand people are passionate about. To learn more, please visit qualtrics.com.

Contact: [emailprotected]

SOURCE Qualtrics

http://www.qualtrics.com


Artificial Intelligence (AI) in Security Market 2020 Latest Trending Technology, Growing Demand, Application, Types, Services, Regional Analysis and…

The Artificial Intelligence (AI) in Security Market report includes a survey which explains the value chain structure, industrial outlook, regional analysis, applications, market size, share, and forecast. The Coronavirus (COVID-19) outbreak is influencing the growth of the market globally; the rapidly changing market scenario and the initial and future assessment of its impact are covered in the research report. The report provides an overall analysis of the market by type, application, and region for the forecast period from 2020 to 2026. It also covers investment opportunities and probable threats in the market based on an intelligent analysis. The report focuses on Artificial Intelligence (AI) in Security Market trends, future forecasts, growth opportunities, key end-user industries, and market-leading players. The objectives of the study are to present the key developments of the market across the globe. The report presents a 360-degree overview and SWOT analysis of the competitive landscape of the industries.

Get sample copy of Artificial Intelligence (AI) in Security Market report @ https://www.adroitmarketresearch.com/contacts/request-sample/1317

The Artificial Intelligence (AI) in Security Market 2020 Industry Research Report is a professional and in-depth study of the current state of the global Artificial Intelligence (AI) in Security industry. Moreover, the research report categorizes the global Artificial Intelligence (AI) in Security market by top players/brands, region, type, and end user. The report also tracks the latest market dynamics, such as driving factors, restraining factors, and industry news like mergers, acquisitions, and investments. It provides market size (value and volume) and Artificial Intelligence (AI) in Security market share and growth rate by type and application, and combines both qualitative and quantitative methods to make micro and macro forecasts for different regions and countries.

Top Leading Key Players are:

Amazon.Com, Inc., Fortinet, Google (Alphabet Inc.), IBM Corporation, Intel Corporation, Micron Technology Inc., Nvidia Corporation, Palo Alto Networks Inc., Samsung Electronics Co., Ltd., Symantec, Acalvio Technologies, Inc., Cylance Inc., Darktrace, Securonix, Inc., Sift Science, Sparkcognition Inc., Threatmetrix Inc., Xilinx Inc.

Browse the complete report @ https://www.adroitmarketresearch.com/industry-reports/artificial-intelligence-ai-in-security-market

A thorough market study and investigation of trends in consumer and supply chain dynamics covered in this report helps businesses draw up strategies for sales, marketing, and promotion. Besides, the market research performed in this Artificial Intelligence (AI) in Security Market report sheds light on the challenges, market structures, opportunities, driving forces, and competitive landscape for the business. It helps businesses develop a keen sense of evolving industry movements ahead of competitors. To gain competitive advantage in this swiftly transforming marketplace, opting for such a market research report is highly suggested, as it offers many benefits for a thriving business.

The global Artificial Intelligence (AI) in Security market is segmented by type, application, and region. Based on type, the market has been segmented into:

by Component (Platform, Services), Application (Identity and Access Management, Unified Threat Management, Antivirus/Antimalware, Risk and Compliance Management, Fraud Detection, and others), Industry Vertical (BFSI, retail, IT & Telecommunication, Automotive & Transportation, Manufacturing, Government & Defense, and others)

Based on application, the market has been segmented into:

By Application (Identity and Access Management, Unified Threat Management, Antivirus/Antimalware, Risk and Compliance Management, Fraud Detection, and others), Industry Vertical (BFSI, retail, IT & Telecommunication, Automotive & Transportation, Manufacturing, Government & Defense, and others)

This study also contains company profiling, product picture and specifications, sales, market share and contact information of various international, regional, and local vendors of Global Artificial Intelligence (AI) in Security Market. The market competition is constantly growing higher with the rise in technological innovation and M&A activities in the industry.

The report provides an in-depth analysis of the key developments and innovations of the market, such as research and development advancements, product launches, mergers & acquisitions, joint ventures, partnerships, government deals, and collaborations. The report provides a comprehensive overview of the regional growth of each market player. Additionally, the report provides details about the revenue estimation, financial standings, capacity, import/export, supply and demand ratio, production and consumption trends, CAGR, market share, market growth dynamics, and market segmentation analysis.

For Any Query on the Artificial Intelligence (AI) in Security Market: https://www.adroitmarketresearch.com/contacts/enquiry-before-buying/1317

About Us:

Contact Us:

Ryan Johnson
Account Manager Global
3131 McKinney Ave Ste 600, Dallas, TX 75204, U.S.A.
Phone No.: USA: +1 972-362-8199 / +91 9665341414


Scientists around the world join forces to combat anti-Semitism with artificial intelligence – New York Post

BERLIN An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

The project, "Decoding Anti-Semitism," includes discourse analysts, computational linguists and historians who will develop a "highly complex, AI-driven approach to identifying online anti-Semitism," the Alfred Landecker Foundation, which supports the project, said in a statement Monday.

"In order to prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of anti-Semitism, also taking into account the implicit forms that might become more explicit over time," said Matthias Becker, a linguist and project leader from the Technical University of Berlin.

The team also includes researchers from King's College London and other scientific institutions in Europe and Israel.

Computers will help run through vast amounts of data and images that humans wouldn't be able to assess because of their sheer quantity, the foundation said.

Studies have also shown that the majority of anti-Semitic defamation is expressed in implicit ways, for example through the use of codes ("juice" instead of "Jews") and allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images, the statement said.

As implicit anti-Semitism is harder to detect, the combination of qualitative and AI-driven approaches will allow for a more comprehensive search, the scientists think.

The problem of anti-Semitism online has increased, as seen by the rise in conspiracy myths accusing Jews of creating and spreading COVID-19, groups tracking anti-Semitism on the internet have found.

The focus of the current project is initially on Germany, France and the U.K., but will later be expanded to cover other countries and languages.

The Alfred Landecker Foundation, which was founded in 2019 in response to rising trends of populism, nationalism and hatred toward minorities, is supporting the project with 3 million euros ($3.5 million), the German news agency dpa reported.


UK Information Commissioner’s Office publishes guidance on artificial intelligence and data protection – Lexology

On 30 July, the UK's Information Commissioner's Office ("ICO") published new guidance on artificial intelligence ("AI") and data protection. The ICO is also running a series of webinars to help organisations and businesses to comply with their obligations under data protection law when using AI systems to process personal data. This legal update summarises the main points from the guidance and the AI Accountability and Governance webinar hosted by the ICO on 22 September 2020.

As AI increasingly becomes a part of our everyday lives, businesses worldwide have to navigate the expanding landscape of legal and regulatory obligations associated with the use of AI systems. The ICO guidance recognises that using AI can have undisputable benefits, but that it can also pose risks to the rights and freedoms of individuals. The guidance offers a framework for how businesses can assess and mitigate these risks from a data protection perspective. It also stresses the value of considering data protection at an early stage of AI development, emphasising that mitigation of AI-associated risks should come at the design stage of the AI system.

Although the new guidance is not a statutory code of practice, it represents what the ICO deems to be best practice for data protection-compliant AI solutions and sheds light on how the ICO interprets data protection obligations as they apply to AI. However, the ICO confirmed that businesses might be able use other ways to achieve compliance. The guidance is the result of the ICO consultation on the AI auditing framework which was open for public comments earlier in 2020. It is designed to complement existing AI resources published by the ICO, including the recent Explaining decisions made with AI guidance produced in collaboration with The Alan Turing Institute (for further information on this guidance, please see our alert here) and the Big Data and AI report.

Who is the guidance aimed at and how is the guidance structured?

The guidance can be useful for (i) those undertaking compliance roles within organisations, such as data protection officers, risk managers, general counsel and senior management, and (ii) technology specialists, namely AI developers, data scientists, software developers / engineers and cybersecurity / IT risk managers.

The guidance is split into four sections: (1) the accountability and governance implications of AI; (2) ensuring lawfulness, fairness and transparency in AI systems; (3) security assessment and data minimisation in AI systems; and (4) individual rights in AI systems.

Although the ICO notes that the guidance is written so that each section is accessible for both compliance and technology specialists, the ICO states that sections 1 and 4 are primarily aimed at those in compliance roles, with sections 2 and 3 containing the more technical material.

1. ACCOUNTABILITY AND GOVERNANCE IMPLICATIONS OF AI

The first section of the guidance focuses on the accountability principle, which is one of seven data processing principles in the European General Data Protection Regulation ("GDPR"). The accountability principle requires organisations to be able to demonstrate compliance with data protection laws. Though the ICO acknowledges the ever-increasing technical complexity of AI systems, the guidance highlights that the onus is on organisations to ensure their governance and risk capabilities are proportionate to the organisation's use of AI systems.

The ICO is clear in its message that organisations should not "underestimate the initial and ongoing level of investment and effort that is required" when it comes to demonstrating accountability for use of AI systems when processing personal data. The guidance indicates that senior management should understand and effectively address the risks posed by AI systems, such as through ensuring that appropriate internal structures exist, from policies to personnel, to enable businesses to effectively identify, manage and mitigate those risks.

With respect to AI-specific implications of accountability, the guidance focuses on three areas:

(a) Businesses processing personal data through AI systems should undertake DPIAs:

The ICO has made it clear that a data protection impact assessment ("DPIA") will be required in the vast majority of cases in which an organisation uses an AI system to process personal data, because AI systems may involve processing which is likely to result in a high risk to individuals' rights and freedoms.

The ICO stresses that DPIAs should not be considered just a box-ticking exercise. A DPIA allows organisations to demonstrate that they are accountable when making decisions with respect to designing or acquiring AI systems. The ICO suggested that organisations might consider having two versions of the DPIA: (i) a detailed internal one which is used by the organisation to help it identify and minimise data protection risk of the project and (ii) an external-facing one which can be shared with individuals whose data is processed by the AI system to help the individuals understand how the AI is making decisions about them.

The DPIA should be considered a living document which gets updated as the AI system evolves (which can be particularly relevant for deep learning AI systems). The guidance notes that where an organisation decides that it does not need to undertake a DPIA with respect to any processing related to an AI system, the organisation will still need to document how it reached such a conclusion.

The guidance provides helpful commentary on a number of considerations which businesses may need to grapple with when conducting a DPIA for AI systems.

The ICO also refers businesses to its general guidance on DPIAs and how to complete them outside the context of AI.

(b) Businesses should consider the data protection roles carried out by different parties in relation to AI systems and put in place appropriate documentation:

The ICO acknowledges that assigning controller / processor roles in respect to AI systems can be inherently complex, given the number of actors involved in the subsequent processing of personal data via the AI system. In this respect, the ICO draws attention to its work on data protection and cloud computing, with revisions to the ICO's Cloud Computing Guidance expected in 2021.

The ICO outlines a number of examples in which organisations take the role of controller / processor with respect to AI systems. The ICO is planning to consult on each of these controller and processor scenarios in the Cloud Computing Guidance review, so organisations can expect further clarity in 2021.

(c) Businesses should put in place documentation for accountability purposes to identify any "trade-offs" when assessing AI-related risks:

The ICO notes that there are a number of "trade-offs" to weigh when assessing different AI-related risks. Some common examples of such trade-offs are included in the guidance itself, such as where an organisation wishes to train an AI system capable of producing accurate statistical output on one hand, versus the data minimisation concerns associated with the quantity of personal data required to train such an AI system on the other.

The guidance provides advice to businesses seeking to manage the risk associated with such trade-offs. The ICO recommends putting in place effective and accurate documentation processes for accountability purposes, and also recommends that businesses consider specific instances such as: (i) where an organisation acquires an AI solution, whether the associated trade-offs formed part of the organisation's due diligence processes, (ii) social acceptability concerns associated with certain trade-offs, and (iii) whether mathematical approaches can mitigate trade-off-associated privacy risk.

2. ENSURING LAWFULNESS, FAIRNESS AND TRANSPARENCY IN AI SYSTEMS

The second section of the guidance focuses on ensuring lawfulness, fairness and transparency in AI systems and covers three main areas:

(a) Businesses should identify the purpose and an appropriate lawful basis for each processing operation in an AI system:

The guidance makes it clear that organisations must identify the purpose and an appropriate lawful basis for each processing operation in an AI system and specify these in their privacy notice.

It adds that it might be more appropriate to choose different lawful bases for the development and deployment phases of an AI system. For example, while performance of a contract might be an appropriate ground for processing personal data to deploy an AI system (e.g. to provide a quote to a customer before entering into a contract), it is unlikely that relying on this basis would be appropriate to develop an AI system.

The guidance makes it clear that legitimate interests provide the most flexible lawful basis for processing. However, if businesses rely on it, they are taking on an additional responsibility for considering and protecting people's rights and interests and must be able to demonstrate the necessity and proportionality of the processing through a legitimate interests assessment.

The guidance mentions that consent may be an appropriate lawful basis but individuals must have a genuine choice and be able to withdraw the consent as easily as they give it.

It might also be possible to rely on legal obligation as a lawful basis for auditing and testing the AI system if businesses are able to identify the specific legal obligation they are subject to (e.g. under the Equality Act 2010). However, it is unlikely to be appropriate for other uses of that data.

If the AI system processes special category or criminal convictions data, then the organisation will also need to ensure compliance with additional requirements in the GDPR and the Data Protection Act 2018.

(b) Businesses should assess the effectiveness of the AI system in making statistically accurate predictions about individuals:

The guidance notes that organisations should assess the merits of using a particular AI system in light of its effectiveness in making statistically accurate, and therefore valuable, predictions. In particular, organisations should monitor the system's precision and sensitivity. Organisations should also prioritise avoiding certain kinds of errors based on the severity and nature of the particular risk.

Businesses should agree regular updates (retraining of the AI system) and reviews of statistical accuracy to guard against changing data, for example, if the data originally used to train the AI system is no longer reflective of the current users of the AI systems.

(c) Businesses should address the risks of bias and discrimination in using an AI system:

AI systems may learn from data which may be imbalanced (e.g. because the proportion of different genders in the training data is different than in the population using the AI system) and/or reflect past discrimination (e.g. if, in the past, male candidates were invited more often to job interviews), which could lead to outputs that have a discriminatory effect on individuals. The guidance makes it clear that organisations' obligations relating to discrimination under data protection law are separate from, and additional to, their obligations under the Equality Act 2010.

The guidance mentions various approaches developed by computer scientists studying algorithmic fairness which aim to mitigate AI-driven discrimination. For example, in cases of imbalanced training data, it may be possible to balance it out by adding or removing data about under/over-represented subsets of the population. In cases where the training data reflects past discrimination, the data may be manually modified, the learning process could be adapted to reflect this, or the model can be modified after training. However, the guidance warns that in some cases, simply retraining the AI model with a more diverse training set may not be sufficient to mitigate its discriminatory impact and additional steps might need to be taken.
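As a minimal sketch of the first mitigation mentioned above, the snippet below oversamples an under-represented group so the training data is balanced. The column names and data are invented for illustration; this is one simple technique, not the ICO's prescribed method.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training set with an under-represented group ("f")
df = pd.DataFrame({
    "feature": range(10),
    "gender": ["m"] * 8 + ["f"] * 2,
})
minority = df[df["gender"] == "f"]
majority = df[df["gender"] == "m"]

# Oversample the minority group (with replacement) to match the majority's size
minority_up = resample(minority, replace=True, n_samples=len(majority),
                       random_state=0)
balanced = pd.concat([majority, minority_up])
print(balanced["gender"].value_counts())  # now 8 of each
```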

The guidance recommends that businesses put in place policies and good practices to address risks related to bias and discrimination and undertake robust testing of the AI system on an ongoing basis against selected key performance metrics.

3. SECURITY ASSESSMENT AND DATA MINIMISATION IN AI SYSTEMS

The third section of the guidance is aimed at technical specialists and covers two main issues:

(a) Businesses should assess the security risks AI introduces and take steps to manage the risks of privacy attacks on AI systems:

AI systems introduce new kinds of complexity not found in more traditional IT systems. AI systems might also rely heavily on third party code and are often integrated with several other existing IT components. This complexity might make it more difficult to identify and manage security risks. As a result, businesses should ensure that they actively monitor and take into account the state-of-the-art security practices when using personal data in an AI context. Businesses should use these practices to assess AI systems for security risks and ensure that their staff have appropriate skills and knowledge to address these security risks. Businesses should also ensure that their procurement process includes sufficient information sharing between the parties to perform these assessments.

The guidance warns against two kinds of privacy attacks which allow the attacker to infer the personal data of the individuals used to train the AI system: "model inversion" attacks and "membership inference" attacks.

The guidance then suggests some practical technical steps that businesses can take to manage the risks of such privacy attacks.

The guidance also warns against novel risks, such as adversarial examples which allow attackers to feed modified inputs into an AI model so that they are misclassified by the AI system. The ICO notes that in some cases this could lead to a risk to the rights and freedoms of individuals (e.g. if a facial recognition system is tricked into misclassifying an individual as someone else). This would raise issues not only under data protection laws but possibly also under the Network and Information Systems (NIS) Directive.

(b) Business should take steps to minimise personal data when using AI systems and adopt appropriate privacy-enhancing methods:

AI systems generally require large amounts of data, but the GDPR data minimisation principle requires businesses to identify the minimum amount of personal data they need to fulfil their purposes. This can create some tension, but the guidance suggests steps businesses can take to ensure that the personal data used by the AI system is "adequate, relevant and limited".

The guidance recommends that individuals accountable for the risk management and compliance of AI systems are familiar with techniques such as: perturbation (i.e. adding 'noise' to data), using synthetic data, adopting federated learning, using less "human readable" formats, making inferences locally rather than on a central server, using privacy-preserving query approaches, and considering anonymisation and pseudonymisation of the personal data. The guidance goes into some detail for each of these techniques and explains when they might be appropriate.
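As an illustration of the first technique in that list, the snippet below perturbs numeric personal data with Laplace noise in the spirit of differential privacy. The data and the noise scale are invented; choosing the scale properly depends on the data's sensitivity and the privacy budget.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical personal data to be used in training
ages = np.array([34, 27, 58, 41, 19], dtype=float)

# Perturbation: add calibrated Laplace noise before the values leave
# the controlled environment; the scale here is purely illustrative.
noisy_ages = ages + rng.laplace(loc=0.0, scale=2.0, size=ages.shape)
print(noisy_ages)
```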

Importantly, ensuring security and data minimisation in AI systems is not a static process. The ICO suggests that compliance with data protection obligations requires ongoing monitoring of trends and developments in this area and being familiar with and adopting the latest security and privacy-enhancing techniques for AI systems. As a result, any contractual documentation that businesses put in place with service providers should take these privacy concerns into account.

4. INDIVIDUAL RIGHTS IN AI SYSTEMS

The final section of the guidance is aimed at compliance specialists and covers two main areas:

(a) Businesses must comply with individual rights requests in relation to personal data in all stages of the AI lifecycle, including training data, deployment data and data in the model itself:

Under the GDPR, individuals have a number of rights relating to their personal data. The guidance states that these rights apply wherever personal data is used at any of the various stages of the AI lifecycle from training the AI model to deployment.

The guidance is clear that even if the personal data is converted into a form that makes the data potentially much harder to link to a particular individual, this is not necessarily considered sufficient to take the data out of scope of the data protection law because the bar for anonymisation of personal data under the GDPR is high.

If it is possible for an organisation to identify an individual in the data, directly or indirectly (e.g. by combining it with other data held by the organisation or other data provided by the individual), the organisation must respond to requests from individuals to exercise their rights under the GDPR (assuming that the organisation has taken reasonable measures to verify their identity and no other exceptions apply). The guidance recognises that the use of personal data with AI may sometimes make it harder to fulfil individual rights, but warns that just because it may be harder to fulfil these GDPR obligations in the context of AI, requests should not be regarded as manifestly unfounded or excessive. The guidance also provides further detail about how businesses should comply with specific individual rights requests in the context of AI.

(b) Businesses should consider the requirements necessary to support a meaningful human review of any decisions made by, or with the support of, AI using personal data:

There are specific provisions in the GDPR (particularly Article 22 GDPR) covering individuals' rights where processing involves solely automated individual decision-making, including profiling, with legal or similarly significant effects. Businesses that use such decision-making must tell individuals whose data they are processing that they are doing so for automated decision-making and give them "meaningful information about the logic involved, as well as the significance and the envisaged consequences" of the processing. The ICO and the European Data Protection Board have both previously published detailed guidance on the obligations concerning automated individual decision-making which can be of further assistance.

The GDPR requires businesses to implement suitable safeguards, such as the right to obtain human intervention, express one's point of view, contest the decision or obtain an explanation about the logic of such a decision. The guidance mentions two particular reasons why AI decisions might be overturned: (i) if the individual is an outlier and their circumstances are substantially different from those considered in the training data, and (ii) if the assumptions in the AI model can be challenged, e.g. because of specific design choices. Therefore, businesses should consider the requirements necessary to support a meaningful human review of any solely automated decision-making process (including the interpretability requirements, training of staff and giving them appropriate authority). The guidance from the ICO and The Alan Turing Institute on Explaining decisions made with AI considers this issue in further detail (for more information on that guidance, please see our alert here).

In contrast, decisions that are not fully automated but for which the AI system provides support to a human decision-maker do not fall within the scope of Article 22 GDPR. However, the guidance is clear that a decision does not fall outside of the scope of Article 22 just because a human has "rubber-stamped" it and the human decision-maker must have a meaningful role in the decision-making process to take the decision-support tool outside the scope of Article 22.

The guidance also warns that to have a meaningful human oversight also means that businesses need to address the risks of automation bias by human reviewers (i.e. relying on the output generated by the decision-support system and not using their own judgment) and the risks of lack of interpretability (i.e. outputs from AI systems that are difficult for a human reviewer to interpret / understand, for example, in deep-learning AI models). The guidance provides some suggestions how such risks might be addressed, including by considering these risks when designing / procuring the AI systems, by training staff and by effectively monitoring the AI system and the human reviewers.

Conclusion

This guidance from the ICO is another welcome step for the rising number of businesses that use AI systems in their day-to-day operations. It also provides more clarity on how businesses should interpret their data protection obligations as they apply to AI. This is especially important because this area of compliance is attracting the focus of different regulators.

The ICO mentions "monitoring intrusive and disruptive technology" as one of its three focus areas and AI as one of its priorities for its regulatory approach during the COVID-19 pandemic and beyond. As a result, the ICO is also running a free webinar series in autumn 2020 on various topics covered in the guidance to help businesses achieve data protection compliance when using AI systems. The ICO stated on the AI Accountability and Governance webinar on 22 September 2020 that it is currently developing its AI auditing capabilities so it can use its powers to conduct audits of AI systems in the future. However, the ICO staff on the webinar confirmed the ICO would take into account the effect of the COVID-19 pandemic before conducting any AI audits.

Other regulators have also been interested in the implications of AI. For example, the Financial Conduct Authority is working with The Alan Turing Institute on AI transparency in financial markets. Businesses should therefore follow the guidance from their respective regulators and put in place a strategy for addressing the data protection (and other) risks associated with using AI systems.


University of Illinois Professor Vikram Adve to lead new artificial intelligence institute – newsindiatimes.com

Computer science professor Vikram Adve of the University of Illinois will lead the AI Institute for Future Agricultural Resilience, Management and Sustainability, funded by the federal government. Photo: L. Brian Stauffer at illinois.edu

The National Science Foundation and the U.S. Department of Agriculture's National Institute of Food and Agriculture are announcing an investment of more than $140 million to establish seven artificial intelligence institutes in the U.S.

One of the new AI institutes will be led by an Indian American, Professor Vikram Adve of the University of Illinois, Urbana-Champaign, according to a press release from U of I. Each of the new institutes will receive about $20 million over five years.

Two of the seven AI institutes will be led by teams at the University of Illinois, Urbana-Champaign, one by Adve. They will support the work of researchers at the U. of I. and their partners at other academic and research institutions.

The USDA-NIFA will fund the AI Institute for Future Agricultural Resilience, Management and Sustainability (AIFARMS) at the U. of I., led by Adve, a computer science professor.

The NSF will fund the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing.

AIFARMS will advance AI research in computer vision, machine learning, soft-object manipulation and intuitive human-robot interaction to solve major agricultural challenges, the NSF reports.

Such challenges include sustainable intensification with limited labor, efficiency and welfare in animal agriculture, the environmental resilience of crops and the preservation of soil health. The institute will feature a novel autonomous farm of the future, new education and outreach pathways for diversifying the workforce in agriculture and technology, and a global clearinghouse to foster collaboration in AI-driven agricultural research, Adve is quoted as saying in the press release.

The Molecule Maker Lab Institute will focus on the development of new AI-enabled tools to accelerate automated chemical synthesis and advance the discovery and manufacture of novel materials and bioactive compounds.

Read more from the original source:
University of Illinois Professor Vikram Adve to lead new artificial intelligence institute - newsindiatimes.com

Read More..

This researcher is getting others ready for a quantum world – Siliconrepublic.com

Dr Araceli Venegas-Gomez was inspired to become a quantum physics researcher after working in industry, and is now looking to inspire others.

Dr Araceli Venegas-Gomez spent several years working for Airbus in Germany as an aerospace engineer, before falling in love with quantum mechanics. She decided to follow her passion and moved to Scotland to pursue a PhD in quantum physics at the University of Strathclyde.

Following discussions with different quantum stakeholders over the last few years, Venegas-Gomez identified the need to bridge the gap between businesses and academia, and raise awareness of quantum research among the general public.

She was awarded an Optical Society fellowship in international outreach, becoming a global ambassador advocating for quantum technologies. To create a link between the different stakeholders in the quantum community and generate global opportunities with quantum technologies, she founded her own company, Qureca.

Qureca offers professional services, business development and an online platform for training and recruitment within quantum technologies. It is part of the EU Quantum Flagship programme, which was launched in 2018 to kick-start a competitive European industry in quantum technologies.

'I hope I can support the development of the skills necessary for the future quantum workforce' – ARACELI VENEGAS-GOMEZ

While working in industry I always wanted to learn more about physics, so I enrolled in a distance-learning medical physics postgraduate programme. When I was learning more about magnetic resonance imaging, I started to research articles about quantum physics and became really interested in the topic.

I then did some online courses and took annual leave from my work to attend conferences. It was clear to me that I wanted to go in that direction.

It was not until I bought a book called Do What You Love, The Money Will Follow that I asked myself what I wanted to do with my life, and I knew my next goal in life was to do a PhD in quantum physics.

My PhD was in quantum simulation in Prof Andrew Daley's group, Quantum Optics and Many-body Physics.

I worked on dynamics in many-body quantum systems with different ranges of interactions, where we investigated quantum magnetism with spin models.

This work helps us understand fundamental questions in the study of out-of-equilibrium dynamics of many-body systems.

These theoretical studies can be directly applied in cold-atom experiments and could open up new ways to engineer magnetism at the quantum level for new systems and future materials.

Coming from a different background, it was always hard to feel fully integrated, and even now I feel I have a lot to learn before I can be confident in any scientific conversation.

It is important to understand that science and research is a marathon and not a sprint. This can be applied to any area of research.

With my company, I hope I can support the development of the skills necessary for the future quantum workforce.

I had the pleasure to meet William D Phillips [winner of the Nobel Prize in Physics in 1997] several times. His approach to students, the way he participates in any event with such eagerness to learn, and how much he enjoys teaching difficult concepts in physics to the general public are admirable.

Are you a researcher with an interesting project to share? Let us know by emailing editorial@siliconrepublic.com with the subject line Science Uncovered.

Read more here:

This researcher is getting others ready for a quantum world - Siliconrepublic.com

Read More..

What is Nanoscience? | Outlook and How to Invest | INN – Investing News Network

Nanoscience has made an impact on a range of industries. With continuous developments, it will only get more exciting for investors.

Through nanotechnology, nanoscience has undeniably impacted a range of industries, from energy to medicine. In the face of continuous nanotechnology research and development, experts are promising an exciting future for the industry.

The terms nanoscience and nanotechnology have been around for a long time, and it's common for them to be used interchangeably. However, it's important to note that they are not the same.

According to Erasmus Mundus, the European Union's higher education program, nanoscience refers to the study, manipulation and engineering of particles and structures on a nanometer scale. For its part, nanotechnology is described as the design and application of nanoscience.

In simple terms, nanoscience is the study of nanomaterials and properties, while nanotechnology is using these materials and properties to create a new product.

Here the Investing News Network provides a comprehensive look at nanoscience investing and nanomaterials, with an overview of the subjects and where they are headed in the future.

The University of Sydney's Nano Institute describes nanoscience as the study of the structure and function of materials on the nanometer scale.

A nanometer is a unit of length roughly equal to the width of about 10 atoms in a row. At that scale, light and matter behave differently than they do at everyday sizes.

These behaviours often defy the classical laws of physics and chemistry and can only be understood using the laws of quantum mechanics, the university's research page states.

The Institute of Nanoscience of Aragon identifies carbon nanotubes (CNTs) as one example of a component that is designed at the nanoscale level. These structures are stronger than steel at the macroscale level. CNT powders are currently used in diverse commercial products, from rechargeable batteries to automotive parts to water filters.

Scientists, researchers and industry experts are enthusiastic about nanoscience and nanoparticles.

As noted in a study published by Jeffrey C. Grossman, a University of California student, quantum properties come into play at the nanoscale level. In simple terms, at that scale a material's optical properties, such as color, can be controlled.

Further, the paper states that the surface-to-volume ratio increases at the nano size, opening up new possibilities for applications in catalysis, filtering, and new composite materials, to name only a few.

In other words, the increase in available surface area opens up new possibilities and can have drastic effects on industries such as manufacturing. New applications in catalysis can speed up production, while new composite materials can add more dimension to an end product. The scaling behind this is easy to see for a simple shape, as the sketch below shows.
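To make the surface-to-volume claim concrete, consider a sphere: its surface area is 4πr² and its volume is (4/3)πr³, so the surface-to-volume ratio works out to 3/r and grows as the particle shrinks. A minimal illustration with made-up sizes (the function name is ours, not from the article):

```python
def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a sphere of radius r (in meters):
    (4 * pi * r**2) / ((4/3) * pi * r**3) simplifies to 3 / r."""
    return 3.0 / radius_m

# Shrinking a particle from 1 mm down to 10 nm multiplies its
# surface-to-volume ratio by a factor of 100,000.
for r in (1e-3, 1e-6, 1e-8):  # 1 mm, 1 micron, 10 nm
    print(f"r = {r:.0e} m -> S/V = {surface_to_volume(r):.1e} per meter")
```

This 1/r scaling is why a gram of nanoparticles exposes vastly more reactive surface than the same gram of bulk material, which is the property catalysis and filtering applications exploit.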

Nanoscale developments could also lead to increased resources and could play a role in the energy sector by increasing efficiency.

As the Royal Society puts it, the aim of nanoscience and nanotechnologies is to produce new or enhanced nanoscale materials.

Nanomaterials are materials whose properties have been changed at the nanoscale. Each contains at least one nanoscale structure, and nanomaterials fall into several subcategories based on their shape and size.

According to the Royal Society, nanowires, nanotubes and nanoparticles like quantum dots, along with nanocrystalline materials, are said to be nanomaterials.

While these are broader classifications of nanomaterials, each of them has several submaterials. Graphene is one popular submaterial and is an example of a nanoplate.

The Integrated Nano-Science & Commodity Exchange, a self-regulated commodity exchange, includes a wide range of nanomaterials and related commodities and lists more than 1,000 nanomaterials.

The exchange states that its entire product range is in excess of 4,500 products, including CNTs, graphene, graphite, ceramics, drug-delivery nanoparticles, metals, nanowires, micron powders, conductive inks, nano-fertilizers and nano-polymers.

As can be seen, nanoscience and nanotechnology are used in a variety of applications across diverse fields, from energy to manufacturing. The University of Sydney's Nano Institute highlights how nanoscience impacts manufacturing, energy and the environment through the continuous development of new nano and quantum materials.

With the advancement of materials science and technology, solutions are being developed for health and medicine, with nanobots gaining popularity in the medical field.

Similarly, nanomaterials like graphene are having a major impact in the technology field; graphene is used for various purposes, including in cooling and in batteries.

According to IndustryARC, the global nanotechnology market is projected to reach US$121.8 billion by 2025, growing at a compound annual growth rate of 14.3 percent between 2020 and 2025.
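As a rough sanity check on those figures (our arithmetic, not IndustryARC's): a market reaching US$121.8 billion in 2025 after five years of 14.3 percent compound annual growth implies a 2020 base of roughly US$62 billion, since growth compounds as base × (1 + CAGR)^years.

```python
# Back-of-the-envelope check of the projection (not from the report):
# future = base * (1 + cagr) ** years  =>  base = future / (1 + cagr) ** years
future_usd_bn = 121.8   # projected 2025 market size, US$ billions
cagr = 0.143            # 14.3 percent compound annual growth rate
years = 5               # 2020 -> 2025

implied_2020_base = future_usd_bn / (1 + cagr) ** years
print(f"Implied 2020 market size: ~US${implied_2020_base:.1f}B")  # ~62.4
```

The implied base is consistent with the projection, so the headline figures at least hang together arithmetically.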

In the US, the National Nanotechnology Initiative, a US government research and development initiative that involves 20 federal and independent agencies, has received cumulative funding of US$27 billion since 2001 to advance research and development of nanoscale projects.

With growth predicted across multiple areas and industries, and with researchers and institutes working on developing the nanoscience field, investors have a slew of nanotechnology stocks to consider.

One popular investment avenue is via graphene, with companies in the space including Applied Graphene Materials (LSE:AGM,OTC Pink:APGMF) and Haydale Graphene Industries (LSE:HAYD). Meanwhile, nanotech stock options include firms such as NanoViricides (NYSE:NNVC), Nano Dimension (NASDAQ:NNDM) and Sona Nanotech (CSE:SONA).

This is an updated version of an article first published by the Investing News Network in 2019.

Don't forget to follow us @INN_Technology for real-time updates!

Securities Disclosure: I, Melissa Pistilli, hold no direct investment interest in any company mentioned in this article.

More:

What is Nanoscience? | Outlook and How to Invest | INN - Investing News Network

Read More..

NU receives $115 million federal grant to research and develop beyond state-of-the-art quantum computer – Daily Northwestern

Courtesy of James Sauls

SQMS director Anna Grassellino and deputy director James Sauls hold a superconducting radio-frequency cavity at Fermilab. These cavities play a pivotal role in developing quantum technologies.

Grace Wu, Assistant Video Editor | September 23, 2020

The U.S. Department of Energy recently announced that the Superconducting Quantum Materials and Systems Center, a partnership between NU and Fermi National Accelerator Laboratory, was selected as one of five national quantum information science centers.

The grant, as part of the National Quantum Initiative, consists of $115 million to be used over five years. Quantum information science has huge implications for multiple fields, ranging from traditional sciences such as physics and chemistry to industries like finance and national security, due to quantum computers' ability to process computations at never-before-seen speeds.

Quantum computers are able to perform computations at such high speeds because of the nature of quantum bits, or qubits. In a classical, or traditional, computer, data is encoded in binary bits, each a one or a zero, and each bit can exist in only one of these states at a time. Qubits are unique because quantum superposition allows them to exist in a combination of both states simultaneously, creating exponential computing potential, as the sketch below illustrates.
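A minimal sketch of why the state space grows exponentially (illustrative only, and nothing to do with the SQMS design): describing an n-qubit register classically requires 2^n complex amplitudes, so the memory needed to simulate one doubles with every added qubit.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition:
    2**n complex amplitudes, each measured with probability 1/2**n."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# Counting amplitudes is cheap; storing them is not. A 53-qubit state
# vector would need 2**53 * 16 bytes, about 144 petabytes, which is
# exactly why classical simulation breaks down at that scale.
for n in (1, 10, 53):
    print(f"{n} qubits -> {2**n:,} amplitudes")
```

Running this prints 2 amplitudes for 1 qubit, 1,024 for 10 qubits, and 9,007,199,254,740,992 for 53 qubits, the scale of Google's Sycamore chip discussed below.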

Google and IBM have already each built their own quantum computer. At its unveiling in October 2019, Google's Sycamore quantum computer could operate on 53 qubits, meaning its quantum state spans two to the 53rd power, roughly nine quadrillion, possible configurations. For context, the Sycamore processor performed a computation in 200 seconds that would take the fastest classical computers about 10,000 years.
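For a sense of scale (our arithmetic, not the article's): 10,000 years is roughly 3.16 × 10^11 seconds, so the claimed advantage over the 200-second run is on the order of a billion-fold.

```python
# Quick check of the claimed Sycamore speedup (figures from the article):
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.16e7 seconds
classical_s = 10_000 * SECONDS_PER_YEAR   # ~3.16e11 seconds
sycamore_s = 200

print(f"Claimed speedup: ~{classical_s / sycamore_s:.2e}x")  # ~1.58e9
```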

Of the five NQI centers, SQMS is the only one that has proposed to build a next-generation, state-of-the-art quantum computer, according to SQMS deputy director and McCormick Prof. James Sauls. The computer is intended to process more than 100 qubits, and NU has the technology to engineer it, Sauls added.

There will be some synergy between these five different centers as we grow, and we figure out what each other are doing, Sauls said. Another mission that the Department of Energy has is to make the five national centers something bigger than just the sum of each part.

As for NU's partnership with Fermilab in SQMS, research in quantum information science was already underway before the proposal for the DOE grant was submitted, according to McCormick Prof. Peter Voorhees, one of the six materials science and engineering faculty working in the center. Fermilab has some of the world's best superconducting electromagnetic cavities, and Northwestern has already established strength and knowledge in the field of superconductivity and materials science, Voorhees said.

We've been working on it before, and we were waiting for people to agree with us that it's an important thing to do, Voorhees said. Between (Fermilab) and Northwestern, this is the place where you want to put your investment.

SQMS has four components: developing the technology, building the devices, elaborating on the physics of the sensing, and recruiting young scientists and providing them with research experience and learning in quantum information science, Sauls said.

There are currently 35 people on staff from NU, a number that will easily grow to 50, Sauls said. Faculty from the physics and astronomy, materials science and engineering, and electrical engineering departments will lead research and engineering initiatives. SQMS will also work in conjunction with Rigetti Computing, the National Institute of Standards and Technology and other universities.

In addition to engineering a powerful computer, SQMS will engage in fundamental science research, as the same devices used for computing can also be used to detect particles such as dark matter, Sauls said.

Research funded by the DOE has not yet commenced because the grant hasn't arrived yet at NU, Voorhees said. However, he said he thinks it's going to happen in record time due to the technology's important implications.

I think it's really fun and exciting to be part of an effort that's truly interdisciplinary, Voorhees said. (It) involves a national lab (and) people from other disciplines that is in an area that's so exciting and promising for the future.

Email: [emailprotected] | Twitter: @gracewu_10

Related Stories: Northwestern's Center for Molecular Quantum Transduction receives $12.4 million in research funding

Read more:
NU receives $115 million federal grant to research and develop beyond state-of-the-art quantum computer - Daily Northwestern

Read More..