There is widespread agreement that responsible artificial intelligence requires principles such as fairness, transparency, privacy, human safety, and explainability. Nearly all ethicists and tech policy advocates stress these factors and push for algorithms that are fair, transparent, safe, and understandable.1
But it is not always clear how to operationalize these broad principles or how to handle conflicts between competing goals.2 It is not easy to move from the abstract to the concrete in developing algorithms, and sometimes a focus on one goal comes at the expense of alternative objectives.3
In the criminal justice area, for example, Richard Berk and colleagues argue that there are many kinds of fairness, that it is impossible to maximize accuracy and fairness at the same time, and that it is impossible to satisfy all kinds of fairness simultaneously.4 While sobering, that assessment likely is on the mark and therefore must be part of our thinking on ways to resolve these tensions.
Algorithms also can be problematic because they are sensitive to small data shifts. Ke Yang and colleagues note this reality and say designers need to be careful in system development. Worryingly, they point out that small changes in the input data or in the ranking methodology may lead to drastic changes in the output, making the result uninformative and easy to manipulate.5
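This sensitivity is easy to demonstrate. The sketch below, using entirely hypothetical vendors, scores, and weights, shows how a modest change in a ranking methodology reverses the order of two near-tied candidates:

```python
# Sketch: a weighted-sum ranking where a small change in methodology
# (the weights) flips the output. All items and numbers are invented.

def rank(items, weights):
    """Rank items (dicts of feature scores) by weighted sum, best first."""
    scored = [(sum(w * feats[f] for f, w in weights.items()), name)
              for name, feats in items.items()]
    return [name for _, name in sorted(scored, reverse=True)]

items = {
    "Vendor A": {"accuracy": 0.92, "cost_score": 0.40},
    "Vendor B": {"accuracy": 0.88, "cost_score": 0.46},
}

# Equal weights put Vendor B first; nudging the accuracy weight up
# by 0.15 reverses the ranking.
print(rank(items, {"accuracy": 0.50, "cost_score": 0.50}))
print(rank(items, {"accuracy": 0.65, "cost_score": 0.35}))
```

A ranking that flips under such small methodological changes conveys little reliable information, which is exactly the manipulation risk Yang and colleagues describe.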
In addition, it is hard to improve transparency with digital tools that are inherently complex. Even though the European Union has sought to promote AI transparency, researchers have found limited gains in consumer understanding of algorithms or the factors that guide AI decisionmaking. Even as AI becomes ubiquitous, it remains an indecipherable black box for most individuals.6
In this paper, I discuss ways to operationalize responsible AI in the federal government. I argue there are six steps to responsible implementation:
There need to be codes of conduct that outline major ethical standards, values, and principles. Some principles cut across federal agencies and are common to each one, such as protecting fairness, transparency, privacy, and human safety. Regardless of what a government agency does, it needs to ensure that its algorithms are unbiased, transparent, safe, and capable of maintaining the confidentiality of personal records.7
But other parts of codes need to be tailored to particular agency missions and activities. In the domestic area, for example, agencies that work on education and health care must be especially sensitive to the confidentiality of records. There are existing laws and rights that must be upheld and algorithms cannot violate current privacy standards or analyze information in ways that generate unfair or intrusive results.8
In the defense area, agencies have to consider questions related to the conduct of war, how automated technologies are deployed in the field, ways to integrate intelligence analytics into mission performance, and mechanisms for keeping humans in the decisionmaking loop. With facial recognition software, remote sensors, and autonomous weapons systems, there have to be guardrails regarding acceptable versus unacceptable uses.
As an illustration of how this can happen, many countries came together in the 20th century and negotiated agreements outlawing the use of chemical and biological weapons, and the first use of nuclear weapons. There were treaties and agreements that mandated third-party inspections and transparency regarding the number and type of weapons. Even at a time when weapons of mass destruction were pointed at enemies, adversarial countries talked to one another, worked out agreements, and negotiated differences for the safety of humanity.
As the globe moves towards greater and more sophisticated technological innovation, both domestically and in terms of military and national security, leaders must undertake talks that enshrine core principles and develop conduct codes that put those principles into concrete language. Failure to do this risks using AI in ways that are unfair, dangerous, or not very transparent.9
Some municipalities already have enacted procedural safeguards regarding surveillance technologies. Seattle, for example, has enacted a surveillance ordinance that establishes parameters for acceptable uses and mechanisms for the public to report abuses and offer feedback. The law defines the technologies that fall under its scope but also illustrates possible pitfalls. In such legislation, it is necessary to define which tools rely upon algorithms and/or machine learning and how to distinguish such technologies from conventional software that analyzes data and acts on that analysis.10 Conduct codes won't be very helpful unless they clearly delineate the scope of their coverage.
Employees need appropriate operational tools that help them safely design and deploy algorithms. Previously, developing an AI application required detailed understanding of technical operations and advanced coding. With high-level applications, there might be more than a million lines of code to instruct processors on how to perform certain tasks. Within these elaborate software packages, it is difficult to trace how broad principles are upheld or how particular programming decisions might create unanticipated consequences.
But now there are AI templates that bring sophisticated capabilities to people who aren't engineers or computer scientists. The advantage of templates is that they increase the scope and breadth of applications across a variety of areas and enable officials without strong technical backgrounds to use AI and robotic process automation in federal agencies.
At the same time, though, it is vital that templates be designed in ways where their operational deployment promotes ethics and fights bias. Ethicists, social scientists, and lawyers need to be integrated into product design so that laypeople have confidence in the use of these tools. There cannot be questions about how these packages operate or on what basis they make decisions. Agency officials have to feel confident that algorithms will make decisions impartially and safely.
Right now, it sometimes is difficult for agency officials to figure out how to assess risk or build emerging technologies into their missions.11 They want to innovate and understand they need to expedite the use of technology in the public sector. But they are not certain whether to develop products in-house or rely on proprietary or open-source software from the commercial market.
One way to deal with this issue is to have procurement systems that help government officials choose products and design systems that work for them. If the deployment is relatively straightforward and resembles processes common in the private sector, commercial products may be perfectly viable as a digital solution. But if there are complexities in terms of mission or design, there may need to be proprietary software designed for that particular mission. In either circumstance, government officials need a procurement process that meets their needs and helps them choose appropriate products.
We also need to keep humans in some types of AI decisionmaking loops so that human oversight can overcome possible deficiencies of automated software. Carnegie Mellon University Professor Maria De-Arteaga and her colleagues suggest that machines can reach false or dangerous conclusions and human review is essential for responsible AI.12
However, University of Michigan Professor Ben Green argues that it is not clear that humans are very effective at overseeing algorithms. Such an approach requires technical expertise that most people lack. Instead, he says there needs to be more research on whether humans are capable of overcoming human-based biases, inconsistencies, and imperfections.13 Unless humans get better at overcoming their own conscious and unconscious biases, manual oversight runs the risk of making bias problems worse.
In addition, operational tools must be human-centered and fit the agency mission. Algorithms that do not align with how government officials function are likely to fail and not achieve their objectives. In the health care area, for example, clinical decisionmaking software that does not fit well with how doctors manage their activities is generally not successful. Research by Qian Yang and her colleagues documents how user-centered design is important for helping physicians use data-driven tools and integrating AI into their decisionmaking.14
Finally, the community and organizational context matter. As argued by Michael Katell and colleagues, some of the most meaningful responsible AI safeguards are based not on technical criteria but on organizational and mission-related factors.15 The operationalization of AI principles needs to be tailored to particular areas in ways that advance agency mission. Algorithms that are not compatible with major goals and key activities are not likely to work well.
To have responsible AI, we need clear evaluation benchmarks and metrics. Both agency and third-party organizations require a means of determining whether algorithms are serving agency missions and delivering outcomes that meet conduct codes.
One virtue of digital systems is they generate a large amount of data that can be analyzed in real-time and used to assess performance. They enable benchmarks that allow agency officials to track performance and assure algorithms are delivering on stated objectives and making decisions in fair and unbiased ways.
To be effective, performance benchmarks should distinguish between substantive and procedural fairness. The former refers to equity in outcomes, while the latter involves the fairness of the process, and many researchers argue that both are essential. Work by Nina Grgic-Hlaca and colleagues, for example, suggests that procedural fairness needs to consider the input features used in the decision process and evaluate the moral judgments of humans regarding the use of these features. They use a survey to validate their conclusions and find that procedural fairness may be achieved with little cost to outcome fairness.16
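As a rough illustration of the distinction, the sketch below separates an outcome (substantive) check, comparing approval rates across groups, from a procedural check on which input features the decision process is allowed to use. The groups, outcomes, and feature names are hypothetical:

```python
# Sketch: substantive vs. procedural fairness checks for a hypothetical
# benefit-approval algorithm. All groups, outcomes, and features invented.

def approval_rates(decisions):
    """Substantive check: approval rate per group (1 = approved)."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def prohibited_inputs(input_features, prohibited):
    """Procedural check: which prohibited features feed the decision?"""
    return sorted(set(input_features) & set(prohibited))

decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 1]}
print(approval_rates(decisions))        # reveals an outcome gap across groups
print(prohibited_inputs(["income", "race"], ["race", "gender"]))
```

A real benchmark would of course need far richer data and statistical care; the point is only that outcome audits and process audits answer different questions and both belong in an evaluation regime.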
Joshua New and Daniel Castro of the Center for Data Innovation suggest that error analysis can lead to better AI outcomes. They call for three kinds of analysis (manual review, variance analysis, and bias analysis). Comparing actual and planned behavior is important as is identifying cases where systematic errors occur.17 Building those types of assessments into agency benchmarking would help guarantee safe and fair AI.
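One of these checks, bias analysis, can be sketched in a few lines: compare the algorithm's error rate (its decision versus the reviewed, correct outcome) across groups and flag systematic gaps. The records below are hypothetical:

```python
# Sketch: bias analysis comparing error rates across groups. Each record
# pairs the algorithm's decision with the reviewed outcome; data is invented.

from collections import defaultdict

def error_rates_by_group(records):
    """Share of records per group where predicted != actual."""
    errors, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["predicted"] != rec["actual"]:
            errors[rec["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 0},
    {"group": "B", "predicted": 0, "actual": 0},
]
print(error_rates_by_group(records))  # a systematic gap: B errs, A does not
```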
A way to ensure useful benchmarking is through open architecture that enables data sharing and open application programming interfaces (APIs). Open-source software helps others keep track of how AI is performing, and data sharing enables third-party organizations to assess performance. APIs are crucial to data exchange because they help integrate information from a variety of different sources. AI often has impact in many areas, so it is vital to compile and analyze data from several domains so that its full impact can be evaluated.
Technical standards represent a way for skilled professionals to agree on common specifications that guide product development. Rather than having each organization develop its own technology safeguards, which could lead to idiosyncratic or inconsistent designs, there can be common solutions to well-known problems of safety and privacy protection. Once academic and industry experts agree on technical standards, it becomes easy to design products around those standards and safeguard common values.
An area that would benefit from technical standards is fairness and equity. One of the complications of many AI algorithms is the difficulty of measuring fairness. As an illustration, fair housing laws prohibit financial officials from basing loan decisions on race, gender, or marital status.
Yet AI designers, either inadvertently or intentionally, can find proxies that approximate these characteristics and thereby incorporate information about protected categories without the explicit use of demographic background.18
AI experts need technical standards that guard against unfair outcomes and proxy factors that allow back-door consideration of protected characteristics. It does not help to have AI applications that indirectly enable discrimination by identifying qualities associated with race or gender and incorporating them in algorithmic decisions. Making sure this does not happen should be a high priority for system designers.
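A simple screening step along these lines is to measure how strongly each candidate input feature correlates with a protected attribute and flag high-correlation features for human review. The sketch below uses a plain Pearson correlation and invented data; a real proxy audit would require richer statistical tests and domain judgment:

```python
# Sketch: flag input features that correlate strongly with a protected
# attribute and may serve as back-door proxies. All data is hypothetical.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxies(features, protected, threshold=0.8):
    """Names of features whose |r| with the protected attribute >= threshold."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) >= threshold]

protected = [0, 0, 1, 1, 0, 1]  # hypothetical protected-class indicator
features = {
    "zip_code_score": [0.1, 0.2, 0.9, 0.8, 0.1, 0.9],  # tracks the attribute
    "years_employed": [3, 7, 5, 2, 8, 4],              # does not
}
print(flag_proxies(features, protected))  # ['zip_code_score']
```

A technical standard could specify which tests to run, what thresholds trigger review, and how flagged features must be documented before deployment.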
Pilot projects and organizational sandboxes represent ways for agency personnel to experiment with AI deployments without great risk or subjecting large numbers of people to possible harm. Small scale projects that can be scaled up when preliminary tests go well protect AI designers from catastrophic failures while still offering opportunities to deploy the latest algorithms.
Federal agencies typically go through several review stages before launching pilot projects. According to Dillon Reisman and colleagues at AI Now, there are pre-acquisition reviews, initial agency disclosures, comment periods, and due process challenge periods. Throughout these reviews, there should be regular public notices so vendors know the status of the project. In addition, there should be careful attention to due process and disparate impact analysis.
As part of experimentation, there needs to be rigorous assessment. Reisman recommends opportunities for researchers and auditors to review systems once they are deployed.19 Building assessment into design and deployment maximizes the chance to mitigate harms before they reach a wide scale.
The key to successful AI operationalization is a well-trained workforce where people have a mix of technical and nontechnical skills. AI impact can range so broadly that agencies require lawyers, social scientists, policy experts, ethicists, and system designers in order to assess all its ramifications. No single type of expertise will be sufficient for the operationalization of responsible AI.
For that reason, agency executives need to provide funded options for professional development so that employees gain the skills required for emerging technologies.20 As noted in my previous work, there are professional development opportunities through four-year colleges and universities, community colleges, private sector training, certificate programs, and online courses, and each plays a valuable role in workforce development.21
Federal agencies should take these responsibilities seriously because it will be hard for them to innovate and advance unless they have a workforce whose training is commensurate with technology innovation and agency mission. Employees have to stay abreast of important developments and learn how to implement technological applications in their particular divisions.
Technology is an area where breadth of expertise is as important as depth. We are used to allowing technical people to make most of the major decisions about computer software. Yet with AI, it is important to have access to a diverse set of skills, including those of a non-technical nature. A Data and Society article recommended that it is crucial to invite a broad and diverse range of participants into a consensus-based process for arranging its constitutive components.22 Without access to individuals with societal and ethical expertise, it will be impossible to implement responsible AI.
Thanks to James Seddon for his outstanding research assistance on this project.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.
Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
Six Steps to Responsible AI in the Federal Government - Brookings Institution