Category Archives: AlphaGo

Different Types of Robot Programming Languages – Analytics Insight

Robots are among the most impactful applications of modern science. They not only reduce human labor but also carry out tasks with far fewer errors. Many businesses are expressing an interest in robotics, and automated machines have gained popularity in recent years. With that in mind, this article looks at robot programming languages.

For robots to carry out tasks, they must be programmed. Robot programming is the process through which robots receive instructions from computers, and a robotics programmer must be fluent in several programming languages. So let's get started.

Roughly 1,500 programming languages are used for robotics worldwide, all of them playing some role in programming and training robots. In this section, we go through the top languages available today.

The easiest way to get started with robotics is to learn C and C++. Both are general-purpose programming languages with closely related syntax; C++ extends C with object orientation and other features. It is easy to see why C++ is the most popular robot programming language: it enables low-level hardware access and delivers real-time performance.

C++ is the most mature language for getting the best results from a robot. A typical C++ robot program is structured around three parts: the constructor, the Autonomous method, and the OperatorControl method. The constructor holds the initialization code that runs when the robot class is built, at the very start of the program.

It initializes sensors and creates other WPILib objects. The Autonomous method contains the code that runs during the autonomous period, which lasts only a set amount of time. The robot then moves on to the teleoperated phase, which is handled by the OperatorControl method.
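That three-part structure reads more clearly as code. The sketch below is an illustration written in Python for brevity (the article discusses the C++ form); the class and method names mirror the description above but are not the actual WPILib API.

```python
class DemoRobot:
    """Illustrative robot program structure: constructor, autonomous, operator control."""

    def __init__(self):
        # Constructor: initialization code that runs once, at the very start of the program.
        self.sensors_ready = True
        print("Sensors initialized, objects created")

    def autonomous(self, duration_s=15):
        # Autonomous: runs on its own for a fixed period at the start of a match.
        print(f"Running pre-programmed routine for {duration_s} seconds")

    def operator_control(self):
        # OperatorControl: the teleoperated phase, driven by operator input.
        print("Handing control to the human operator")


robot = DemoRobot()
robot.autonomous()
robot.operator_control()
```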

Python is a powerful language for creating and testing robots. In terms of automation and offline robot programming, it outperforms many other platforms. You can use it to build a script that computes, records, and activates a robot program.

Nothing has to be taught by hand, which enables rapid testing and visualization of simulations, programs, and logic. Python typically needs fewer lines of code than other languages and ships with a large number of libraries for fundamental functions. Python's primary goal is to make programming easier and faster.

Any object can be created, modified, or deleted, and the robot's motions can be coded in the same script, all with very little code. This is why Python ranks among the finest robot programming languages.
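To make that workflow concrete, here is a minimal sketch in Python. The Robot class and its move_to method are hypothetical stand-ins for whatever vendor SDK or simulator a real project would use; the point is only that the path is computed and recorded in a script rather than taught by hand.

```python
import math

# Hypothetical robot interface used purely for illustration; a real project would
# import a vendor SDK or a simulation library instead of defining its own class.
class Robot:
    def __init__(self, name):
        self.name = name
        self.trajectory = []                    # recorded motion targets

    def move_to(self, x, y, z):
        self.trajectory.append((x, y, z))       # record the motion instead of teaching by hand
        print(f"{self.name}: moving to ({x:.1f}, {y:.1f}, {z:.1f})")


def circular_path(radius, steps):
    """Compute a circular tool path: the 'calculate' part of the script."""
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        yield radius * math.cos(angle), radius * math.sin(angle), 50.0


robot = Robot("demo-arm")
for point in circular_path(radius=100.0, steps=12):
    robot.move_to(*point)

print(f"Recorded {len(robot.trajectory)} waypoints")
```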

Java is a programming language that enables robots to perform activities similar to those performed by humans. It provides a variety of APIs tailored to the needs of robotics and offers strong support for artificial-intelligence features.

It lets you construct high-level algorithms, search routines, and neural-network algorithms, and it allows the same code to run on many different machines.

Java is not compiled to native machine code; instead, at run time the Java Virtual Machine interprets its instructions. This portability has made Java quite popular in robotics and gives it an edge over several alternative robot programming languages. Java is used by modern AI systems such as IBM Watson and AlphaGo.

Microsoft's .NET framework is used to create applications with Visual Studio. It provides a good basis for anyone interested in pursuing a career in robotics, and programmers primarily use it for port and socket development.

It supports various languages while allowing for horizontal scaling. It also offers a uniform environment and makes programming in C++ or Java easier. All of the tools and IDEs have been thoroughly tested and are accessible on the Microsoft Developer Network.

In addition, mixing languages is seamless. As a result, we can confidently rank .NET among the best robot programming languages.

In robotics engineering, MATLAB and open-source cousins such as Octave are extremely popular. For data analysis, it is considerably ahead of many other robot programming languages. MATLAB is not really a general-purpose programming language in the traditional sense, yet it is where engineering solutions based on complex mathematics are often built.

Robot developers can create sophisticated plots from MATLAB data, which is very helpful when developing a complete robotic system, and it has helped establish deep robotics foundations in industry. It is a tool that lets you apply your methods and simulate the outcome, so engineers can use the simulation to fine-tune the system design and eliminate mistakes.

There have been cases where MATLAB was used to build a complete robot, so it deserves a place among the top languages. The KUKA KR6 is one of the best examples of MATLAB in action: its developers used MATLAB to design and simulate the robot.

Lisp was one of the first programming languages used in robotics. It was introduced to let computer programs express mathematical notation directly. Lisp is mostly used in the AI domain, including for parts of robot operating systems such as ROS.

Tree data structures, automatic storage management, dynamic typing, and higher-order functions are among its features. As a result, it is simple to use and helps eliminate implementation mistakes once an issue has been identified.

This problem-solving happens at the prototyping stage rather than in production. Lisp also offers capabilities such as the read-eval-print loop and a self-hosting compiler.

Pascal was one of the earliest programming languages to hit the market. It is still quite useful, especially for newcomers: it descends from the ALGOL family of languages and teaches excellent programming habits. Manufacturers have used Pascal as the basis for their own robot programming languages.

ABB's RAPID and KUKA's KRL are two examples. Nevertheless, most developers consider Pascal obsolete for everyday use, while still pointing out its value for newcomers.

It will help you pick up other robot programming languages more quickly, but it is only recommended for complete novices. Once you have gained some experience in robotics programming, you can transition to another language.

And that's a wrap. We hope you found this overview of robot programming languages helpful. We have covered the pros and cons of the top languages so you can choose the one most appropriate for your needs. Robotics has a promising future, so now is the ideal moment to get started.

Challenges and New Frontiers of AI – ETCIO.com

By Som Pal Choudhury

The phenomenal impact that Artificial Intelligence (AI) is projected to have on our economy and our daily lives is nothing short of astounding. AI is predicted to contribute significantly, an estimated $15.7 trillion, to the world economy by 2030. While its prominence has magnified its adoption and use cases, criticisms abound: adoption resulting in job losses, unintended biases, privacy and surveillance concerns, and even the energy-hogging data centres that build the AI models. As with any new technology, whether it is abused or used safely and productively, with the right ethics and regulations in place, rests on us.

With significant adoption underway in all facets of life and business, the challenges and concerns around training AI with unbiased data, data scarcity, trust, explainability and privacy are becoming the top concerns for broader adoption. Researchers and thought leaders worldwide are trying to solve them with several new frontiers emerging and being explored. We took a deeper dive to understand these challenges and summarise our learnings here.

Artificial Intelligence research has significantly picked up in India, and our review of patents and research shows a solid research base here in edge AI and Federated Learning. Large tech giants have released edge frameworks orthogonal to the well-entrenched cloud-based AI/ML. Federated learning involves a central server that collates information from many edge-generated models to create a global model without transferring local data for training. It has a hyper-personalised approach, is time-efficient, cost-effective and supposedly privacy friendly as user data is not sent to the cloud.
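A minimal sketch of that idea, federated averaging, is shown below using Python and NumPy. The three simulated edge devices, the one-parameter linear model and the learning rate are all toy assumptions; what matters is that each device trains locally on its private data and only the model weights are sent to the server, which averages them into a global model.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, x, y, lr=0.1, epochs=20):
    """Train a one-parameter linear model y ~ w*x on one device's private data."""
    for _ in range(epochs):
        grad = 2 * np.mean((w * x - y) * x)     # gradient of the mean squared error
        w = w - lr * grad
    return w

# Three edge devices, each holding private data drawn from y = 3x + noise.
devices = []
for _ in range(3):
    x = rng.uniform(0, 1, 50)
    devices.append((x, 3 * x + rng.normal(0, 0.1, x.size)))

global_w = 0.0
for _ in range(10):                             # communication rounds
    # Each device starts from the current global model and trains locally.
    local_weights = [local_update(global_w, x, y) for x, y in devices]
    # The server sees only the weights, never the raw data, and averages them.
    global_w = float(np.mean(local_weights))

print(f"Global weight after federated training: {global_w:.2f}")   # close to 3
```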

AutoML has seen significant progress in freeing data scientists from repetitive and time-consuming tasks, from data cleaning to playing around with different models and hyperparameters and eventually fine-tuning them for best results. AutoML typically uses reinforcement learning and recurrent neural network approaches so that models and parameters start from an initial or auto-picked configuration and are continuously and automatically refined based on results.

There is a wide variety of platforms on the market today, and we are at Gen 3 of AutoML evolution, with more verticalised, domain-specific platforms. Most platforms only select the model and the hyperparameters, which means data scientists still need to do the bulk of the work in data preparation and cleaning, where the majority of time is often spent. Other, more advanced platforms also include cleaning, encoding and feature extraction, a must for building a good model quickly, but the approach is template-driven and may not always be a good fit.
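As a rough, simplified stand-in for what such platforms automate, the sketch below searches over a couple of candidate models and hyperparameter grids with scikit-learn (an assumption; any comparable library would do). A real AutoML system would also automate the data preparation that this sketch leaves out.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate models and hyperparameter grids; a real AutoML platform would also
# automate data cleaning, encoding and feature extraction around this loop.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [None, 5]}),
]

best_score, best_model = -1.0, None
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=5)    # cross-validated grid search
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Best cross-validated accuracy: {best_score:.3f}")
print(f"Held-out test accuracy:        {best_model.score(X_test, y_test):.3f}")
```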

AI practitioners have always been plagued by a paucity of data, and hence the effort to generate acceptable models with reduced datasets, or simply the quest to find more data. Finding more data includes public annotated data (e.g. Google public datasets, AWS open data), data augmentation (running transforms on the available data) and transfer learning (where a similar but larger dataset is used to pre-train the models). Rapid progress continues on the creation of artificial or synthetic data. The Synthetic Minority Over-sampling Technique (SMOTE) and several of its modifications are used in classic cases where minority-class data is sparse and hence oversampled. Generating completely new data with self-learning (AlphaGo self-played 4.9 million times) and simulation (recreating city traffic scenarios using gaming engines) are more recent approaches to creating synthetic data. Unfortunately, more data also amplifies the resource and time constraints of training, including the time and effort required to clean the data and remove noise, redundancies, outliers, etc. The holy grail of AI training is Few-Shot Learning (FSL), that is, training with a smaller dataset. It is an area of active research, as highlighted in a recent survey paper.
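To make the SMOTE step concrete, here is a brief sketch that assumes the imbalanced-learn package; it oversamples the minority class of a synthetic, imbalanced dataset so the two classes end up balanced.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic, heavily imbalanced dataset: roughly 5% minority class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("Before SMOTE:", Counter(y))

# SMOTE synthesises new minority samples by interpolating between existing ones.
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
print("After SMOTE: ", Counter(y_resampled))
```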

A vast amount of open-source models, datasets, active collaboration and benchmarks continues to accelerate AI development. OpenAI's GPT-3 launch took NLP to another level with 175 billion parameters trained on 570 gigabytes of text. Huawei recently trained a Chinese counterpart of GPT-3 with 1.1 terabytes of Chinese text. Alphabet subsidiary DeepMind's AlphaFold achieved the most significant breakthrough in biology with 92.4 percent accuracy in the well-known protein structure and folding prediction competition. Cityscapes has built a large-scale dataset of diverse urban street scenes from 50 cities. Beyond image and language recognition, the next frontier of AI is intent understanding from video. While India rose in the AI Vibrancy Index from rank 23 to 5 in 2021, a lot still needs to be done in terms of collaboration, open source and India-specific datasets.

With the growing need for security of sensitive and private information, there is a call for machine learning algorithms to be run on data that is protected by encryption. Homomorphic encryption (HE) is a concept that is now being leveraged to train models on data without decrypting it and risking data leaks. Intel is one of the players in this space that has collaborated with Microsoft to develop silicon for this purpose. With growing interest in research and development in this field, these HE methods will become more commonplace and advanced.

Removing toxicity and biases is the aim of Ethical AI or Responsible AI, but development is at a nascent stage. Google and Accenture have announced Responsible AI frameworks. The European Commission's white paper on AI focuses on trust, and the formation of the UN AI ethics committee is an excellent initiative.

The evolution of AI is happening at a breakneck pace, and 2021 will be no different.

Som Pal Choudhury is a Partner at BIF. Co-authored with Arjun Nair, an intern at BIF and a junior at Brown University.

AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications – MedTech Intelligence

An increasing number of medical devices incorporate artificial intelligence (AI) capabilities to support therapeutic and diagnostic applications. In spite of the risks connected with this innovative technology, the applicable regulatory framework does not specify any requirements for this class of medical devices. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications for medical devices on how to demonstrate conformity with the essential requirements.

The term artificial intelligence (AI) describes the capability of algorithms to take over tasks and decisions by mimicking human intelligence.1 Many experts believe that machine learning, a subset of artificial intelligence, will play a significant role in the medtech sector.2,3 Machine learning is the term used to describe algorithms capable of learning directly from a large volume of training data. The algorithm builds a model based on training data and applies the experience it has gained from the training to make predictions and decisions on new, unknown data. Artificial neural networks are a subset of machine learning methods, which have evolved from the idea of simulating the human brain.22 Neural networks are information-processing systems used for machine learning and comprise multiple layers of neurons. Between the input layer, which receives information, and the output layer, there are numerous hidden layers of neurons. In simple terms, neural networks comprise neurons, also known as nodes, which receive external information or information from other connected nodes, modify this information, and pass it on, either to the next neuron layer or to the output layer as the final result.5 Deep learning is a variation of artificial neural networks that consists of multiple hidden neural network layers between the input and output layers. The inner layers are designed to extract higher-level features from the raw external data.
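The layered structure described above can be sketched in a few lines of Python with NumPy: an input layer of raw values, one hidden layer of neurons and an output layer, with each layer transforming the information it receives before passing it on. The layer sizes and random weights are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)          # non-linearity applied inside the hidden neurons

# A tiny network: 4 input values -> 8 hidden neurons -> 2 output scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    hidden = relu(x @ W1 + b1)         # hidden layer: receive, modify, pass on
    return hidden @ W2 + b2            # output layer: the final result

x = rng.normal(size=4)                 # one example with 4 input features
print(forward(x))
```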

The role of artificial intelligence and machine learning in the health sector was already the topic of debate well before the coronavirus pandemic.6 As shown in an excerpt from PubMed, several approaches for AI in medical devices have already been implemented in the past (see Figure 1). However, the number of publications on artificial intelligence and medical devices has grown exponentially since roughly 2005.

Artificial intelligence in the medtech sector is at the beginning of a growth phase. However, expectations for this technology are already high, and consequently prospects for the digital future of the medical sector are auspicious. In the future, artificial intelligence may be able to support health professionals in critical tasks, controlling and automating complex processes. This will enable diagnosis, therapy and care to be optimally aligned to patients' individual needs, thereby increasing treatment efficiency, which in turn will ensure an effective and affordable healthcare sector in the future.4

However, some AI advocates tend to overlook some of the obstacles and risks encountered when artificial intelligence is implemented in clinical practice. This is particularly true for the upcoming regulation of this innovative technology. The risks of incorporating artificial intelligence in medical devices include faulty or manipulated training data, attacks on AI such as adversarial attacks, violation of privacy and lack of trust in technology. In spite of these technology-related risks, the applicable standards and regulatory frameworks do not include any specific requirements for the use of artificial intelligence in medical devices. After years of negotiations in the European Parliament, Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices entered into force on May 25, 2017. In contrast to Directives, EU Regulations enter directly into force in the EU Member States and do not have to be transferred into national law. The new regulations impose strict demands on medical device manufacturers and the Notified Bodies, which manufacturers must involve in the certification process of medical devices and in-vitro diagnostic medical devices (excluding class I medical devices and nonsterile class A in-vitro diagnostic medical devices, for which the manufacturer's self-declaration will be sufficient).

Annex I to both the EU Regulation on medical devices (MDR) and the EU Regulation on in vitro diagnostic medical devices (IVDR) defines general safety and performance requirements for medical devices and in-vitro diagnostics. However, these general requirements do not address the specific requirements related to artificial intelligence. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications on how to demonstrate conformity with the general requirements. To place a medical device on the European market, manufacturers must meet various criteria, including compliance with the essential requirements and completion of the appropriate conformity assessment procedure. By complying with the requirements, manufacturers ensure that their medical devices fulfill the high levels of safety and health protection required by the respective regulations.

To ensure the safety and performance of artificial intelligence in medical devices and in-vitro diagnostics, certain minimum requirements must be fulfilled. However, the above regulations define only general requirements for software. According to the general safety and performance requirements, software must be developed and manufactured in keeping with the state of the art. Factors to be taken into account include the software lifecycle process and risk management. Beyond the above, repeatability, reliability and performance in line with the intended use of the medical device must be ensured. This implicitly requires artificial intelligence to be repeatable, performant, reliable and predictable. However, this is only possible with a verified and validated model. Due to the absence of relevant regulatory requirements and standards, manufacturers and Notified Bodies must each determine the state of the art for developing and testing artificial intelligence in medical devices. During the development, assessment and testing of AI, fundamental differences between artificial intelligence (particularly machine learning) and conventional software algorithms become apparent.

Towards the end of 2019, and thus just weeks before the World Health Organization's (WHO) warning of an epidemic in China, a Canadian company (BlueDot) specializing in AI-based monitoring of the spread of infectious diseases alerted its customers to the same risk. To achieve this, the company's AI combed through news reports and databases of animal and plant diseases. By accessing global flight ticketing data, the AI system correctly forecast the spread of the virus in the days after it emerged. This example shows the high level of performance that can already be achieved with artificial intelligence today.7 However, it also reveals one of the fundamental problems encountered with artificial intelligence: Despite the distribution of information about the outbreak to various health organizations in different countries, international responses were few. One reason for this lack of response to the AI-based warning is the lack of trust in technology that we do not understand, which plays a particularly significant role in medical applications.

In clinical applications, artificial intelligence is predominantly used for diagnostic purposes. Analysis of medical images is the area where the development of AI models is most advanced. Artificial intelligence is successfully used in radiology, oncology, ophthalmology, dermatology and other medical disciplines.2 The advantages of using artificial intelligence in medical applications include the speed of data analysis and the capability of identifying patterns invisible to the human eye.

Take the diagnosis of osteoarthritis, for example. Although medical imaging enables healthcare professionals to identify osteoarthritis, this is generally at a late stage after the disease has already caused some cartilage breakdown. Using an artificial-intelligence system, a research team led by Dr. Shinjini Kundu analyzed magnetic resonance tomography (MRT) images. The team was able to predict osteoarthritis three years before the first symptoms manifested themselves.8 However, the team members were unable to explain how the AI system arrived at its diagnosis. In other words, the system was not explainable. The question now is whether patients will undergo treatment such as surgery, based on a diagnosis made by an AI system, which no doctor can either explain or confirm.

Further investigations revealed that the AI system identified diffusion of water into cartilage. It detected a symptom invisible to the human eye and, even more important, a pattern that had previously been unknown to science. This example again underlines the importance of trust in the decision of artificial intelligence, particularly in the medtech sector. Justification of decisions is one of the cornerstones of a doctor-patient (or AI-patient) relationship based on mutual trust. However, to do so the AI system must be explainable, understandable and transparent. Patients, doctors and other users will only trust in AI systems if their decisions can be explained and understood.

Many medical device manufacturers wonder why assessment and development of artificial intelligence must follow a different approach to that of conventional software. The reason is based on the principles of how artificial intelligence is developed and how it performs. Conventional software algorithms take an input variable X, process it using a defined algorithm and supply the result Y as the output variable (if X, then Y). The algorithm is programmed, and its correct function can be verified and validated. The requirements for software development, validation and verification are described in the two standards IEC 62304 and IEC 82304-1. However, there are fundamental differences between conventional software and artificial intelligence implementing a machine learning algorithm. Machine learning is based on using data to train a model without explicitly programming the data flow line by line. As described above, machine learning is trained using an automated appraisal of existing information (training data). Given this, both the development and conformity assessment of artificial intelligence necessitate different standards. The following sections provide a brief overview of the typical pitfalls.
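The difference can be illustrated with a deliberately tiny example: the first function below is an explicitly programmed rule (if X, then Y), while the second lets a model infer a comparable rule from labelled training data. The threshold, the data and the use of scikit-learn are assumptions made only for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Conventional software: the rule is written down explicitly and can be reviewed line by line.
def classify_by_rule(temperature_c):
    return "fever" if temperature_c >= 38.0 else "normal"

# Machine learning: the rule is never programmed; it is inferred from labelled training data.
temperatures = [[36.5], [37.0], [37.8], [38.2], [39.0], [40.1]]
labels = ["normal", "normal", "normal", "fever", "fever", "fever"]
model = DecisionTreeClassifier().fit(temperatures, labels)

print(classify_by_rule(38.5))          # "fever", by explicit rule
print(model.predict([[38.5]])[0])      # "fever", learned from the data
```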

A major disadvantage of artificial intelligence, in particular machine learning based on neural networks, is the complexity of the algorithms. This makes them highly non-transparent, hence their designation of black-box AI (see Figure 2). The complex nature of AI algorithms not only concerns their mathematical description but also, in the case of deep-learning algorithms, their high level of dimensionality and abstraction. For these classes of AI, the extent to which input information contributes to a specific decision is mostly impossible to determine. This is why AI is often referred to as black box AI. Can we trust the prediction of the AI system in such a case and, in a worst-case scenario, can we identify a failure of the system or a misdiagnosis?

A world-famous example of the result of a black-box AI was the match between AlphaGo, the artificial intelligence system made by DeepMind (Google), and the Go world champion, Lee Sedol. In the match, which was watched by an audience of 60 million including experts, move 37 showed the significance of these particular artificial intelligence characteristics. The experts described the move as a mistake, predicting that AlphaGo would lose the match since in their opinion the move made no sense at all. In fact, they went even further and said, "It's not a human move. I've never seen a human play this move."9

None of them understood the level of creativity behind AlphaGo's move, which proved to be critical for winning the match. While understanding the decision made by the artificial intelligence system would certainly not change the outcome of the match, it still shows the significance of the explainability and transparency of artificial intelligence, particularly in the medical field. AlphaGo could also have been wrong!

One example of AI with an intended medical use was the application of artificial intelligence for determining a patient's risk of pneumonia. This example shows the risk of black-box AI in the MedTech sector. The system in question surprisingly identified the high-risk patients as non-significant.10 Rich Caruana, one of the leading AI experts at Microsoft, who was also one of the developers of the system, advised against the use of the artificial intelligence he had developed: "I said no. I said we don't understand what it does inside. I said I was afraid."11

In this context, it is important to note that open or explainable artificial intelligence, also referred to as white box, is by no means inferior to black-box AI. While there are as yet no standard methods for opening the black box, there are promising approaches for ensuring the plausibility of the predictions made by AI models. Some approaches try to achieve explainability based on individual predictions on input data. Others, by contrast, try to limit the range of input pixels that impact the decisions of artificial intelligence.12

Medical devices and their manufacturers must comply with further regulatory requirements in addition to the Medical Device Regulation (MDR) and the In-vitro Diagnostics Regulation (IVDR). The EU's General Data Protection Regulation (GDPR), for instance, is of particular relevance for the explainability of artificial intelligence. It describes the rules that apply to the processing of personal data and is aimed at ensuring their protection. Article 110 of the Medical Device Regulation (MDR) explicitly requires measures to be taken to protect personal data, referencing the predecessor of the General Data Protection Regulation.

AI systems which influence decisions that might concern an individual person must comply with the requirements of Articles 13, 22 and 35 of the GDPR.

"Where personal data are collected, the controller shall provide [...] the following information: [...] the existence of automated decision-making and, at least in those cases, meaningful information about the logic involved."13

In simple terms, this means that patients who are affected by automated decision-making must be able to understand this decision and have the possibility to take legal action against it. However, this is precisely the type of understanding which is not possible in the case of black-box AI. Is a medical product implemented as black-box AI eligible for certification as a medical device? The exact interpretation of the requirements specified in the General Data Protection Regulation is currently the subject of legal debate.14

The Medical Device Regulation places manufacturers under the obligation to ensure the safety of medical devices. Among other specifications, Annex I to the regulation includes requirements concerning the repeatability, reliability and performance of medical devices (both for stand-alone software and for software embedded in a medical device):

Devices that incorporate electronic programmable systems, including software, shall be designed to ensure repeatability, reliability and performance in line with their intended use. (MDR Annex I, 17.1)15

Compliance with general safety and performance requirements can be demonstrated by utilizing harmonized standards. Adherence to a harmonized standard leads to the assumption of conformity, whereby the requirements of the regulation are deemed to be fulfilled. Manufacturers can thus validate artificial intelligence models in accordance with the ISO 13485:2016 standard, which, among other requirements, describes the processes for the validation of design and development in clause 7.3.7.

For machine learning, two independent sets of data must be considered. In the first step, one set of data is needed to train the AI model. Subsequently, another set of data is necessary to validate the model. Validation of the model should use independent data; it can also be performed by cross-validation, in the sense of the combined use of both data sets. However, it must be noted that AI models can only be validated using an independent data set. Now, which ratio is recommended for the two sets of data? This is not an easy question to answer without more detailed information about the characteristics of the AI model. A look at the published literature (state of the art) recommends a ratio of approximately 80% training data to approximately 20% validation data. However, the ratio used depends on many factors and is not set in stone. The notified bodies will continue to monitor the state of the art in this area and, within the scope of conformity assessment, also request the reasons underlying the ratio used.
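As a brief illustration of the hold-out split and of cross-validation, here is a sketch using scikit-learn on a public toy dataset; the 80/20 ratio, the model and the number of folds are assumptions in line with the rule of thumb mentioned above, not fixed requirements.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Independent hold-out validation: roughly 80% training data, 20% validation data.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_val, y_val):.3f}")

# Cross-validation: the combined data are split repeatedly so that every record
# serves as validation data exactly once across the five folds.
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=5)
print(f"5-fold cross-validation accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```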

Another important question concerns the amount of data required. This is difficult to assess in general terms, as it depends on several factors.

Generally, the more data there is, the better the model can be assumed to perform. In their publication on speech recognition, Banko and Brill from Microsoft state, "After throwing more than one billion words within context at the problem, any algorithm starts to perform incredibly well."16

At the other end of the scale, i.e. the minimum number of data sets required, computational learning theory offers approaches for estimating the lower threshold. However, general answers to this question are not yet known and these approaches are based on ideal assumptions and only valid for simple algorithms.

Manufacturers need to look not only at the amount of data, but also at the statistical distribution of both sets of data. To prevent bias, the data used for training and validation must represent the statistical distribution of the environment of application. Training with data that are not representative will result in bias. The U.S. healthcare system, for example, uses artificial intelligence algorithms to identify and help patients with complex health needs. However, it soon became evident that where patients had the same level of health risks, the model suggested African-American patients less often for enrolment in these special high-risk care management programs.17 Studies carried out by Obermeyer et al. showed the cause for this to be racial bias in the training data. Bias in training data not only involves ethical and moral aspects that need to be considered by manufacturers: it can also affect the safety and performance of a medical device. Bias in training data could, for example, result in certain indications going undetected on fair skin.

Many deep learning models rely on a supervised learning approach, in which AI models are trained using labelled data. In cases involving labelled data, the bottleneck is not the number of data, but the rate and accuracy at which data are labeled. This renders labeling a critical process in model development. At the same time, data labelling is error-prone and frequently subjective, as it is mostly done by humans. Humans also tend to make mistakes in repetitive tasks (such as labelling thousands of images).

Labeling of large data volumes and selection of suitable identifiers is a time- and cost-intensive process. In many cases, only a very small portion of the data is processed manually. These data are used to train an AI system. Subsequently, the AI system is instructed to label the remaining data itself, a process that is not always error-free, which in turn means that errors will be reproduced.7 Nevertheless, the performance of artificial intelligence combined with machine learning very much depends on data quality. This is where the accepted principle of "garbage in, garbage out" becomes evident. If a model is trained using data of inferior quality, the developer will obtain a model of correspondingly inferior quality.
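That self-labelling loop is often called pseudo-labelling or self-training; the sketch below shows it on toy data with scikit-learn. The confidence threshold is an arbitrary assumption, and any mistakes the model makes when labelling are indeed carried into the enlarged training set, exactly as described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

# Only a small portion of the data has been labelled by humans.
labelled = np.arange(50)
unlabelled = np.arange(50, 1000)

model = LogisticRegression(max_iter=1000).fit(X[labelled], y[labelled])

# The model labels the remaining data itself; only confident predictions are kept,
# but any mistakes it makes here are carried into the enlarged training set.
probabilities = model.predict_proba(X[unlabelled])
confident = probabilities.max(axis=1) > 0.9
pseudo_labels = model.predict(X[unlabelled])[confident]

X_augmented = np.vstack([X[labelled], X[unlabelled][confident]])
y_augmented = np.concatenate([y[labelled], pseudo_labels])
model = LogisticRegression(max_iter=1000).fit(X_augmented, y_augmented)

print(f"{confident.sum()} of {unlabelled.size} samples were pseudo-labelled")
```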

Other properties of artificial intelligence that manufacturers need to take into account are adversarial learning problems and instabilities of deep learning algorithms. Generally, the assumption in most machine learning algorithms is that training and test data are governed by identical distributions. However, this statistical assumption can be influenced by an adversary (i.e., an attacker that attempts to fool the model by providing deceptive input). Such attackers aim to destabilize the model and cause the AI to make false predictions. The introduction of certain adversarial patterns to the input data, invisible to the human eye, causes major errors of detection by the AI system. In 2020, for example, the security company McAfee demonstrated their ability to trick Tesla's Mobileye EyeQ3 AI system into driving 80 km/h over the speed limit, simply by adding a 5 cm strip of black tape to a speed limit sign.24
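The fast gradient sign method (FGSM) is the textbook example of such a perturbation. The sketch below, using PyTorch on a toy linear classifier, nudges every input value by a small amount in the direction that increases the model's loss; the model, input and epsilon are illustrative assumptions, not taken from the incidents described above.

```python
import torch

torch.manual_seed(0)

# Toy "image" classifier: one linear layer over 100 pixel values, two classes.
model = torch.nn.Linear(100, 2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

x = torch.rand(1, 100)
true_label = torch.tensor([0])

# Briefly fit the model so that it classifies the clean input confidently.
for _ in range(20):
    optimizer.zero_grad()
    loss_fn(model(x), true_label).backward()
    optimizer.step()

# FGSM: nudge every pixel a small step in the direction that increases the loss.
x_attack = x.clone().requires_grad_(True)
loss_fn(model(x_attack), true_label).backward()
epsilon = 0.2
x_adv = (x_attack + epsilon * x_attack.grad.sign()).clamp(0, 1).detach()

p_clean = torch.softmax(model(x), dim=1)[0, 0].item()
p_adv = torch.softmax(model(x_adv), dim=1)[0, 0].item()
print(f"P(correct class) on clean input:     {p_clean:.3f}")
print(f"P(correct class) on perturbed input: {p_adv:.3f}")   # noticeably lower
```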

AI methods used in the reconstruction of MRT and CT images have also proved unstable in practice time and again. A study investigating six of the most common AI methods used in the reconstruction of MRT and CT images proved these methods to be highly unstable. Even minor changes in the input images, invisible to the human eye, result in completely distorted reconstructed images.18 The distorted images included artifacts such as removal of tissue structures, which might result in misdiagnosis. Such an attack may cause artificial intelligence to reconstruct a tumor at a location where there is none in reality or even remove cancerous tissue from the real image. These artifacts are not present when manipulated images are reconstructed using conventional algorithms.18

Another vulnerability of artificial intelligence concerns image-scaling attacks. This vulnerability has been known since as long ago as 2019.19 Image-scaling attacks enable the attacker to manipulate the input data in such a way that artificial intelligence models with machine learning and image scaling can be brought under the attacker's control. Xiao et al., for example, succeeded in manipulating the image-scaling routines of the well-known machine-learning library TensorFlow in such a manner that attackers could even replace complete images.19 An example of such an image-scaling attack is shown in Figure 3. In this scaling operation, the image of a cat is replaced by an image of a dog. Image-scaling attacks are particularly critical as they can both distort the training of artificial intelligence and influence the decisions of artificial intelligence trained using manipulated images.

Adversarial attacks and stability issues pose significant threats to the safety and performance of medical devices incorporating artificial intelligence. Especially concerning is the fact that the conditions of when and where the attacks could occur are difficult to predict. Furthermore, the response of AI to adversarial attacks is difficult to specify. If, for instance, a conventional surgical robot is attacked, it can still rely on other sensors. However, changing the policy of the AI in a surgical robot might lead to unpredictable behavior and thereby to catastrophic (from a human perspective) responses of the system. Methods to address the above vulnerabilities and reduce susceptibility to errors do exist. For example, the models can be subjected to safety training, making them more resilient to the vulnerabilities. Defense techniques such as adversarial training and defensive distillation have already been practiced successfully in image reconstruction algorithms.21 Further methods include human-in-the-loop approaches, as human performance is strongly robust against adversarial attacks targeting AI systems. However, this approach is limited to instances where humans can be directly involved.25

Although many medical devices using artificial intelligence have already been approved, the regulatory pathways in the medtech sector are still open. At present no laws, common specifications or harmonized standards exist to regulate AI application in medical devices. In contrast to the EU authorities, the FDA published a discussion paper on a proposed regulatory framework for artificial intelligence in medical devices in 2019. The document is based on the principle of risk management, software-change management, guidance on the clinical evaluation of software and a best-practice approach to the software lifecycle.20 In 2021, the FDA published its action plan on furthering AI in medical devices. The action plan consists of five next steps, the foremost being to develop a regulatory framework explicitly for change control of AI, good machine learning practice, and new methods to evaluate algorithm bias and robustness.26

In 2020 the European Union also published a position paper on the regulation of artificial intelligence and medical devices. The EU is currently working on future regulation, with a first draft expected in 2021.

China's National Medical Products Administration (NMPA) published the "Technical Guiding Principles of Real-World Data for Clinical Evaluation of Medical Devices" guidance document. It specifies obligations concerning requirements analysis, data collection and processing, model definition, verification and validation, as well as post-market surveillance.

Japan's Ministry of Health, Labour and Welfare is working on a regional standard for artificial intelligence in medical devices. However, to date this standard is available in Japanese only. Key points of assessment are plasticity, the predictability of models, quality of data and degree of autonomy.27

In Germany, the Notified Bodies have developed their own guidance for artificial intelligence. The guidance document was prepared by the Interest Group of the Notified Bodies for Medical Devices in Germany (IG-NB) and is aimed at providing guidance to Notified Bodies, manufacturers and interested third parties. The guidance follows the principle that the safety of AI-based medical devices can only be achieved by means of a process-focused approach that covers all relevant processes throughout the whole life cycle of a medical device. Consequently, the guidance does not define specific requirements for products, but for processes.

The World Health Organization, too, is currently working on a guideline addressing artificial intelligence in health care.

Artificial intelligence is already used in the medtech sector, albeit currently somewhat sporadically. However, at the same time the number of AI algorithms certified as medical devices has increased significantly over the last years.28 Artificial intelligence is expected to play a significant role in all stages of patient care. According to the requirements defined in the Medical Device Regulation, any medical device, including those incorporating AI, must be designed in such a way as to ensure repeatability, reliability and performance according to its intended use. In the event of a fault condition (single fault condition), the manufacturer must implement measures to minimize unacceptable risks and reduction of the performance of the medical device (MDR Annex I, 17.1). However, this requires validation and verification of the AI model.

Many of the AI models used are black-box models. In other words, there is no transparency in how these models arrive at their decisions. This poses a problem where interpretability and trustworthiness of the systems are concerned. Without transparent and explainable AI predictions, the medical validity of a decision might be doubted. Some current errors of AI in pre-clinical applications might fuel doubts further. Explainable and approvable AI decisions are a prerequisite for the safe use of AI on actual patients. This is the only way to inspire trust and maintain it in the long term.

The General Data Protection Regulation demands a high level of protection of personal data. Its strict legal requirements also apply to processing of sensitive health data in the development or verification of artificial intelligence.

Adversarial attacks aim at influencing artificial intelligence, both during the training of the model and in the classification decision. These risks must be kept under control by taking suitable measures.

Impartiality and fairness are important, safety-relevant, moral and ethical aspects of artificial intelligence. To safeguard these aspects, experts must take steps to prevent bias when training the system.

Another important question concerns the responsibility and accountability of artificial intelligence. Medical errors made by human doctors can generally be traced back to the individuals, who can be held accountable if necessary. However, if artificial intelligence makes a mistake the lines of responsibility become blurred. For medical devices on the other hand, the question is straightforward. The legal manufacturer of the medical device incorporating artificial intelligence must ensure the safety and security of the medical device and assume liability for possible damage.

Regulation of artificial intelligence is likewise still at the beginning of development, involving various approaches. All major regulators around the globe have defined or are starting to define requirements for artificial intelligence in medical devices. A high level of safety in medical devices will only be possible with suitable measures in place to regulate and control artificial intelligence, but this must not impair the development of technical innovation.

Can we teach Artificial Intelligence to make moral judgements? – Innovation Origins

A question that preoccupies me as a moral philosopher is to what extent artificial intelligence (AI) is capable of making moral judgments. To address that question, of course, we first need to know how humans arrive at moral judgments. Unfortunately, no consensus on that exists. Moral psychologist Jonathan Haidt argues that our moral reasoning is guided in the first place by our intuition. Reason is a slave of the passions, as philosopher David Hume stated in the 18th century.

Haidt presented test subjects with a taboo scenario about a brother and sister who have sex with each other one time only. Possible objections are addressed in the scenario: the siblings use contraceptives (birth control pill and condom) and it happens with mutual consent. The majority intuitively disapproves of this scenario and then seeks arguments to support that intuition. If respondents are given more time to think about it and are also provided with substantiated arguments, then they are more likely to be okay with it. A calm conversation and the provision of arguments can make people change their gut instincts and their judgments. When there is an open conversation with mutual understanding and affection, people are more willing to change their minds.

Machine learning and deep learning are opening up opportunities for AI to develop a kind of moral intuition by providing data and letting algorithms search for patterns in that data. The word intuition is not really the right one, because AI always comes down to calculations. As in the AlphaGo case, you could confront an algorithm with millions of scenarios, in this instance about morality, then have it play them out (as a form of self-play) and learn from its mistakes. AI will find a pattern, for example about right and wrong, and can consequently develop a kind of intuition. It remains extremely important to look critically at how AI discovers patterns. After all, not every pattern is desirable, as AI could also develop preferences based on, for example, popularity.

But a good and convincing moral judgement goes beyond intuition. It is supported by high-quality arguments. If someone judges that a specific act is wrong, that same person must be able to substantiate why that is. Complete arbitrariness is avoided this way. It also makes it possible to gauge the extent to which the judgement is susceptible to prejudice, to name one thing. So, teaching AI to use intuition is not enough. AI will also have to learn to argue. Research has been going on in the legal domain for some time now into how AI can be used to assist lawyers in evaluating legal argumentation. In this case, it is mainly about modeling legal argumentation. In the Netherlands, philosophers are researching to what extent an argumentation machine is able to recognize fallacies. However, the research is still in its infancy.

The morally right thing to do, under any circumstances, is whatever there are the best reasons for doing, while giving equal weight to the interests of each individual who will be affected. Quite apart from the question of whether AI will ever be able to do this, no consensus exists on what those best reasons are. This certainly complicates the choice of which data we should use to train AI. The theory, and more specifically the definition of morality, that you adhere to and subsequently train AI with will determine the outcome, in this case the moral judgment. When you connect ethics and AI, you inevitably end up having to make choices that then determine the direction of that moral judgment. In short: for now, this question remains highly speculative.

About this column:

In a weekly column, alternately written by Eveline van Zeeland, Eugene Franken, Helen Kardan, Katleen Gabriels, Bert Overlack, Weijma, Bernd Maier-Leppla and Colinda de Beer, Innovation Origins tries to find out what the future will look like. These columnists, occasionally supplemented by guest bloggers, are all working in their own way on solutions to the problems of our time. So that tomorrow will be good. Here are all the previous articles.

Computing to win: Addressing the policy blind spot that threatens national AI ambitions – Atlantic Council

Servers run inside the Facebook New Albany Data Center on Thursday, February 6, 2020 in New Albany, Ohio. Photo via Joshua A. Bickel/Dispatch and Reuters.

Artificial intelligence (AI) is causing significant structural changes to global competition and economic growth. AI may generate trillions of dollars in new value over the next decade, but this value will not be easily captured or evenly distributed across nations. Much of it will depend on how governments invest in the underlying computational infrastructure that makes AI possible.

Yet early signs point to a blind spota lack of understanding, measurement, and planning. It comes in the form of compute divides that throttle innovation across academia, startups, and industry. Policymakers must make AI compute a core component of their strategic planning in order to fully realize the anticipated economic windfall.

AI compute refers to a specialized stack of hardware and software optimized for AI applications or workloads. This compute can be located and accessed in different ways (public clouds, private data centers, individual workstations, etc.) and leveraged to solve complex problems across domains from astrophysics to e-commerce to autonomous vehicles.

Conventional information technology (IT) infrastructure is now widely available as a utility through public cloud service providers. This idea of infrastructure as a service includes computing for AI, so the public cloud is duly credited with democratizing access. At the same time, it has created a state of complacency that AI compute will be there when we need it. AI, however, is not the same as IT.

Today, when nations plan for AI, they gloss over AI compute, with policies focused almost exclusively on data and algorithms. No government leader can answer three fundamental questions: How much domestic AI compute capacity do we have? How does this compare to other nations? And do we have enough capacity to support our national AI ambitions? This lack of uniform data, definitions, and benchmarks leaves government leaders (and their scientific advisors) unequipped to formulate a comprehensive plan for AI compute investments.

Recognizing this blind spot, the Organisation for Economic Cooperation and Development (OECD) recently established a new task force to tackle the issue. "There's nothing that helps our member countries assess what [AI compute] they need and what they have, and so some of them are making large but not necessarily well-informed investments in AI compute," Karine Perset, head of the OECD AI Policy Observatory, told VentureBeat in January.

Measuring domestic AI compute capacity is a complex task compounded by a paucity of good data. While there are widely accepted standards for measuring the performance of individual AI systems, they use highly technical metrics and don't apply well to nations as a whole. There is also the nagging question of capacity versus effective use. What if a nation acquires sufficient AI compute capacity but lacks the skills and ecosystem to effectively use it? In this case, more public investment may not lead to more public benefit.

Nonetheless, more than sixty national AI strategies were formulated and published from June 2017 to December 2020. These plans average sixty-five pages and coalesce around a common set of topics including transforming legacy industries, expanding opportunities for AI education, advancing public-sector AI adoption, and promoting responsible or trustworthy AI principles. Many national plans are aspirational and not detailed enough to be operational.

In conducting a survey of forty national AI plans, and analyzing the scope and depth of content around AI compute, we found that on average national AI plans had 23,202 words while sections on AI compute averaged only 269 words. Put simply, 98.8 percent of the content of national AI strategies focuses on vision, and only 1.2 percent covers the infrastructure needed to execute this vision.

This would be the equivalent of a national transportation strategy that devotes less than 2 percent of its recommendations to roads, bridges, and highways. Just like transportation investments, AI compute capacity will be a crucial driver of the future wealth of nations. And yet, national AI plans generally exclude detailed recommendations on this topic. There are exceptions, of course. France, Norway, India, and Singapore had particularly comprehensive sections on AI infrastructure, some with detailed recommendations for AI compute requirements, performance, and system size.

Exponential growth in computational power has led to astonishing improvements in AI capabilities, but it also raises questions of inequality in compute distribution. Fairness, inclusion, and ethics are now center stage in AI policy discussions, and these apply to compute, even though it is often ignored or relegated to side workshops at major conferences. Unequal access to computational power as well as the environmental cost of training computationally complex models require greater attention.

Some of the more popular AI models are large neural networks that run on state-of-the-art machines. For example, AlphaGo Zero and GPT-3 require millions of dollars in AI compute. This has led to AI research being dominated and shaped by a few actors mostly affiliated with big tech companies or elite universities. Governments may have to step up and reduce the compute divide by developing national research clouds. Initiatives in Europe and China are underway to develop indigenous high-performance computing (HPC) technologies and related computing supply chains that reduce their dependence on foreign sources and promote technology sovereignty.

Governments in the United States, Europe, China, and Japan have been making substantial investments in the exascale race, a contest to develop supercomputers capable of one billion billion (10^18) calculations per second. The rationale behind investments in HPC is that the benefits are now going beyond scientific publications and prestige, as computing capabilities become a necessary instrument for scientific discovery and innovation. The machines are the engine for economic prosperity.

Understanding compute requirements is a non-trivial measurement challenge. At a micro level, it is important to scientifically understand the relative contribution of AI compute to driving AI progress on, for example, natural-language understanding, computer vision, and drug discovery. At a macro level, nations and companies need to assess compute requirements using a data-driven approach that accounts for future industrial pathways. Wouldn't it be eye-opening to benchmark how much compute power is being used at the company or national level and to help evaluate future compute needs?

In 2019, Stanford University organized a workshop with over one hundred interdisciplinary experts to better understand opportunities and challenges related to measurement in AI policy. The participants unanimously agreed that growth in computational power is leading to measurable improvements in AI capabilities, while also raising issues of the efficiency of AI models and the inherent inequality of compute distribution. Getting measurement right will be a stepping stone to addressing this blind spot in national AI policymaking. Going forward, we recommend that governments pivot to develop a national AI compute plan with detailed sections on capacity, effectiveness, and resilience.

The completeness of a national AI strategy forecasts that nation's ability to compete in the digital global economy. Few national AI strategies, however, reflect a robust understanding of domestic AI compute capacity, how to use it effectively, and how to structure it in a resilient manner. Nations need to take action by measuring and planning for the computational infrastructure needed to advance their AI ambitions. The future of their economies is at stake.

Saurabh Mishra is an economist and former researcher at Stanford University's Institute for Human-Centered Artificial Intelligence.

Keith Strier is the chair of the AI Compute Taskforce at the Organization for Economic Cooperation and Development, and vice president for Worldwide AI Initiatives at the NVIDIA corporation.

Is there intelligence in artificial intelligence? – Vaughan Today

This article was republished from The Conversation France.

Nearly ten years ago, in 2012, the scientific world marveled at the exploits of deep learning. Three years later, this technology allowed AlphaGo to defeat the champions of Go. Some were frightened: Elon Musk, Stephen Hawking and Bill Gates worried about the approaching end of humanity, soon to be supplanted by an AI that would slip out of control.

Wasn't that a bit of a stretch? That is exactly what the AI itself thinks. In an article it wrote in 2020 for The Guardian, GPT-3, a giant neural network with 175 billion parameters, explains:

"I'm here to convince you not to worry. Artificial intelligence will not destroy humans. Trust me."

At the same time, we know that the power of machines keeps increasing. Training a network like GPT-3 was literally out of the question just five years ago. It is impossible to know what its successors will be able to do in five, ten, or twenty years. If today's neural networks can replace dermatologists, why not replace all of us?

Let's step back and look at the question again.

We immediately think of skills that involve our intuition or creativity. No such luck: AI claims to challenge us in those areas as well. Proof of this is the fact that works generated by software have sold for high prices, some approaching half a million dollars. On the musical side, everyone will of course have their own opinion, but we can already recognize acceptable bluegrass or near-Rachmaninoff in the compositions of MuseNet, created, like GPT-3, by OpenAI.

Will we soon have to bow to the inevitable supremacy of AI? Before calling for rebellion, let's try to look at what we are dealing with. Artificial intelligence relies on several techniques, but its recent success is due to just one: neural networks, especially those of deep learning. Yet a neural network is nothing more than a machine that associates. The deep network that made headlines in 2012 associated images of horses, boats and mushrooms with the corresponding words. Hardly reason to hail it as a genius.

However, this associative mechanism has the somewhat miraculous property of being continuous. Present a horse the network has never seen and it recognizes it as a horse. Add noise to the image and it is not bothered. Why? Because the continuity of the process guarantees that if the input to the network changes a little, its output will change only a little as well. If you force the network, still hesitant, to settle on its best answer, that answer probably will not change: a horse remains a horse, even if it differs from the learned examples, even if the image is noisy.

Fine, but why do we call such associative behavior intelligent? The answer seems obvious: it can diagnose skin cancer, approve bank loans, keep a car on the road, detect pathologies in physiological signals, and so on. Thanks to their associative power, these networks acquire forms of expertise that take humans years of study to develop. And when one of these skills, such as writing a newspaper article, seems to resist them for a while, it is enough to feed the machine more examples, as happened with GPT-3, for it to begin producing convincing results.

Is that what it means to be intelligent? No. This kind of performance represents, at best, only a small facet of intelligence. What neural networks do resembles rote learning. It is not quite that, of course, since these networks fill in, by continuity, the gaps between the examples they have been shown; let us say it is almost rote. Human experts, whether doctors, pilots or Go players, often do nothing more when they decide reflexively, thanks to the sheer number of examples absorbed during their training. But humans have many other capabilities.

Neural networks, for instance, cannot learn arithmetic. The association between operations such as 32 + 73 and their results quickly reaches its limits. They can only reproduce the strategy of the dunce who tries to guess the answer and sometimes lands on the right one. Is arithmetic too hard? What about an elementary IQ test such as: continue the sequence 1223334444. Association by continuity does not help in seeing the structure, each integer n repeated n times, or in continuing with five 5s. Still too hard? Associative programs cannot even guess that an animal that died on Tuesday is not alive on Wednesday. Why? What are they missing?
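
The structure a human reader spots in that sequence is easy to write down explicitly. The following short sketch, added here for illustration and not taken from the original article, simply generates the sequence by the rule "write each integer n exactly n times":

public class SequenceSketch {
    public static void main(String[] args) {
        // Build the sequence 1223334444... by writing each integer n exactly n times.
        StringBuilder sequence = new StringBuilder();
        for (int n = 1; n <= 5; n++) {
            for (int i = 0; i < n; i++) {
                sequence.append(n);
            }
        }
        System.out.println(sequence); // prints 122333444455555
    }
}

Spelling the rule out like this is exactly the kind of explicit, structural step that association by continuity does not provide.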

Modeling in cognitive science has revealed the existence of several mechanisms, other than association by continuity, which are all components of human intelligence. Because their expertise is entirely precomputed, neural networks cannot reason over time to decide that a dead animal stays dead, or to understand the meaning of the sentence "he still hasn't died" and the strangeness of this other sentence: "he isn't always dead". Their mere predigestion of huge amounts of data does not allow them to spot new structures that are obvious to us, such as the groups of identical numbers in the sequence 1223334444. Their rote strategy is also blind to novel anomalies.

Detecting anomalies is an interesting case, because it is often how we gauge the intelligence of others. A neural network will not "see" that a nose is missing from a face. By continuity, it will keep recognizing the person, or perhaps confuse them with someone else. But it has no way of realizing that the absence of a nose in the middle of a face constitutes an anomaly.

There are many other cognitive mechanisms that remain out of reach of neural networks. Their automation is the subject of ongoing research. It involves operations carried out at the time of processing, whereas neural networks merely replay associations learned in advance.

With a decade of hindsight on deep learning, the informed public is beginning to regard neural networks as a form of super-automation rather than as intelligence. For example, the press recently reported on the astonishing performance of the DALL-E program, which produces creative images from verbal descriptions (for example, the images DALL-E imagines from the phrase "avocado-shaped armchair", on the OpenAI site). We now hear far more measured judgments than the alarmed reactions that followed the feat of AlphaGo: "It's quite impressive, but we must not forget that this is an artificial neural network, trained to accomplish a task; there is neither creativity nor any form of intelligence there." (Fabian Chauvier, France Inter, January 31, 2021)

No form of intelligence? Let's not overstate it, but let's remain clear-eyed about the huge gap that separates neural networks from what true AI would be.

Jean-Louis Dessalles wrote Very Artificial Intelligence, published by Odile Jacob (2019). He is a lecturer at Institut Mines-Télécom (IMT).

Read the original:
Is there intelligence in artificial intelligence? - Vaughan Today

One Thousand and One Talents: The Race for AI Dominance – Just Security

Introduction

The March 2016 defeat of Go world champion Lee Sedol by AlphaGo, the artificially intelligent algorithm built by Alphabet's DeepMind, will be remembered as a crucial turning point in the U.S.-China relationship. Oxford's Future of Humanity Institute branded the event China's "Sputnik moment": a moment of realization among its political and military leaders that artificial intelligence (A.I.) could be China's key to achieving global hegemony and dominance over the United States.

Since then, China's government has plowed ahead in developing its A.I. capabilities, with President Xi Jinping calling for his country to become a world leader[1] as fast as possible. Given that A.I. technologies could contribute an estimated $112 billion to the Chinese economy by 2030,[2] it is no surprise that Beijing believes A.I. to be a new focus of international competition.[3]

For the simple reason that China has focused on pragmatic, collaborative policies rather than restrictive, unilateral ones, it is currently on track to overtake the United States in the A.I. race. A.I.'s potentially devastating military applications make the A.I. race not just a struggle for economic dominance, but also a national security threat for whichever state loses the advantage.

While the Biden administration's readiness to boost research and development (R&D) spending and reverse the previous administration's assault on U.S. alliances is a promising first step towards combating this technological challenge, much more is necessary to ensure that the United States' technological capabilities do not fall behind those of rising powers.

To hold the line, the United States must leverage its historic alliances with Europe, Australia, and Southeast Asia to pool R&D funding into a multilateral A.I. research group. By creating incentives for scientists across the globe to collaborate on U.S.-led A.I. development, the United States can ensure that its allies and partners maintain a technological edge over China long into the future.

A.I. Scare

In addition to its commercial benefits, the rapid development of A.I. will also advantage China by adding a potentially devastating tool to its cyberwar arsenal. Concerns about weaponized A.I. have recently been raised by the United Kingdom's Government Communications Headquarters. Its 2020 report claimed that military A.I. would facilitate the rapid adaptation of malware and require a speed of response far greater than human decision-making allows, thus making it difficult for countries to defend against with current software. The conclusion that many experts have drawn is that the threat of A.I. cyberattacks necessitates the development of defensive A.I. by countries at risk of being targeted.

This threat is not merely theoretical; indeed, China has repeatedly indicated its intention to leverage new technologies like A.I. for offensive purposes. From a 2010 hack of Google by a group with ties to China's People's Liberation Army to a suspected cyberattack on Australian political institutions in 2020, it is clear that China will not shy away from utilizing the military applications of its emerging technologies.

In fact, China's 2017 Next Generation Artificial Intelligence Development Plan made this explicit. The report called for enhancing A.I. civil-military integration by establishing lines of communication and coordination between research institutions, private companies, and the military. Given that any future A.I. cyberattacks could be aimed at U.S. allies and interests, it is vital that the United States prioritize the development of its own A.I. capabilities to defend against novel techniques.

Unfortunately, research shows that the United States is somewhat unprepared for incoming attacks. While China funneled an estimated $70 billion into A.I. in 2020 (up from $12 billion in 2017), the United States government devoted only $4.9 billion, a quarter of what was allocated to the Chinese port of Tianjin for A.I. development alone. It was encouraging to see the Trump administration unveil its American A.I. Initiative in response to China's 2017 plan, albeit with a 19-month delay, yet this was only a first step in the right direction. A multilateral strategy is also necessary to prevent China from overtaking the United States in a crucial sector which has the potential to tip the global balance of power.

The Xi Doctrine

The forward-leaning policies initiated by President Xi Jinping have led to many advancements, accelerating China's A.I. program and imperiling U.S. national security in the process. One of Xi's most effective initiatives has been the so-called thousand talents plan, which offers high salaries and tempting benefits to scientists and researchers who agree to work with China on emerging technologies.[4] The plan has been enormously successful: a CIA official estimated that as many as 10,000 scientists from around the world have participated.

Its potential to grant China a strategic edge over the United States and its allies has also led the U.S. Senate to label the program a threat to American interests. Concerns center on the risk that U.S.-based scientists participating in the plan could transfer research achievements from American laboratories to Chinese ones, thereby accelerating Chinese A.I. development at the United States' expense.

Instead of mitigating the issue, Trump-era policy responses widened China's lead by focusing on tightening A.I. export restrictions. In an attempt to prevent the outflow of sensitive military technologies to China and other hostile states, the U.S. Department of Commerce imposed restrictions on exports of A.I. technologies. Far from giving the United States a competitive edge, the policy likely stymied A.I. investment by requiring businesses to obtain licenses, a requirement which lengthens the export process and imposes high compliance costs on struggling startups. Evidence of these policies' damaging effects came in 2017 when, for the first time ever, Chinese A.I. startups received a greater share of global venture funding than U.S. startups did.

The Washington Pact

In order to improve U.S. A.I. policy, it is vital that the Biden administration understand two points. First, greater R&D spending is necessary to ensure that the United States can keep up with China on A.I. For the most part, the new administration has embraced this: Biden's campaign reiterated former Google CEO Eric Schmidt's assertion that the United States must boost tech R&D because China is on track to surpass the U.S. in R&D. It even went on to claim that China's main reason for investing in new technologies was to overtake American technological primacy and dominate future industries.

Second, because American allies are themselves investing heavily in A.I., it is prudent to adopt multilateral solutions which leverage the United States' historic alliances, as opposed to unilateral "America first" responses. For instance, Germany's "A.I. Made in Germany" plan has allocated €3 billion to A.I. research over the next five years, while France's "A.I. for Humanity" initiative has injected €1.5 billion into the sector. To balance against China's advancements, the United States should take advantage of these alliances and ensure that global investments go into developing A.I. capabilities across the broader liberal democratic sphere.

This second necessity does not appear to have received as much attention from the Biden administration so far. Despite its general recommitment to multilateralism through rejoining the Paris Climate Accord, reprioritizing NATO, and calling for a Summit for Democracy, the Biden administration has largely overlooked the idea of multilateral cooperation on A.I. research.

To match the Chinese technological challenge, the United States must establish research initiatives alongside its historic allies which will benefit U.S. A.I. development. This will have the effect of protecting U.S. national security long into the future by guaranteeing that the United States retains the edge over China in crucial A.I. innovations.

At the center of this policy should be an upgraded equivalent of China's thousand talents scheme that would be run as a joint initiative between America and its allies. The determination of the European Union, the United Kingdom, Australia, and Japan to invest heavily in A.I., paired with their historic ties to the United States, suggests potential for large-scale multilateral research collaboration led by the United States.

The Biden administration should therefore propose the foundation of a multilateral research program, call it "One Thousand and One Talents", with the aim of attracting the best A.I. specialists from around the globe. Participating governments would funnel their annual A.I. budgets into the scheme in order to fund research projects with important military and commercial applications. The program would ensure that salaries are directly competitive with China's thousand talents program and that incentives are put in place to make the Western alternative more attractive than the Chinese one. As with NATO, U.S. leadership would be justified by its status as the scheme's main benefactor.

The emphasis on multilateralism as a response to U.S.-Chinese competition should come as no surprise. As Princeton professor John Ikenberry writes, the key thing for U.S. leaders to remember when dealing with China is that it may be possible for China to overtake the United States alone, but it is much less likely that China will ever manage to overtake the Western order. It is no different with A.I.

Conclusion

The new technological challenges facing America call for a far-sighted and judicious foreign policy worthy of the world's greatest superpower. While China may have the advantages of unrestricted state investment and well-planned incentive programs, it lacks alliances that run as deep as the NATO friendships the United States has long depended on. To overcome current Chinese advancements in A.I., the United States must unite with its partners around the world in order to increase the talent, funding, and skill available to it.

The proposed One Thousand and One Talents research scheme would boost the United States' competitiveness vis-à-vis China by pooling the resources of some of the wealthiest and most technologically advanced nations into U.S.-led A.I. development. Given the inevitability of China's rise, multilateral cooperation with like-minded democracies is the only way of ensuring that the U.S. does not face an existential security threat in the future.

The Biden administration must rise to the challenge by uniting with U.S. allies to compete with China on A.I. It is too risky to go it alone.

Editor's Note: An earlier version of this essay received an honorable mention in New America's "Reshaping U.S. Security Policy for the COVID Era" essay competition.

[1] Quoted in Strittmatter, Kai, We Have Been Harmonized: Life in China's Surveillance State, p. 165.

[2] Ibid., pp. 166-167.

[3] Quoted in ibid., pp. 166-167.

[4] Strittmatter, Kai, We Have Been Harmonized: Life in China's Surveillance State, p. 171.

Follow this link:
One Thousand and One Talents: The Race for AI Dominance - Just Security

PNYA Post Break Will Explore the Relationship Between Editors and Assistants – Creative Planet Network

In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine the way two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

By ArtisansPR Published: March 23, 2021

Free video conference slated for Thursday, March 25th at 4:00 p.m. EDT

NEW YORK CITY - A strong working relationship between the editor and her assistants is crucial to successfully completing films and television shows. In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine the way two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

Agnès Challe-Grandits, editor of the upcoming Freeform series Single Drunk Female, and her assistant, Tracy Nayer, will join Shelby Siegel, Emmy and ACE award winner for the HBO series The Jinx: The Life and Deaths of Robert Durst, and her assistant, JiYe Kim, to discuss collaboration, how they organize their projects and how editors and assistants support one another. The discussion will be moderated by Post Producer Claire Shanley.

The session is scheduled for Thursday, March 25th at 4:00pm EDT. Following the webinar, attendees will have an opportunity to join small, virtual breakout groups for discussion and networking.

Panelists

Agnès Grandits has decades of experience as a film and television editor. Her current project is Single Drunk Female, a new half-hour comedy for Freeform. Her previous television credits include P-Valley and Sweetbitter for STARZ, Divorce for HBO, Odd Mom Out for Bravo and The Breaks for VH1. She also worked for Showtime on The Affair and Nurse Jackie. In addition, she edited The Jim Gaffigan Show for TV Land, Gracepoint for Fox, an episode of the final season of Bored to Death for HBO, and 100 Centre Street, directed by Sidney Lumet for A&E. Her credits with HBO also include Sex and the City and The Wire.

Tracy Nayer has been an Assistant Editor for more than ten years and has been assisting Agnès Grandits for five. She began her career in editorial finishing at a large post-production studio.

Shelby Siegel is an Emmy award-winning film and television editor who has worked in New York for more than 20 years. Her credits include Andrew Jarecki's Capturing the Friedmans and All Good Things, Jonathan Caouette's Tarnation, and Gary Hustwit's Helvetica and Urbanized. She won Emmy and ACE awards for HBO's acclaimed six-part series The Jinx: The Life and Deaths of Robert Durst. Most recently, she edited episodes of Quantico (ABC), High Maintenance (HBO) and The Deuce (HBO). She began her career working under some of the industry's top directors, including Paul Haggis (In the Valley of Elah), Mike Nichols (Charlie Wilson's War), and Ang Lee on his Oscar-winning films Crouching Tiger, Hidden Dragon and Brokeback Mountain. She also worked on the critically acclaimed series The Wire.

JiYe Kim began her career in experimental films, working with Anita Thacher and Barbara Hammer. Her first credit as an assistant editor came on AlphaGo (2017). Her most recent credits include High Maintenance, The Deuce, Her Smell and Share.

Moderator

Claire Shanley is a Post Producer whose recent projects include The Plot Against America and The Deuce. Her background also includes post facility and technical management roles. She served as Managing Director at Sixteen19 and Technical Director at Broadway Video. She Co-Chairs the Board of Directors of the NYC LGBT Center and serves on the Advisory Board of NYWIFT (NY Women in Film & Television).

When: Thursday, March 25, 2021, 4:00pm EDT

Title: The E&A Team

REGISTER HERE

Sound recordings of past Post Break sessions are available here: https://www.postnewyork.org/page/PNYAPodcasts

Past Post Break sessions in video blog format are available here: https://www.postnewyork.org/blogpost/1859636/Post-Break

About Post New York Alliance (PNYA)

The Post New York Alliance (PNYA) is an association of film and television post-production facilities, labor unions and post professionals operating in New York State. The PNYA's objective is to create jobs by: 1) extending and improving the New York State Tax Incentive Program; 2) advancing the services the New York post-production industry provides; and 3) creating avenues for a diverse talent pool to enter the industry.

http://www.pnya.org

View original post here:
PNYA Post Break Will Explore the Relationship Between Editors and Assistants - Creative Planet Network

Diffblue’s First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers – StreetInsider.com


OXFORD, United Kingdom, March 22, 2021 (GLOBE NEWSWIRE) -- Diffblue, creators of the world's first AI-for-code solution that automates writing unit tests for Java, today announced that its free IntelliJ plugin, Diffblue Cover: Community Edition, is now available for creating unit tests for all of an organization's Java code, both open source and commercial.

Free for any individual user, the IntelliJ plugin is available here for immediate download. It supports IntelliJ versions 2020.02 and 2020.03. To date, Diffblue Cover: Community Edition has already automatically created nearly 150,000 Java unit tests!

Diffblue also offers a professional version for commercial customers who require premium support as well as indemnification and the ability to write tests for packages. In addition, Diffblue offers a CLI version of Diffblue Cover that is well suited to team collaboration.

Diffblue's pioneering technology, developed by researchers from the University of Oxford, is based on reinforcement learning, the same machine learning strategy that powered AlphaGo, the software program from Alphabet subsidiary DeepMind that beat the world champion player of Go.

Diffblue Cover automates the burdensome task of writing Java unit tests, a task that takes up as much as 20 percent of Java developers' time. Diffblue Cover creates Java tests 10X-100X faster than humans can, tests that are also easy for developers to understand, and it automatically maintains the tests as the code evolves, even on applications with tens of millions of lines of code. Most unit test generators create boilerplate code for tests rather than tests that compile and run. These tools guess the inputs that can be used as a starting point, but developers have to finish them to get functioning tests. Diffblue Cover is uniquely able to create complete, human-readable unit tests that are ready to run immediately.
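
For a sense of what a complete, ready-to-run Java unit test looks like (as opposed to boilerplate), here is a hand-written sketch; the Calculator class and the JUnit 5 test below are invented for illustration and are not actual Diffblue Cover output:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, invented for this illustration.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// A hand-written stand-in for the kind of complete, human-readable test described
// above; it compiles and runs as-is with JUnit 5 on the classpath.
class CalculatorTest {
    @Test
    void addReturnsSumOfOperands() {
        Calculator calculator = new Calculator();
        assertEquals(105, calculator.add(32, 73));
    }
}

A generator that only produced boilerplate would stop at the test method's skeleton and leave the inputs and the assertion for the developer to fill in.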

Diffblue Cover today supports Java, the most popular enterprise programming language in the Global 2000. The technology behind Diffblue Cover can also be extended to support other popular programming languages such as Python, JavaScript and C#.

About Diffblue
Diffblue is leading the automation of software creation through the power of AI. Founded by researchers from the University of Oxford, Diffblue Cover uses AI for code to write unit tests that help software teams and organizations efficiently improve their code coverage and quality and ship software faster, more frequently and with fewer defects. With customers including AWS and Goldman Sachs, Diffblue is venture-backed by Goldman Sachs and Oxford Sciences Innovation. Follow us on Twitter: @diffblueHQ

Editorial contact for Diffblue: Lonn Johnston, Flak42, lonn@flak42.com, +1.650.219.7764

Original post:
Diffblue's First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers - StreetInsider.com

Brinks Home Security Will Leverage AI to Drive Customer Experience – Security Sales & Integration

A partnership with startup OfferFit aims to unlock new insights into customer journey mapping with an AI-enabled, self-learning platform.

DALLAS - Brinks Home Security has embarked on what it terms an artificial intelligence (AI) transformation, in partnership with OfferFit, to innovate true 1-to-1 marketing personalization, according to an announcement.

Founded last year, OfferFit uses self-learning AI to personalize marketing offers down to the individual level. Self-learning AI allows companies to scale their marketing offers using real-time results driven by machine learning.

Self-learning AI, also called reinforcement learning, first came to national attention through DeepMind's AlphaGo program, which beat human Go champion Lee Sedol in 2016. While the technology has been used in academic research for years, commercial applications are just starting to be implemented.
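
The announcement does not describe OfferFit's algorithm, but the general idea behind this kind of self-learning offer selection can be sketched with an epsilon-greedy bandit, one of the simplest reinforcement learning techniques. Everything in the sketch below (class name, offer count, conversion rates) is an invented illustration, not OfferFit's or Brinks Home Security's implementation:

import java.util.Random;

// A generic epsilon-greedy bandit sketch: explore a random offer 10% of the time,
// otherwise exploit the offer with the best observed conversion rate so far.
public class OfferBanditSketch {
    private static final double EPSILON = 0.1;

    private final double[] rewardSums;
    private final int[] pulls;
    private final Random random = new Random();

    public OfferBanditSketch(int numOffers) {
        this.rewardSums = new double[numOffers];
        this.pulls = new int[numOffers];
    }

    // Choose which offer to present to the next customer.
    public int chooseOffer() {
        if (random.nextDouble() < EPSILON) {
            return random.nextInt(pulls.length); // explore
        }
        int best = 0;
        for (int i = 1; i < pulls.length; i++) {
            if (averageReward(i) > averageReward(best)) {
                best = i;
            }
        }
        return best; // exploit
    }

    // Record whether the customer converted (reward 1.0) or not (reward 0.0).
    public void recordOutcome(int offer, double reward) {
        rewardSums[offer] += reward;
        pulls[offer]++;
    }

    private double averageReward(int offer) {
        return pulls[offer] == 0 ? 0.0 : rewardSums[offer] / pulls[offer];
    }

    // Tiny simulation: the second offer converts more often, so the bandit
    // should learn to prefer it after enough interactions.
    public static void main(String[] args) {
        double[] trueConversionRates = {0.05, 0.15};
        OfferBanditSketch bandit = new OfferBanditSketch(trueConversionRates.length);
        Random world = new Random(42);
        for (int t = 0; t < 10_000; t++) {
            int offer = bandit.chooseOffer();
            double reward = world.nextDouble() < trueConversionRates[offer] ? 1.0 : 0.0;
            bandit.recordOutcome(offer, reward);
        }
        System.out.printf("Observed conversion rates: offer 0 = %.3f, offer 1 = %.3f%n",
                bandit.averageReward(0), bandit.averageReward(1));
    }
}

The pattern mirrors the pilot described below: early on, a learner like this behaves no better than a control because it is still exploring, and its advantage grows as conversion data accumulates.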

Brinks Home Security CEO William Niles approached OfferFit earlier this year about using the AI platform to test customer marketing initiatives, according to the announcement. The pilot program involved using OfferFit's proprietary AI to personalize offers for each customer in the sample set.

At first, the AI performed no better than the control. However, within two weeks, the AI had reached two times the performance of the control population. By the end of the third week, it had reached four times the result of the control group, the announcement states.

Brinks Home Security is now looking to expand use cases to other marketing and customer experience campaigns with the goal of providing customers with relevant, personalized offers and solutions.

"The companies that flourish in the next decade will be the leaders in AI adoption," Niles says. "Brinks Home Security is partnering with OfferFit because we are on a mission to have the best business intelligence and marketing personalization in the industry."

Personalization is a key component in creating customers for life. The consumer electronics industry, in particular, has a huge opportunity to leverage this type of machine learning to provide customers with more meaningful company interactions, not only at the point of sale but elsewhere in the customer lifecycle.

"Our goal is to create customers for life by providing a premium customer experience," says Jay Autrey, chief customer officer, Brinks Home Security. "To achieve that, we must give each customer exactly the products and services they need to be safe and comfortable in their home. OfferFit lets us reach true one-to-one personalization."

The Brinks Home Security test allowed OfferFit to see its AI adapting through a real-world case. Both companies see opportunities to expand the partnership and its impact on the customer lifecycle.

"We know that AI is the future of marketing personalization, and pilot programs like the one that Brinks Home Security just completed demonstrate the value that machine learning can have for a business and its customers," comments OfferFit CEO George Khachatryan.

Read more from the original source:
Brinks Home Security Will Leverage AI to Drive Customer Experience - Security Sales & Integration