
Use Of Artificial Intelligence Attracts Legislative And Regulatory Attention In The EU, US, And Israel – Technology – Worldwide – Mondaq News Alerts

30 April 2021

Pearl Cohen Zedek Latzer Baratz


The European Commission is proposing new legislative rules aimed at promoting excellence and trust in the field of Artificial Intelligence (AI). The new proposal of EU regulation lays down: (a) harmonized rules for the use of artificial intelligence systems in the EU; (b) prohibitions of certain particularly harmful AI practices; (c) specific requirements for high-risk AI systems and obligations for operators of such systems; (d) harmonized transparency rules for AI systems intended to interact with individuals, such as emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio, or video content; and (e) rules on market monitoring and surveillance.

The proposal's declared purpose is to lay down a balanced and proportionate regulatory approach, limited to the minimum requirements needed to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market.

Meanwhile, in the United States, the Federal Trade Commission has offered business guidance on AI and algorithms and how companies can manage the related consumer protection risks. The FTC emphasizes that the use of AI tools should be transparent, explainable, fair, and empirically sound while fostering accountability. The FTC says that the use of AI technology to make predictions, recommendations, or decisions has great potential to improve welfare and productivity. However, it also presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities.

In Israel, the Innovation Authority and the Ministry of Justice have published a call for public comments and proposals on regulatory restraints and possible regulation in the field of AI, with an emphasis on experimenting with and implementing AI systems, such as decision support systems with or without the involvement of human judgment.

The call seeks feedback from the general public on questions such as the nature of desirable AI regulation considering Israel's leading position as an R&D hub in the AI field; global regulatory models aimed at advancing the AI field; and regulatory gaps between Israel and other countries. Comments can be submitted by email until May 13, 2021.

CLICK HERE to read the European Commission's proposed regulation.

CLICK HERE to read the recent FTC guide for the use of AI and algorithms.

CLICK HERE to read the Israeli AI call for public comments (in Hebrew).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Europe Seeks to Tame Artificial Intelligence with the World's First Comprehensive Regulation – JD Supra

In what could be a harbinger of the future regulation of artificial intelligence (AI) in the United States, the European Commission published its recent proposal for regulation of AI systems. The proposal is part of the European Commission's larger European strategy for data, which seeks to defend and promote European values and rights in how we design, make, and deploy technology in the economy. To this end, the proposed regulation attempts to address the potential risks that AI systems pose to the health, safety, and fundamental rights of Europeans.

Under the proposed regulation, AI systems presenting the least risk would be subject to minimal disclosure requirements, while at the other end of the spectrum AI systems that exploit human vulnerabilities and government-administered biometric surveillance systems are prohibited outright except under certain circumstances. In the middle, high-risk AI systems would be subject to detailed compliance reviews. In many cases, such high-risk AI system reviews will be in addition to regulatory reviews that apply under existing EU product regulations (e.g., the EU already requires reviews of the safety and marketing of toys and radio frequency devices such as smart phones, Internet of Things devices, and radios).

Applicability

The proposed AI regulation applies to all providers that market in the EU or put AI systems into service in the EU as well as users of AI systems in the EU. This scope includes governmental authorities located in the EU. The proposed regulation also applies to providers and users of AI systems whose output is used within the EU, even if the producer or user is located outside of the EU. If the proposed AI regulation becomes law, the enterprises that would be most significantly affected by the regulation are those that provide high-risk AI systems not currently subject to detailed compliance reviews under existing EU product regulations, but that would be under the AI regulation.

Scope of AI Covered by the AI Regulation

The term AI system is defined broadly as software that uses any of several identified approaches to generate outputs for a set of human-defined objectives. These approaches cover far more than artificial neural networks and other technologies traditionally viewed by many as AI. In fact, the identified approaches cover many types of software that few would likely consider AI, such as statistical approaches and search and optimization methods. Under this definition, the AI regulation would seemingly cover the day-to-day tools of nearly every e-commerce platform, social media platform, advertiser, and other businesses that rely on such commonplace tools to operate.

This apparent breadth can be assessed in two ways. First, this definition may be intended as a placeholder that will be further refined after the public release. There is undoubtedly no perfect definition for AI system, and by releasing the AI regulation in its current form, lawmakers and interested parties can alter the scope of the definition following public commentary and additional analysis. Second, most AI systems inadvertently caught in the net of this broad definition would likely not fall into the high-risk category of AI systems. In other words, these systems generally do not negatively affect the health and safety or fundamental rights of Europeans, and would only be subject to disclosure obligations similar to the data privacy regulations already applicable to most such systems.

Prohibited AI Systems

The proposed regulation prohibits uses of AI systems for purposes that the EU considers to be unjustifiably harmful. Several categories are directed at private sector actors, including prohibitions on the use of so-called dark patterns through subliminal techniques beyond a person's consciousness, or the exploitation of age, physical or mental vulnerabilities to manipulate behavior in a way that causes physical or psychological harm.

The remaining two areas of prohibition are focused primarily on governmental actions. First, the proposed regulation would prohibit use of AI systems by public authorities to develop social credit systems for determining a person's trustworthiness. Notably, this prohibition has carveouts, as such systems are only prohibited if they result in a detrimental or unfavorable treatment, and even then only if unjustified, disproportionate, or disconnected from the content of the data gathered. Second, indiscriminate surveillance practices by law enforcement that use biometric identification are prohibited in public spaces except in certain exigent circumstances, and with appropriate safeguards on use. These restrictions reflect the EU's larger concerns regarding government overreach in the tracking of its citizens. Military uses are outside the scope of the AI regulation, so this prohibition is essentially limited to law enforcement and civilian government actors.

High-Risk AI Systems

High-risk AI systems receive the most attention in the AI regulation. These are systems that, according to the memorandum accompanying the regulation, pose a significant risk to the health and safety or fundamental rights of persons. This boils down to AI systems that (1) are a regulated product or are used as a safety component for a regulated product like toys, radio equipment, machinery, elevators, automobiles, and aviation, or (2) fall into one of several categories: biometric identification, management of critical infrastructure, education and training, human resources and access to employment, law enforcement, administration of justice and democratic processes, migration and border control management, and systems for determining access to public benefits. The regulation contemplates this latter category evolving over time to include other products and services, some of which may face little product regulation at present. Enterprises that provide these products may be venturing into an unfamiliar and evolving regulatory space.

High-risk AI systems would be subject to extensive requirements, requiring companies to develop new compliance and monitoring procedures and to make changes to products on both the front end and the back end.

Transparency Requirements

The regulation would impose transparency and disclosure requirements for certain AI systems regardless of risk. Any AI system that interacts with humans must include disclosures to the user that they are interacting with an AI system. The AI regulation provides no further details on this requirement, so a simple notice that an AI system is being used would presumably satisfy this regulation. Most AI systems (as defined in the regulation) would fall outside the prohibited and high-risk categories, and so would only be subject to this disclosure obligation. For that reason, while the broad definition of AI system captures much more than traditional artificial intelligence techniques, most enterprises will feel minimal impact from being subject to these regulations.

Penalties

The proposed regulation provides for tiered penalties depending on the nature of the violation. Prohibited uses of AI systems (subliminal manipulation, exploitation of vulnerabilities, and development of social credit systems) and prohibited development, testing, and data use practices could result in fines of the higher of either 30,000,000 EUR or 6% of a company's total worldwide annual revenue. Violation of any other requirements or obligations of the proposed regulation could result in fines of the higher of either 20,000,000 EUR or 4% of a company's total worldwide annual revenue. Supplying incorrect, incomplete, or misleading information to certification bodies or national authorities could result in fines of the higher of either 10,000,000 EUR or 2% of a company's total worldwide annual revenue.
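As a rough illustration, each tier works the same way: the fine is the higher of a flat amount or a percentage of worldwide annual revenue. The function and tier names below are hypothetical labels, not terms from the regulation:

```python
def max_fine_eur(tier: str, worldwide_annual_revenue_eur: float) -> float:
    """Return the maximum fine for a tier: the higher of a flat amount
    or a percentage of the company's total worldwide annual revenue."""
    tiers = {
        "prohibited_use": (30_000_000, 0.06),   # prohibited AI practices
        "other_violation": (20_000_000, 0.04),  # other requirements/obligations
        "misleading_info": (10_000_000, 0.02),  # bad info to authorities
    }
    flat, pct = tiers[tier]
    return max(flat, pct * worldwide_annual_revenue_eur)

# For a company with 1 billion EUR revenue, 6% (60M) exceeds the 30M floor:
print(max_fine_eur("prohibited_use", 1_000_000_000))  # 60000000.0
```

Note that the flat amount acts as a floor: for smaller companies (here, revenue below 500 million EUR in the top tier), the flat amount controls.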

Notably, EU government institutions are also subject to fines, with penalties up to 500,000 EUR for engaging in prohibited practices that would result in the highest fines had the violation been committed by a private actor, and fines for all other violations up to 250,000 EUR.

Prospects for Becoming Law

The proposed regulation remains subject to amendment and approval by the European Parliament and potentially the European Council, a process which can take several years. During this long legislative journey, components of the regulation could change significantly, and it may not even become law.

Key Takeaways for U.S. Companies Developing AI Systems

Compliance With Current Laws

Although the proposed AI regulation would mark the most comprehensive regulation of AI to date, stakeholders should be mindful that current U.S. and EU laws already govern some of the conduct it attributes to AI systems. For example, U.S. federal law prohibits unlawful discrimination on the basis of a protected class in numerous scenarios, such as in employment, the provision of public accommodations, and medical treatment. Uses of AI systems that result in unlawful discrimination in these arenas already pose significant legal risk. Similarly, AI systems that affect public safety or are used in an unfair or deceptive manner could be regulated through existing consumer protection laws.

Apart from such generally applicable laws, U.S. laws regulating AI are limited in scope, and focus on disclosures related to AI systems interacting with people or are limited to providing guidance under current law in an industry-specific manner, such as with autonomous vehicles. There is also a movement towards enhanced transparency and disclosure obligations for users when their personal data is processed by AI systems, as discussed further below.

Implications for Laws in the United States

To date, no state or federal laws specifically targeting AI systems have been successfully enacted into law. If the proposed EU AI regulation becomes law, it will undoubtedly influence the development of AI laws in Congress and state legislatures, and potentially globally. This is a trend we saw with the EU's General Data Protection Regulation (GDPR), which has shaped new data privacy laws in California, Virginia, Washington, and several bills before Congress, as well as laws in other countries.

U.S. legislators have so far proposed bills that would regulate AI systems in a specific manner, rather than comprehensively as the EU AI regulation purports to do. In the United States, algorithmic accountability legislation attempts to address concerns about high-risk AI systems similar to those articulated in the EU through self-administered impact assessments and required disclosures, but lacks the EU proposal's outright prohibition on certain uses of AI systems and its nuanced analysis of AI systems used by government actors. Other bills would solely regulate government procurement and use of AI systems, for example, California AB-13 and Washington SB-5116, leaving industry free to develop AI systems for private, nongovernmental use. Upcoming privacy laws such as the California Privacy Rights Act (CPRA) and the Virginia Consumer Data Protection Act (CDPA), both effective January 1, 2023, do not attempt to comprehensively regulate AI, instead focusing on disclosure requirements and data subject rights related to profiling and automated decision-making.

Conclusion

Ultimately, the AI regulation (in its current form) will have minimal impact on many enterprises unless they are developing systems in the high-risk category that are not currently regulated products. But some stakeholders may be surprised by, and unsatisfied with, the fact that the draft legislation puts relatively few additional restrictions on purely private-sector AI systems that are not already subject to regulation. The drafters presumably did so in order not to overly burden private-sector activities. But it is yet to be seen whether any enacted form of the AI regulation would strike that balance in the same way.


Using Artificial Intelligence Tools to Run Proactive Health Check Investigations – insideBIGDATA

In the legal world, and in particular the world of electronic discovery, artificial intelligence (AI) has been around for more than a decade. It is no longer unusual or controversial for organizations to use AI technologies in litigation, especially where large or complex data sets are involved. Legal teams now routinely turn to AI to defensibly accelerate the process of identifying documents likely to be responsive to requests for evidence.

Innovations like technology assisted review (TAR), for example, rely heavily on machine learning and natural language processing to make connections and identify patterns within a body of data in a matter of seconds. This is work that would take even the most qualified human reviewers many, many hours to do manually, and with less accuracy.

Apart from sheer computing power, one of the most useful features of AI technology like machine learning is its ability to quickly learn and continuously improve the accuracy of its outputs with the essentially passive assistance of human reviewers. In continuous active learning (CAL), now a feature of leading eDiscovery platforms, even the process of training machines to find what you're looking for is performed algorithmically with no direction from human document reviewers beyond the coding or labeling they perform in the process of manual review. This is a remarkably efficient and cost-effective way to teach machines to identify responsive information, and it has enormous potential for other vital corporate functions. A notable example is compliance.
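Vendor implementations differ and are proprietary, but the loop described above can be sketched with standard tools: train a classifier on the documents reviewers have coded so far, surface the documents the model is least certain about, and feed the reviewers' new labels into the next training round. The sketch below is illustrative only, using synthetic data and a simulated reviewer (revealed ground-truth labels), and assumes scikit-learn is available:

```python
# Minimal continuous-active-learning sketch (uncertainty sampling).
# Not any vendor's actual implementation; the "reviewer" is simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a document collection with responsive/non-responsive labels
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(50))        # seed set already coded by reviewers
pool = list(range(50, 2000))     # unreviewed documents

model = LogisticRegression(max_iter=1000)
for _ in range(10):              # review rounds
    model.fit(X[labeled], y[labeled])
    # Probability of "responsive" for every unreviewed document
    probs = model.predict_proba(X[pool])[:, 1]
    # Surface the documents the model is least certain about (prob near 0.5)
    batch = np.argsort(np.abs(probs - 0.5))[:20]
    # "Reviewers" code the batch; pop in descending order so indices stay valid
    for i in sorted(batch, reverse=True):
        labeled.append(pool.pop(i))

print(f"{len(labeled)} documents reviewed out of {len(y)}")
```

In a real eDiscovery workflow, the batch-coding step is a human reviewer labeling documents in the platform; everything else is the algorithmic loop the article describes.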

The usefulness of active learning as a proactive compliance and information governance tool has only recently begun to be explored and appreciated. Across the corporate landscape, reactive approaches to potential problems hidden in data stores are far more common, and ultimately more costly and risky. Companies will typically wait until a whistleblower complains or an employee happens upon a potential problem, and then respond by launching an internal investigation.

AI technology can help your organization avoid this scenario by putting the same tools to proactive use.

This handful of examples represents only a small fraction of potential use cases for AI in compliance and governance activities. Every industry will present a different set of use cases. Nevertheless, enterprises in just about every vertical face daunting compliance challenges requiring the identification of data-based risks in vast repositories of structured and unstructured data. This data is generated by hundreds or thousands of applications operating within diverse and often poorly integrated systems. This is the kind of environment where AI shines.

If your organization is already using an eDiscovery platform with built-in AI tools, it might make sense to explore how you can use those tools for broader data management, information governance, and risk mitigation purposes. As you run regular health checks, you will get a better understanding of your data and your approach to data-based compliance will be more proactive and cost-effective. That means fewer investigations in response to potential issues and, in many cases, less litigation overall.

About the Author

David Carns is the Chief Revenue Officer of Casepoint. He joined Casepoint as a Director of Client Services in 2010 and rose through the ranks to Chief Strategy Officer until his most recent promotion in 2019. In addition to being a recovering attorney, David possesses a lifelong passion for technology and its advancements. His career has always found him at the intersection of technology and the legal field, given his intimate knowledge of both. Carns holds a Juris Doctor from The John Marshall Law School and a Bachelor's degree in Philosophy from DePauw University.


Artificial Intelligence Is Misreading Human Emotion – The Atlantic

At a remote outpost in the mountainous highlands of Papua New Guinea, a young American psychologist named Paul Ekman arrived with a collection of flash cards and a new theory. It was 1967, and Ekman had heard that the Fore people of Okapa were so isolated from the wider world that they would be his ideal test subjects.

Like Western researchers before him, Ekman had come to Papua New Guinea to extract data from the indigenous community. He was gathering evidence to bolster a controversial hypothesis: that all humans exhibit a small number of universal emotions, or affects, that are innate and the same all over the world. For more than half a century, this claim has remained contentious, disputed among psychologists, anthropologists, and technologists. Nonetheless, it became a seed for a growing market that will be worth an estimated $56 billion by 2024. This is the story of how affect recognition came to be part of the artificial-intelligence industry, and the problems that presents.

When Ekman arrived in the tropics of Okapa, he ran experiments to assess how the Fore recognized emotions. Because the Fore had minimal contact with Westerners and mass media, Ekman had theorized that their recognition and display of core expressions would prove that such expressions were universal. His method was simple. He would show them flash cards of facial expressions and see if they described the emotion as he did. In Ekman's own words, "All I was doing was showing funny pictures." But Ekman had no training in Fore history, language, culture, or politics. His attempts to conduct his flash-card experiments using translators floundered; he and his subjects were exhausted by the process, which he described as "like pulling teeth." Ekman left Papua New Guinea, frustrated by his first attempt at cross-cultural research on emotional expression. But this would be just the beginning.

Today affect-recognition tools can be found in national-security systems and at airports, in education and hiring start-ups, in software that purports to detect psychiatric illness and policing programs that claim to predict violence. The claim that a person's interior state can be accurately assessed by analyzing that person's face is premised on shaky evidence. A 2019 systematic review of the scientific literature on inferring emotions from facial movements, led by the psychologist and neuroscientist Lisa Feldman Barrett, found there is no reliable evidence that you can accurately predict someone's emotional state in this manner. "It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts," the study concludes. So why has the idea that there is a small set of universal emotions, readily interpreted from a person's face, become so accepted in the AI field?

To understand that requires tracing the complex history and incentives behind how these ideas developed, long before AI emotion-detection tools were built into the infrastructure of everyday life.

The idea of automated affect recognition is as compelling as it is lucrative. Technology companies have captured immense volumes of surface-level imagery of human expressions, including billions of Instagram selfies, Pinterest portraits, TikTok videos, and Flickr photos. Much like facial recognition, affect recognition has become part of the core infrastructure of many platforms, from the biggest tech companies to small start-ups.

Whereas facial recognition attempts to identify a particular individual, affect recognition aims to detect and classify emotions by analyzing any face. These systems already influence how people behave and how social institutions operate, despite a lack of substantial scientific evidence that they work. Automated affect-detection systems are now widely deployed, particularly in hiring. The AI hiring company HireVue, which can list Goldman Sachs, Intel, and Unilever among its clients, uses machine learning to infer people's suitability for a job. In 2014, the company launched its AI system to extract microexpressions, tone of voice, and other variables from video job interviews, which it used to compare job applicants against a company's top performers. After considerable criticism from scholars and civil-rights groups, it dropped facial analysis in 2021, but kept vocal tone as an assessment criterion. In January 2016, Apple acquired the start-up Emotient, which claimed to have produced software capable of detecting emotions from images of faces. Perhaps the largest of these start-ups is Affectiva, a company based in Boston that emerged from academic work done at MIT.

Affectiva has coded a variety of emotion-related applications, primarily using deep-learning techniques. These approaches include detecting distracted and risky drivers on roads and measuring consumers' emotional responses to advertising. The company has built what it calls the world's largest emotion database, made up of more than 10 million people's expressions from 87 countries. Its monumental collection of videos was hand-labeled by crowdworkers based primarily in Cairo.

Outside the start-up sector, AI giants such as Amazon, Microsoft, and IBM have all designed systems for emotion detection. Microsoft offers perceived emotion detection in its Face API, identifying anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise, while Amazon's Rekognition tool similarly proclaims that it can identify what it characterizes as all seven emotions and measure how these things change over time, such as constructing a timeline of the emotions of an actor.

Emotion-recognition systems share a similar set of blueprints and founding assumptions: that there is a small number of distinct and universal emotional categories, that we involuntarily reveal these emotions on our faces, and that they can be detected by machines. These articles of faith are so accepted in some fields that it can seem strange even to notice them, let alone question them. But if we look at how emotions came to be taxonomized, neatly ordered and labeled, we see that questions lie in wait at every corner.

Ekman's research began with a fortunate encounter with Silvan Tomkins, then an established psychologist at Princeton who had published the first volume of his magnum opus, Affect Imagery Consciousness, in 1962. Tomkins's work on affect had a huge influence on Ekman, who devoted much of his career to studying its implications. One aspect in particular played an outsize role: the idea that if affects are an innate set of evolutionary responses, they would be universal and thus recognizable across cultures. This desire for universality has an important bearing on why this theory is widely applied in AI emotion-recognition systems today. The theory could be applied everywhere, a simplification of complexity that was easily replicable at scale.

In the introduction to Affect Imagery Consciousness, Tomkins framed his theory of biologically based universal affects as one addressing an acute crisis of human sovereignty. He was challenging the development of behaviorism and psychoanalysis, two schools of thought that he believed treated consciousness as a mere by-product that was in service to other forces. He noted that human consciousness had been challenged and reduced again and again: first by Copernicus, who displaced man from the center of the universe; then by Darwin, whose theory of evolution shattered the idea that humans were created in the image of a Christian God; and most of all by Freud, who decentered human consciousness and reason as the driving forces behind our motivations. Tomkins continued, "The paradox of maximal control over nature and minimal control over human nature is in part a derivative of the neglect of the role of consciousness as a control mechanism." To put it simply, consciousness tells us little about why we feel and act the way we do. This is a crucial claim for all sorts of later applications of affect theory, which stress the inability of humans to recognize both the feeling and the expression of affects. If we as humans are incapable of truly detecting what we are feeling, then perhaps AI systems can do it for us?

Tomkins's theory of affects was his way to address the problem of human motivation. He argued that motivation was governed by two systems: affects and drives. Tomkins proposed that drives tend to be closely associated with immediate biological needs, such as hunger and thirst. They are instrumental; the pain of hunger can be remedied with food. But the primary system governing human motivation and behavior is that of affects, involving positive and negative feelings. Affects, which play the most important role in human motivation, amplify drive signals, but they are much more complex. For example, it is difficult to know the precise causes that lead a baby to cry, expressing the distress-anguish affect.

How can we know anything about a system in which the connections between cause and effect, stimulus and response, are so tenuous and uncertain? Tomkins proposed an answer: "The primary affects . . . seem to be innately related in a one-to-one fashion with an organ system which is extraordinarily visible," namely, the face. He found precedents for this emphasis on facial expression in two works published in the 19th century: Charles Darwin's The Expression of the Emotions in Man and Animals, from 1872, and an obscure volume by the French neurologist Guillaume-Benjamin-Amand Duchenne de Boulogne from 1862.

Tomkins assumed that the facial display of affects was a universal human trait. Affects, Tomkins believed, are "sets of muscle, vascular, and glandular responses located in the face and also widely distributed through the body, which generate sensory feedback . . . These organized sets of responses are triggered at subcortical centers where specific programs for each distinct affect are stored," a very early use of a computational metaphor for a human system. But Tomkins acknowledged that the interpretation of affective displays depends on individual, social, and cultural factors. He admitted that there were very different "dialects" of facial language in different societies. Even the forefather of affect research raised the possibility that interpreting facial displays depends on social and cultural context.

Given that facial expressions are culturally variable, using them to train machine-learning systems would inevitably mix together all sorts of different contexts, signals, and expectations. The problem for Ekman, and later for the field of computer vision, was how to reconcile these tensions.

During the mid-1960s, opportunity knocked at Ekman's door in the form of a large grant from what is now called the Defense Advanced Research Projects Agency (DARPA), a research arm of the Department of Defense. DARPA's sizable financial support allowed Ekman to begin his first studies to prove universality in facial expression. In general, these studies followed a design that would be copied in early AI labs. He largely duplicated Tomkins's methods, even using Tomkins's photographs to test subjects from Chile, Argentina, Brazil, the United States, and Japan. Subjects were presented with photographs of posed facial expressions, selected by the designers as exemplifying or expressing a particularly pure affect, such as fear, surprise, anger, happiness, sadness, and disgust. Subjects were then asked to choose among these affect categories and label the posed image. The analysis measured the degree to which the labels chosen by subjects correlated with those chosen by the designers.
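The agreement measure described above reduces to a simple calculation: the fraction of trials in which a subject's chosen label matches the label the designers intended for the posed photograph. The data below are invented purely for illustration:

```python
# Illustrative agreement calculation between designer-intended labels and
# a subject's forced-choice responses. All labels here are invented data.
designer_labels = ["fear", "anger", "happiness", "sadness", "disgust", "surprise"]
subject_labels  = ["fear", "anger", "happiness", "fear",    "disgust", "surprise"]

# Count trials where the subject's choice matches the designer's intent
matches = sum(d == s for d, s in zip(designer_labels, subject_labels))
agreement = matches / len(designer_labels)
print(f"Agreement: {agreement:.0%}")  # Agreement: 83%
```

High agreement scores across cultures were taken as evidence of universality, which is exactly why the forced-choice format mattered: constraining subjects to the designers' own categories inflates the apparent correspondence.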

From the start, the methodology had problems. Ekman's forced-choice response format would later be criticized for alerting subjects to the connections that designers had already made between facial expressions and emotions. Further, the fact that these emotions were faked would raise questions about the validity of the results.

The idea that interior states can be reliably inferred from external signs has a long history. It stems in part from the history of physiognomy, which was premised on studying a person's facial features for indications of his character. Aristotle believed that "it is possible to judge men's character from their physical appearance . . . for it has been assumed that body and soul are affected together." The Greeks also used physiognomy as an early form of racial classification, applied to "the genus man itself, dividing him into races, in so far as they differ in appearance and in character (for instance Egyptians, Thracians, and Scythians)."

Physiognomy in Western culture reached a high point during the 18th and 19th centuries, when it was seen as part of the anatomical sciences. A key figure in this tradition was the Swiss pastor Johann Kaspar Lavater, who wrote Essays on Physiognomy: For the Promotion of Knowledge and the Love of Mankind, originally published in German in 1789. Lavater took the approaches of physiognomy and blended them with the latest scientific knowledge. He believed that bone structure was an underlying connection between physical appearance and character type. If facial expressions were fleeting, skulls seemed to offer a more solid material for physiognomic inferences. Skull measurement was a popular technique in race science, and was used to support nationalism, white supremacy, and xenophobia. This work was infamously elaborated on throughout the 19th century by phrenologists such as Franz Joseph Gall and Johann Gaspar Spurzheim, as well as in scientific criminology through the work of Cesare Lombroso.

But it was the French neurologist Duchenne, described by Ekman as a "marvelously gifted observer," who codified the use of photography and other technical means in the study of human faces. In Mécanisme de la physionomie humaine, Duchenne laid important foundations for both Darwin and Ekman, connecting older ideas from physiognomy and phrenology with more modern investigations into physiology and psychology. He replaced vague assertions about character with a more limited investigation into expression and interior mental and emotional states.

Duchenne worked in Paris at the Salpêtrière asylum, which housed up to 5,000 people with a wide range of mental illnesses and neurological conditions. Some would become his subjects for distressing experiments, part of the long tradition of medical and technological experimentation on the most vulnerable, those who cannot refuse. Duchenne, who was little known in the scientific community, decided to develop techniques of electrical shocks to stimulate isolated muscle movements in people's faces. His aim was to build a more complete anatomical and physiological understanding of the face. Duchenne used these methods to bridge the new psychological science and the much older study of physiognomic signs, or "passions." He relied on the latest photographic advancements, such as collodion processing, which allowed for much shorter exposure times, enabling Duchenne to freeze fleeting muscular movements and facial expressions in images.

Even at these early stages, the faces were never natural or socially occurring human expressions but simulations produced by the brute application of electricity to the muscles. Regardless, Duchenne believed that the use of photography and other technical systems would transform the squishy business of representation into something objective and evidentiary, more suitable for scientific study. Darwin praised Duchenne's "magnificent photographs" and included reproductions in his own work.

Ekman would follow Duchenne in placing photography at the center of his experimental practice. He believed that slow-motion photography was essential to his approach, because many facial expressions operate at the limits of human perception. The aim was to find so-called microexpressions: tiny muscle movements in the face.

One of Ekman's ambitious plans in his early research was to codify a system for detecting and analyzing facial expressions. In 1971, he co-published a description of what he called the Facial Affect Scoring Technique (FAST).

Relying on posed photographs, the approach used six basic emotional types largely derived from Ekman's intuitions. But FAST soon ran into problems when other scientists encountered facial expressions not included in its typology. So Ekman decided to ground his next measurement tool in facial musculature, harkening back to Duchenne's original electroshock studies. Ekman identified roughly 40 distinct muscular contractions on the face and called the basic components of each facial expression an "action unit." After some testing and validation, Ekman and Wallace Friesen published the Facial Action Coding System (FACS) in 1978; updated editions continue to be widely used.
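The core FACS idea, composing each expression from numbered action units, can be sketched as follows. The AU numbers below follow commonly cited FACS examples (e.g., AU6 cheek raiser plus AU12 lip corner puller for a smile), but the emotion prototypes are a simplification for illustration, not the published coding standard:

```python
# Illustrative sketch of the FACS idea: an observed expression is coded as a
# set of action units (AUs), then compared against emotion "prototypes".
# The prototype mapping here is a simplification, not the official standard.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},       # AU6 cheek raiser + AU12 lip corner puller
    "surprise": {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "anger": {4, 5, 7, 23},     # brow lowerer, lid/lip tighteners
}

def match_emotions(observed_aus):
    """Return prototype emotions whose action units are all present."""
    return [e for e, aus in EMOTION_PROTOTYPES.items() if aus <= observed_aus]

print(match_emotions({6, 12, 25}))  # → ['happiness']
```

Human FACS coders do this matching by hand, frame by frame, which is what made the system so labor-intensive and computer vision so eager to automate it.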

Despite its financial success, FACS was very labor-intensive to use. Ekman wrote that it took 75 to 100 hours to train users in the FACS methodology, and an hour to score a single minute of facial footage. This challenge presented exactly the type of opportunity that the emerging field of computer vision was hungry to take on.

As work into the use of computers in affect recognition began to take shape, researchers recognized the need for a collection of standardized images to experiment with. A 1992 National Science Foundation report co-written by Ekman recommended that "a readily accessible, multimedia database shared by the diverse facial research community would be an important resource for the resolution and extension of issues concerning facial understanding." Within a year, the Department of Defense began funding a program to collect facial photographs. By the end of the decade, machine-learning researchers had started to assemble, label, and make public the data sets that drive much of today's machine-learning research. Academic labs and companies worked on parallel projects, creating scores of photo databases. For example, researchers in a lab in Sweden created the Karolinska Directed Emotional Faces database, which comprises images of individuals portraying posed emotional expressions corresponding to Ekman's categories. They've made their faces into the shapes that accord with six basic emotional states: joy, anger, disgust, sadness, surprise, and fear. When looking at these training sets, it is difficult not to be struck by a sense of pantomime: Incredible surprise! Abundant joy! Paralyzing fear! These subjects are literally making machine-readable emotion.

As the field grew in scale and complexity, so did the types of photographs used in affect recognition. Researchers began using the FACS system to label data generated not from posed expressions but rather from spontaneous facial expressions, sometimes gathered outside of laboratory conditions. Ekman's work had a profound and wide-ranging influence. The New York Times described Ekman as "the world's most famous face reader," and Time named him one of the 100 most influential people in the world. He would eventually consult with clients as disparate as the Dalai Lama, the FBI, the CIA, the Secret Service, and the animation studio Pixar, which wanted to create more lifelike renderings of cartoon faces. His ideas became part of popular culture, included in best sellers such as Malcolm Gladwell's Blink and a television drama, Lie to Me, on which Ekman was a consultant for the lead character's role, apparently loosely based on him.

His business prospered: Ekman sold techniques of deception detection to agencies such as the Transportation Security Administration, which used them to develop the Screening of Passengers by Observation Techniques (SPOT) program. SPOT has been used to monitor air travelers' facial expressions since the September 11 attacks, in an attempt to automatically detect terrorists. The system uses a set of 94 criteria, all of which are allegedly signs of stress, fear, or deception. But looking for these responses means that some groups are immediately disadvantaged. Anyone who is stressed, is uncomfortable under questioning, or has had negative experiences with police and border guards can score higher. This creates its own forms of racial profiling. The SPOT program has been criticized by the Government Accountability Office and civil-liberties groups for its racial bias and lack of scientific methodology. Despite its $900 million price tag, there is no evidence that it has produced clear successes.
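A toy sketch shows why additive "behavioral indicator" screening of this kind penalizes stress itself. The indicator names, weights, and threshold below are invented for illustration; they are not the TSA's actual 94 criteria:

```python
# Hypothetical additive screening score: each observed indicator adds points,
# and travelers above a threshold are flagged for secondary screening.
# Indicators and weights are invented, not the real SPOT criteria.
INDICATORS = {
    "avoids_eye_contact": 2,
    "trembling": 2,
    "sweating": 1,
    "changes_answer_under_questioning": 3,
}
THRESHOLD = 4  # flag anyone scoring above this

def risk_score(observed):
    """Sum the weights of all observed indicators."""
    return sum(INDICATORS[i] for i in observed)

# A traveler who is simply anxious around uniformed officers exhibits
# ordinary stress signs and crosses the threshold with no deceptive intent.
anxious_traveler = {"avoids_eye_contact", "trembling", "sweating"}
print(risk_score(anxious_traveler))  # → 5, above the threshold
```

Because the indicators measure stress rather than deception, anyone whose baseline stress is higher, for whatever reason, is systematically more likely to be flagged.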

As Ekman's fame spread, so did the skepticism of his work, with critiques emerging from a number of fields. An early critic was the cultural anthropologist Margaret Mead, who debated Ekman on the question of the universality of emotions in the late 1960s. Mead was unconvinced by Ekman's belief in universal, biological determinants of behavior that exist separately from highly conditioned cultural factors.

Scientists from different fields joined the chorus over the decades. In more recent years, the psychologists James Russell and José-Miguel Fernández-Dols have shown that the most basic aspects of the science remain uncertain. Perhaps the foremost critic of Ekman's theory is the historian of science Ruth Leys, who sees a fundamental circularity in Ekman's method. The posed or simulated photographs he used were assumed to express a set of basic affective states that were, Leys wrote, "already free of cultural influence." These photographs were then used to elicit labels from different populations to demonstrate the universality of facial expressions. The psychologist and neuroscientist Lisa Feldman Barrett puts it bluntly: "Companies can say whatever they want, but the data are clear. They can detect a scowl, but that's not the same thing as detecting anger."

More troubling still is that researchers in the field of the study of emotions have not reached consensus about what an emotion actually is. What emotions are, how they are formulated within us and expressed, what their physiological or neurobiological functions could be, their relation to stimuli: all of this remains stubbornly unsettled. Why, with so many critiques, has the approach of reading emotions from a person's face endured? Since the 1960s, driven by significant Department of Defense funding, multiple systems have been developed that are more and more accurate at measuring facial movements. Ekman's theory seemed ideal for computer vision because it could be automated at scale. The theory fit what the tools could do.

Powerful institutional and corporate investments have been made based on the perceived validity of Ekman's theories and methodologies. Recognizing that emotions are not easily classified, or that they're not reliably detectable from facial expressions, could undermine an expanding industry. Many machine-learning papers cite Ekman as though these issues were resolved before proceeding directly into engineering challenges. The more complex issues of context, conditioning, relationality, and culture are often ignored. Ekman himself has said he is concerned about how his ideas are being commercialized, but when he's written to tech companies asking for evidence that their emotion-recognition programs work, he has received no reply.

Instead of trying to build more systems that group expressions into machine-readable categories, we should question the origins of those categories themselves, as well as their social and political consequences. For example, these systems are known to flag the speech affects of women, particularly Black women, differently from those of men. A study conducted at the University of Maryland has shown that some facial recognition software interprets Black faces as having more negative emotions than white faces, specifically registering them as angrier and more contemptuous, even when controlling for their degree of smiling.

This is the danger of automating emotion recognition. These tools can take us back to the phrenological past, when spurious claims were used to support existing systems of power. The decades of scientific controversy around inferring emotional states consistently from a person's face underscore a central point: One-size-fits-all detection is not the right approach. Emotions are complicated, and they develop and change in relation to our cultures and histories, all the manifold contexts that live outside the AI frame.

But already, job applicants are judged unfairly because their facial expressions or vocal tones don't match those of other employees. Students are flagged at school because their faces appear angry, and customers are questioned because their facial cues indicate they may be shoplifters. These are the people who will bear the costs of systems that are not just technically imperfect, but based on questionable methodologies. A narrow taxonomy of emotions, grown from Ekman's initial experiments, is being coded into machine-learning systems as a proxy for the infinite complexity of emotional experience in the world.

This article is adapted from Kate Crawford's recent book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

See the original post here:
Artificial Intelligence Is Misreading Human Emotion - The Atlantic

Read More..

Administration and Congressional Update on Artificial Intelligence in the U.S. – JD Supra

[co-author: Christina Barone]

On April 9, 2021, the Office of Management and Budget (OMB) submitted President Biden's discretionary funding request (the Request) to Congress for Fiscal Year (FY) 2022. The Request lays out the President's discretionary funding recommendations across a wide range of policy areas, including a strategy for investing in emerging technology areas, maintaining economic competitiveness and national security, and positioning the U.S. to out-compete China. The Request is high-level and did not include proposed legislative text.

The President's Request recommends:

On April 21, 2021, a group of bipartisan lawmakers reintroduced the Endless Frontier Act (H.R.2731 and S.1260) to establish a new Directorate for Technology (the Directorate) at the National Science Foundation (NSF), create a regional technology hub program, and require a strategy and report on economic security, research, innovation, manufacturing and job creation. The bill would authorize $100 billion over five years for the Directorate to strengthen U.S. leadership in critical technology areas through innovation, research, commercialization and education and ensure that the U.S. maintains its competitive edge in technologies of the future.

The legislation identifies ten initial technology domains for the new NSF Directorate to fund research, including AI and machine learning, semiconductors, quantum computing, advanced communications technology, cybersecurity and synthetic biology.

Additionally, the Directorate is authorized to:

The Endless Frontier Act also establishes a novel Supply Chain Resiliency and Crisis Response Program at the Department of Commerce. The new program would monitor supply chain vulnerabilities and provide investments to diversify supply chains for products critical to national security. Lastly, the bill proposes a $2.4 billion investment to enhance and expand the Manufacturing USA network.

On April 21, 2021, Rep. Maxine Waters (D-CA), Chair of the House Financial Services Committee, renewed the Committee's AI Task Force. The Task Force was created during the 116th Congress to ensure the responsible use of emerging and predictive technologies in the financial sector. Rep. Bill Foster (D-IL) will continue leading the Task Force's work to examine whether emerging technologies in the financial services and housing industries serve the needs of consumers, investors, small businesses and the public.

Congress and the Biden-Harris administration continue to take action to ensure the U.S. maintains its global leadership in technologies of the future, including AI. Additional investments and a new approach to accelerate U.S. science and technology developments are beginning to materialize in light of growing concerns that other countries are ready to challenge America's position on the innovation stage. The Akin Gump cross-practice AI team continues to monitor forthcoming congressional, administrative and private-stakeholder initiatives related to AI.

More:
Administration and Congressional Update on Artificial Intelligence in the U.S. - JD Supra

Read More..

Why Physics has Relevancy To Artificial Intelligence And Building AI Leadership Brain Trust? – Forbes

This blog is a continuation of the Building AI Leadership Brain Trust blog series, which targets board directors and CEOs to accelerate their duty of care to develop stronger skills and competencies in AI in order to ensure their AI programs achieve sustained results.

My last two blogs introduced the value of science and stressed its importance to AI. They focused on the importance of AI professionals having some foundation in computing science as a cornerstone for designing and developing AI models and production processes, and on the richness of complexity science, where integrating diverse disciplines into complex AI programs is key to a successful return on investment (ROI).

This blog introduces the importance of physics and explores its relationship to AI, as I often see AI solutioning teams missing physics skills in their solution constructs, which I believe is a strategic mistake for many complex AI programs. It's important for C-levels to understand that AI is not a singular discipline; it requires many other skills to get the solution architecture right. So deeply understand the business problem in front of you: the more complex the problem, the more value physicists will have in guiding you forward.


In the Brain Trust Series, I have identified over 50 skills required to help evolve talent in organizations committed to advancing AI literacy. The last few blogs have been discussing the relevance of the technical skills. To see the full AI Brain Trust Framework introduced in the first blog, reference here.

We are currently focused on the technical skills in the AI Brain Trust Framework.

Technical Skills:

1. Research Methods Literacy

2. Agile Methods Literacy

3. User-Centered Design Literacy

4. Data Analytics Literacy

5. Digital Literacy (Cloud, SaaS, Computers, etc.)

6. Mathematics Literacy

7. Statistics Literacy

8. Sciences (Computing Science, Complexity Science, Physics) Literacy

9. Artificial Intelligence (AI) and Machine Learning (ML) Literacy

10. Sustainability Literacy

What is the relevance of physics to AI as a discipline?

There are so many aspects of physics that can be applied to AI; hence, it does not take one long to appreciate the value of this science discipline. One of the most significant discoveries in physics was the Higgs boson, often referred to as the "God particle," which was discovered with the help of AI neural networks used to identify complex patterns in particle collisions.

The last blog stressed the importance of complexity science; the most relevant aspect of physics here is that the discipline teaches you how to understand and decompose complex processes.

In prior blogs, I stressed that building an AI model requires three main enablements: 1) collecting and analyzing the data, 2) developing the AI model, and 3) evaluating the model outcomes and determining value. Each of these areas has relevance to physics, and a strong AI expert will appreciate the value that physics know-how can bring in enabling engineering teams to tackle the most complex problems in the world.

Let's start with data analysis. There are many forms of machine learning, but the approach with the closest linkage to physics is neural networks, which are trained to identify complex patterns as well as find new ones. An example of applying AI to a physics problem would be classifying thousands of telescope images to identify black holes; being able to detect subtle changes in light around objects is the two disciplines coming together.

Physics professionals also use terms like "gravitational lensing" for image analysis that uses neural networks to tease out classifications at finer levels of detail, while AI specialists simply say "image processing." A perennial challenge across diverse disciplines is jargon, which often confuses business leaders who cannot decipher the terminology.

In addition, many acclaimed physicists claim to be major contributors to advancing the AI field, so rivalry friction, pardon the pun, exists between these disciplines as well.

Neural networks are particularly good at enabling AI models to detect changes in radio waves or even gravitational waves, and at determining when specific cosmic rays may hit the earth's atmosphere, providing timing insights as well.

Being able to encode different particle behaviors and observe their subtle changes over time provides a rich bed for AI modelling and interpretability, giving physicists deeper mathematical insight so they can encode their observations more accurately.

Other physics terms that appear in neural-network research include compressibility and conductivity. What is even more exciting in bringing these two disciplines together is the area of quantum tomography, which amounts to measuring changes in a quantum state and has innovation relevance to quantum computing. Tomography is an exciting field that analyzes images by sections, or sectioning, through the use of any kind of penetrating wave. The method is used in diverse areas including radiology, atmospheric sciences, geophysics, oceanography, plasma physics, astrophysics, quantum information, and other sciences. Its applications are endless and very exciting.

Machine learning methods help advance physics, just as physics has value and relevance to machine learning. The high computational power of machine learning is allowing physicists to tackle ever more complex problems, like simulating global climatic change by leveraging geometric surfaces and applying deep learning to curved surfaces.

Michael Bronstein, an Imperial College computer scientist, and his fellow researchers helped advance geometric deep learning methods and determined that going beyond the Euclidean plane would require them to reimagine one of the basic computational procedures that made neural networks so effective at 2D image recognition in the first place. This procedure lets a layer of the neural network perform a mathematical operation on small patches of the input data and then pass the results to the next layer in the network.

Without going into too many details, these researchers re-imagined the approach so that a 3D shape bent into two different poses, like a bear standing up or a bear sitting down, is recognized as two instances of the same object rather than as two distinct objects.

It is from this convolution procedure that convolutional neural networks (CNNs) take their name. This type of network specializes in processing data with a grid-like topology, such as an image. Each neuron works within its own receptive field and is connected to other neurons in a way that together they cover the entire visual field; after analyzing thousands of images of cats or dogs, classifying them is not a difficult problem, as such data sets are easy to access.

These newer, equivariant CNN variants can detect rotated or reflected features in flat images without having to train on specific examples of the features, and spherical CNNs can create feature maps from data on the surface of a sphere without distorting them as flat projections. The applications are endless and very exciting to physicists, for whom detecting features on object surfaces is key in their research methods.
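The patch-wise operation at the heart of a CNN layer, convolution, can be sketched in plain Python. This is a minimal illustration of the "mathematical operation on small patches" described above; real systems use optimized tensor libraries:

```python
# Minimal 2D "valid" convolution: slide a small kernel over an image and
# compute a weighted sum of each patch. This is the grid operation a CNN
# layer performs (without the learned weights, nonlinearity, or channels).
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge kernel responds wherever intensity changes left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1]]
print(convolve2d(image, edge_kernel))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Because the same kernel is reused at every position, the response simply shifts when the feature shifts; extending that shift-invariance to rotations, reflections, and curved surfaces is precisely what the geometric deep learning work described above generalizes.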

Unlike classifying cats and dogs, finding cancerous tumors in diverse lung images requires medically accurate, quality-validated labels, which are a much more difficult challenge to obtain.

In a joint government and academic research project, researchers used a newer gauge CNN method to detect cyclones in global climate data, achieving close to 98% accuracy. A gauge CNN would theoretically work on any curved surface of any dimensionality. The implications for climate monitoring using physics and AI techniques are unprecedented with these advancements.

Summary

In summary, physics and machine learning have real similarities. Both disciplines focus on making accurate observations, and both build models to predict future observations. One term physicists often use is covariance, which means that the laws of physics should be independent of the coordinate system used or the observers involved; in other words, it stresses frame-independent thinking.

Einstein stated this best in 1916 when he said: "The general laws of nature are to be expressed by equations which hold good for all systems of coordinates."
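Einstein's remark can be written compactly: a law expressed as a tensor equation keeps its form under any change of coordinates. As a standard illustration (not from the blog itself), if a law reads T^{mu nu} = 0 in coordinates x, the tensor transformation rule guarantees it also vanishes in any new coordinates x':

```latex
% General covariance: a tensor equation holds in all coordinate systems.
T'^{\mu\nu}(x') \;=\;
  \frac{\partial x'^{\mu}}{\partial x^{\alpha}}
  \frac{\partial x'^{\nu}}{\partial x^{\beta}}\,
  T^{\alpha\beta}(x) \;=\; 0
```

This is the formal sense in which the "same law for all observers" idea parallels the equivariance sought in geometric deep learning.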


What key questions can board directors and CEOs ask to evaluate the depth of their organization's physics expertise and its linkage to artificial intelligence?

1.) How many of your staff hold an undergraduate degree in physics versus a master's or doctoral degree?

2.) Of these staff trained in physics disciplines, how many also have a specialization in artificial intelligence?

3.) How many of your most significant AI projects have physics expertise on the team to ensure increased interdisciplinary know-how?

4.) How many of the board directors or C-suite have expertise in physics blended with knowledge of AI to tackle the world's most complex business problems?

These are some starting questions above to help guide leaders to understand their talent mix in appreciating the value of diverse science disciplines to augment the AI solution delivery teams in enterprises.

I believe that board directors and CEOs need to understand their talent depth in science disciplines, in addition to AI disciplines, to ensure that their complex AI programs are optimized for success. The last three blogs, including this one, looked at three disciplines: 1) computing science, 2) complexity science, and 3) physics, all written to reinforce that science disciplines are key to ensuring AI investments are successful and that continued investments are made to help them evolve and deliver value, supporting humans in augmenting their decision making or improving their operating processes.

The next blog in this AI Brain Trust series will discuss a general foundation of key AI terms and capabilities, providing more of the knowledge the C-suite needs to get AI right and achieve more sustained success.

More Information:

To see the full AI Brain Trust Framework introduced in the first blog, reference here.

To learn more about artificial intelligence and its challenges, both positive and negative, refer to my new book, The AI Dilemma, which guides leaders forward.

Note:

If you have any ideas, please do advise as I welcome your thoughts and perspectives.

The rest is here:
Why Physics has Relevancy To Artificial Intelligence And Building AI Leadership Brain Trust? - Forbes

Read More..

Goldman Sachs is Betting on Artificial Intelligence to Drive Growth – Analytics Insight

Artificial Intelligence (AI) has taken a major role as the main driver of the upcoming hi-tech future. It is shifting the information age into a completely new digital domain where upgraded machines help make critical decisions and assist diverse sectors of a country. The banking and financial sector is keen on investing in AI to protect its customers and stay ahead of competitors in this market. Against this backdrop, Goldman Sachs, a leading American multinational investment bank, is betting on AI to drive growth. It has a $72.5 million fund dedicated exclusively to investment in AI algorithms and data analytics.

There has been a sudden surge in cyber-threats: phishing, spamming, hacking and other unethical behaviour from the dark web, driven by the digital transformation of banking through websites and mobile apps. Goldman Sachs deals with investment management, securities, asset management, prime brokerage and securities underwriting, which means the bank needs the best possible security against online threats. Here AI can assist Goldman Sachs in the best possible way.

1. The constant fear of cyber-attacks is reduced with the help of AI-based filtering. Fraudulent applications with unethical motives are always a possibility; AI cyber-security detects and blocks them while collecting real-time data from users at large scale.

2. An AI-enabled investment trust has already been a successful test for Goldman Sachs, surfacing the best investment options for customers. It has improved customer engagement by running through various kinds of analyst and news reports. The trust uses natural language processing (NLP) to assist asset managers in identifying undervalued shares and probable profit opportunities.
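The NLP idea described here can be illustrated with a toy sentiment score over analyst-report text. The lexicon, tickers, and snippets below are invented; production systems use trained language models, not word lists:

```python
# Toy sketch: score invented analyst-report snippets with a tiny sentiment
# lexicon and rank tickers from most to least positive coverage. This is a
# hypothetical illustration, not Goldman Sachs's actual NLP system.
POSITIVE = {"undervalued", "growth", "beat", "upgrade"}
NEGATIVE = {"downgrade", "miss", "risk", "overvalued"}

def sentiment(text):
    """Count positive words minus negative words in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reports = {
    "ACME": "analysts see growth and call the stock undervalued",
    "GLOBEX": "earnings miss prompts downgrade amid risk concerns",
}
ranked = sorted(reports, key=lambda t: sentiment(reports[t]), reverse=True)
print(ranked)  # → ['ACME', 'GLOBEX']
```

A real pipeline would replace the word lists with a model trained on financial text and combine the scores with valuation data before flagging a share as potentially undervalued.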

3. A partnership with the AI-based startup H2O.ai helps the bank focus on machine-learning transparency and model interpretability for better prediction. It assists decision-making for the finance department and the equity trading floor, in areas such as market making and providing liquidity to the bank.

4. There is also a clear opportunity to grow hyper-personalised banking through conversational AI, which enables 24/7 two-way communication with personalised responses and feedback to users.

Goldman Sachs prefers to focus on net profit over the investment cost of AI. The bank is eager to maintain its brand image through efficient customer experience, upgraded security and service enhancement.

Original post:
Goldman Sachs is Betting on Artificial Intelligence to Drive Growth - Analytics Insight

Read More..

Artificial intelligence in education today: The art of the possible – FE News

Further Education News

The FE News Channel gives you the latest education news and updates on emerging education strategies and the#FutureofEducation and the #FutureofWork.

Providing trustworthy and positive Further Education news and views since 2003, we are a digital news channel with a mixture of written word articles, podcasts and videos. Our specialisation is providing you with a mixture of the latest education news, our stance is always positive, sector building and sharing different perspectives and views from thought leaders, to provide you with a think tank of new ideas and solutions to bring the education sector together and come up with new innovative solutions and ideas.

FE News publish exclusive peer to peer thought leadership articles from our feature writers, as well as user generated content across our network of over 3000 Newsrooms, offering multiple sources of the latest education news across the Education and Employability sectors.

FE News also broadcast live events, podcasts with leading experts and thought leaders, webinars, video interviews and Further Education news bulletins so you receive the latest developments inSkills Newsand across the Apprenticeship, Further Education and Employability sectors.

Every week FE News has over 200 articles and new pieces of content per week. We are a news channel providing the latest Further Education News, giving insight from multiple sources on the latest education policy developments, latest strategies, through to our thought leaders who provide blue sky thinking strategy, best practice and innovation to help look into the future developments for education and the future of work.

In May 2020, FE News had over 120,000 unique visitors according to Google Analytics and over 200 new pieces of news content every week, from thought leadership articles, to the latest education news via written word, podcasts, video to press releases from across the sector.

We thought it would be helpful to explain how we tier our latest education news content and how you can get involved and understand how you can read the latest daily Further Education news and how we structure our FE Week of content:

Our main features are exclusive and are thought leadership articles and blue sky thinking with experts writing peer to peer news articles about the future of education and the future of work. The focus is solution led thought leadership, sharing best practice, innovation and emerging strategy. These are often articles about the future of education and the future of work, they often then create future education news articles. We limit our main features to a maximum of 20 per week, as they are often about new concepts and new thought processes. Our main features are also exclusive articles responding to the latest education news, maybe an insight from an expert into a policy announcement or response to an education think tank report or a white paper.

FE Voices was originally set up as a section on FE News to give a voice back to the sector. Now that we have over 3,000 newsrooms and contributors, FE Voices pieces are usually thought leadership articles; they don't have to be exclusive, though they usually are, and they are slightly shorter than main features. FE Voices can include more mixed media alongside the Further Education news articles, such as embedded podcasts and videos. Our sector response articles, gathering comments and opinions on education policy announcements or on a report or white paper, are usually held in the FE Voices section. If we run a live podcast in the evening or a radio show such as the SkillsWorldLive radio show, we place the recording in the FE Voices section the next morning.

In Sector News we have a blend of content: press releases, education resources, reports, education research and white papers from a range of contributors. This includes positive education news from colleges, awarding organisations and apprenticeship training providers, press releases from the DfE, think tank report overviews, and helpful resources for delivering education strategies to your learners and students.

We have a range of education podcasts on FE News, from hour-long full-production podcasts such as SkillsWorldLive, made in conjunction with the Federation of Awarding Bodies, to weekly podcasts from experts and thought leaders providing advice and guidance to leaders. FE News also records podcasts at conferences and events, giving you one-on-one conversations with education and skills experts on the latest strategies and developments.

We have over 150 education podcasts on FE News, ranging from EdTech podcasts with experts discussing Education 4.0 and how technology is complementing and transforming education, to podcasts on education research, the future of work and how to develop skills systems for the jobs of the future, through to interviews with the Apprenticeship and Skills Minister.

We record our own exclusive FE News podcasts, work with sector partners such as FAB to create weekly and daily education podcasts, and collaborate with sector leaders on exclusive education news podcasts.

FE News has over 700 video interviews and has been recording education video interviews with experts for over 12 years. These are usually vox-pop interviews with experts across education and work, discussing blue-sky ideas and views on the future of education and work.

FE News has a free events calendar to check out the latest conferences, webinars and events to keep up to date with the latest education news and strategies.

The FE Newsroom is home to your content if you are an FE News contributor. It also helps the audience develop a relationship with you or your organisation, as they can click through and box-set consume all of your previous thought leadership articles, education news press releases, videos and podcasts.

Do you want to contribute, share your ideas or vision or share a press release?

If you want to write a thought leadership article, share your ideas and vision for the future of education or the future of work, publish a press release, or contribute to a podcast, you first need to set up a free FE Newsroom login. Once the team have approved your newsroom (all newsrooms are approved by a member of the FE News team; no robots are used in this process!), you can start adding content. All articles, videos and podcasts are likewise approved by the FE News editorial team before they go live, so there will be a slight delay while the team reviews and approves your content.


Read this article:
Artificial intelligence in education today: The art of the possible - FE News


Forbes Recognizes Lilt As One of the Top Artificial Intelligence Companies For Third Straight Year – PRNewswire

SAN FRANCISCO, April 26, 2021 /PRNewswire/ -- Lilt, the modern language service and technology provider, today announced that it has been named to the 2021 Forbes AI 50 for the third consecutive year. The Forbes AI 50 recognizes the most promising privately held companies using artificial intelligence to build business applications and services to transform industries. Lilt is one of only seven companies that have been included every year since the list's inception in 2019.

"Our artificial intelligence and machine learning technologies enable our customers to provide exceptional global experiences to their customers around the world," said Lilt CEO Spence Green. "We're proud to be recognized by Forbes for the third year in a row alongside other leading companies developing AI-powered solutions."

Lilt's translation services are powered by the Lilt Platform, the world's most advanced translation technology, which uses AI and automation to make every step of the localization process faster, more accurate, and simpler. Lilt's community of over 60,000 skilled human translators uses its AI-powered translation technology to translate content quickly, efficiently, and at higher quality than ever before. With Lilt, companies go to market faster, grow global revenues, and provide a personalized global experience to their customers in their language of choice.

Forbes partnered with Sequoia Capital and Meritech Capital to evaluate hundreds of promising, privately-held North American companies that are using AI in ways that are fundamental to their operations. The list, which nearly 400 companies qualified for, focused on companies utilizing machine learning, natural language processing, or computer vision technologies. Of the qualifying companies, 100 were selected based on their qualitative score created by Forbes' data partners, followed by evaluation by a panel of expert AI judges to narrow the list down to 50.

Along with the Forbes AI 50 list, Lilt was recently named to the CB Insights AI 100 list, showcasing the 100 most promising private artificial intelligence companies in the world, and was included in Gartner's recent Market Guide for AI-Enabled Translation Services.

About Lilt

Headquartered in San Francisco, Lilt is the modern language service and technology provider enabling localized customer experiences. Lilt's mission is to make the world's information accessible to everyone regardless of where they were born or which language they speak. Lilt brings human-powered, technology-assisted translations to global enterprises, empowering product, marketing, support, e-commerce, and localization teams to deliver exceptional customer experiences to global audiences. Lilt gives industry-leading organizations like Intel, ASICS, WalkMe, DigitalOcean, and Canva everything they need to scale their localization programs and go to market faster. Lilt has additional global offices in Dublin, Berlin, Washington, D.C. and Indianapolis. Visit us online at http://www.lilt.com or contact us at [emailprotected].

SOURCE Lilt

https://www.lilt.com

View post:
Forbes Recognizes Lilt As One of the Top Artificial Intelligence Companies For Third Straight Year - PRNewswire


Administration And Congressional Update On Artificial Intelligence In The US – Technology – United States – Mondaq News Alerts

29 April 2021

Akin Gump Strauss Hauer & Feld LLP


On April 9, 2021, the Office of Management and Budget (OMB) submitted President Biden's discretionary funding request (the "Request") to Congress for Fiscal Year (FY) 2022. The Request lays out the President's discretionary funding recommendations across a wide range of policy areas, including a strategy for investing in emerging technology areas, maintaining economic competitiveness and national security, and positioning the U.S. to out-compete China. The Request is high-level and did not include proposed legislative text.

The President's Request recommends:

On April 21, 2021, a group of bipartisan lawmakers reintroduced the Endless Frontier Act (H.R. 2731 and S. 1260) to establish a new Directorate for Technology (the "Directorate") at the NSF and a regional technology hub program, and to require a strategy and report on economic security, research, innovation, manufacturing and job creation. The bill would authorize $100 billion over five years for the Directorate to strengthen U.S. leadership in critical technology areas through innovation, research, commercialization and education, and to ensure that the U.S. maintains its competitive edge in technologies of the future.

The legislation identifies ten initial technology domains for the new NSF Directorate to fund research, including AI and machine learning, semiconductors, quantum computing, advanced communications technology, cybersecurity and synthetic biology.

Additionally, the Directorate is authorized to:

The Endless Frontier Act also establishes a novel Supply Chain Resiliency and Crisis Response Program at the Department of Commerce. The new program would monitor supply chain vulnerabilities and provide investments to diversify supply chains for products critical to national security. Lastly, the bill proposes a $2.4 billion investment to enhance and expand the Manufacturing USA network.

On April 21, 2021, Rep. Maxine Waters (D-CA), Chair of the House Financial Services Committee, renewed the Committee's AI Task Force. The Task Force was created during the 116th Congress to ensure the responsible use of emerging and predictive technologies in the financial sector. Rep. Bill Foster (D-IL) will continue leading the Task Force's work to examine whether emerging technologies in the financial services and housing industries serve the needs of consumers, investors, small businesses and the public.

Congress and the Biden-Harris administration continue to take action to ensure the U.S. maintains its global leadership in technologies of the future, including AI. Additional investments and a new approach to accelerate U.S. science and technology development are beginning to materialize in light of growing concerns that other countries are ready to challenge America's position on the innovation stage. The Akin Gump cross-practice AI team continues to monitor forthcoming congressional, administrative and private-stakeholder initiatives related to AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Visit link:
Administration And Congressional Update On Artificial Intelligence In The US - Technology - United States - Mondaq News Alerts
