
If there’s one thing I DON’T disagree with Jordan Peterson about it’s that today’s Left must not be let anywhere near a revolution – RT

By Ashley Frawley, Senior Lecturer in Sociology and Social Policy at Swansea University and the author of Semiotics of Happiness: Rhetorical Beginnings of a Public Problem. Follow her on Twitter @Ashleyafrawley

I don't concur with the academic and bestselling author on many issues. But I share his horror at the idea of the contemporary Left seizing power; they've moved too far from the progressive values that fired old radicals.

In a recent interview, Jordan Peterson waxed lyrical on topics from the value of art to his usual spiel about the absurdity of Marxism and the dangers of communism. Yet, for someone who's made a name for himself criticising Marxism, he seems to know very little about the work of the actual Karl Marx. However, one thing jumped out at me in his rejection of the call for violent revolution. Here, I couldn't help but agree. What passes for the Left today needs to be kept as far as possible from revolution.

The truth is, today's Left has travelled quite far from its progressive roots. Past radical movements were guided by a fundamental belief in the rationality, and thus the capacity for freedom, held by all human beings. Indeed, a belief in human reason, and in our ability to make history using that reason, animated the work of Marx.

Contrast this with the widely held progressive prejudice today that what defines us as human beings is our irrationality and vulnerability. Many so-called progressives are highly suspicious of democracy and of the ability of individuals to govern themselves. Instead, what animates them is a desire to protect the vulnerable. From this perspective, we do not need to be freed. We need to be protected, most of all from each other.

Another sharp contrast is today's condemnation of capitalist selfishness. Indeed, many a socialist has wasted their breath arguing that people can be made to overcome their selfish individualism to live harmoniously in the fabled future utopia. But it is precisely this individualism that past socialists sought to liberate. Consider, for example, Oscar Wilde's opening lines in his famous essay, The Soul of Man Under Socialism, where the chief advantage of socialism would be to relieve us from "that sordid necessity of living for others." By unleashing the wealth and abundance made possible by capitalism, we would at last be free to live for ourselves instead of working 9-to-5 for the boss.

But who believes in the liberatory force of wealth today? So many progressives are driven by a deep suspicion that the good things in society are coming too fast and too easily. We have even seen zero growth or degrowth put forward as desirable goals; in the past we simply called these recessions and depressions. For Marx, on the other hand, the existence of a world of wealth and culture was a progressive force: it makes clear what is being kept from us.

Today's Left balks at so much consumerism and pities us for wanting the possessions of the upper crust. They've got higher values for which we are supposed to live. Revolution seems less about liberating the production of wealth and more about bettering our souls in the ways our betters see fit. No, thank you.

Perhaps most chilling, however, is not simply the creepy misanthropy, the sneaking suspicion that maybe feudalism wasn't that bad, but the tendency to outright dehumanise opponents. As Hannah Arendt warned us, dehumanisation is one of the first steps toward totalitarianism. Yet online debates are rife with this tendency to excuse horrible behaviour because opponents are really less than human anyway. This leaks out into public life when we cancel people's livelihoods for saying the wrong thing. Those said to transgress certain values are no longer seen as deserving of basic rights once thought due to all humans, for instance freedom of speech or not being punished without due process.

Revolutions of the past did not seek to destroy the rights won by previous liberal revolutions, but to expand them and ensure that they were truly realised. This was born of a belief in the fundamental equality of all human beings, which is slowly slipping away.

Finally, the Left that was inspired by Marx was driven not by a desire to distinguish themselves from the masses, but to empower the masses. They believed revolution was the progressive result of mass society realising that it could govern itself. Thus, communist revolution would be a mass movement of working people's self-emancipation. It was supposed to be the continuation and culmination of mass democracy.

By contrast, many of today's self-proclaimed leftists style themselves as counter to mass society, mass consumption and mass prejudice. It has become a movement of outsiders seeking to transform the mass to meet what they deem to be right and progressive views. They work from the outside and seek to transform individual hearts and minds. But it's supposed to be the other way around: the mass of people transforming society from the inside out.

I don't agree with Peterson on much. But I do share his horror at the thought of the contemporary Left living out a violent revolution. They've travelled too far from the values and goals that animated radicals of the past. As they exist today, they don't need to take control. They need to be defeated.


The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.



Luke Skywalker Created Snoke According to Lucasfilm and Marvel – Pirates and Princesses

We've told you before about the schism that has existed between Jon Favreau's vision for Star Wars and the Kathleen Kennedy crew. Nowhere has that become more apparent than in the latest Star Wars comic, a joint venture between Lucasfilm and Marvel. To say they're both malicious towards the character of Luke Skywalker, and fully incompetent in their understanding of the Star Wars narrative, is a vast understatement. Let's get straight to the headline (spoilers ahead):

According to Darth Vader #11, which is considered Star Wars canon, the Emperor appears to have created Snoke, and probably much more, using something called "The Scalpel of Creation." Ignore for a moment just how stupid that name is. "What is the Scalpel of Creation?" you ask, while attempting not to laugh. Well, it's none other than a severed, pale-skinned hand. Whose severed, pale-skinned hand might that be? Given that Anakin's severed limbs were severely burned and damaged, there's just one severed hand in Star Wars lore that matches up: Luke Skywalker's.

So what author Greg Pak is almost certainly presenting as official Star Wars canon is that Luke Skywalker's severed hand created Snoke. Talk about continuing to destroy the hero of Luke Skywalker!

So there you go: Disney just keeps slaughtering the story of Luke Skywalker. Not only did he cause his own defeat by facing Darth Vader before he was ready in The Empire Strikes Back, but in doing so he gave the Emperor (clones?) his magic hand that would reverse everything he accomplishes in redeeming Darth Vader and defeating Palpatine.

Take that, George Lucas.

So that you can understand what's going on here, let's review the original mythos for Star Wars that made it the most popular franchise in the western world prior to George Lucas selling the property.

Here's how the George Lucas story structure went:

A messianic figure gives in to understandable darkness due to his pain and becomes the puppet of pure evil, bringing grave damage and oppression to civilization. His son, unaware of his father's plight, grows up idealizing what his father once was without knowing of his downfall. The innocent son goes on an adventure with his friends, as well as his sister, attempting to right the wrongs of their world. In doing so, the son becomes aware of his father's fallen state and thus his own potential for great evil. In spite of this, the son attempts to redeem the father of his sins, believing there is yet good in him, and is willing to sacrifice his own life to save even a glimmer of the ideal that might be inside his ancestry. In voluntarily making this sacrifice, overcoming the shadow that consumed his father, and refusing to take on evil despite every reason to do so, the son redeems the father, defeats evil, and restores balance to his world.


That's the Star Wars story before Disney. That's why it's more than space wizards and laser swords.

Here's the Disney Star Wars story:

An evil being, who may be one of many clones, uses a messianic figure's downfall to conquer the world and essentially enslave the universe. After doing so, the evil being has sex with someone, resulting in a female child who is abandoned on a desert planet. Meanwhile, the downfallen messiah figure also had children abandoned on a desert planet. However, the son of the downfallen messiah figure decides to attack his father too soon, and as a result he loses his hand in battle. The evil being takes that magic hand and uses it to create more utter evil out of nothing (creation ex nihilo for you theology people). The son defeats the father, and the evil being, except that he doesn't, because it's just a trick. The son then goes into exile after a while, and everything he believed in is torn down. The evil being's own progeny much later comes onto the scene, finds the now-hermit son to be pathetic, and essentially defeats her own father all on her own. The end.

And let's get real, the Disney story is actually more convoluted and much worse than that, but it would take many paragraphs to cover just how narratively flawed what they've created has become.

But, hey, the good news is that according to the Lucasfilm Story Group, canon doesn't matter anyway, right? Right?

Honestly, this is just becoming predictable. Marvel just finished up creating a controversy by comparing Jordan Peterson to a Hitler-esque supervillain, so they waited a bit before making some new, ridiculously stupid decision. Maybe next month they'll have Han Solo turn out to be in a relationship with Geode, the sexual rock creature they came up with earlier this year. They're really good at writing stuff, everyone.

They'd like you to believe that people who hate on Star Wars are of a particular political agenda, that the critics are really bad people, and that they're anti-woman. None of that is remotely true. Star Wars is a laughing stock because the people running it are malicious and fully incompetent to a degree that is hard to comprehend.

At this point, Lucasfilm has become a parody of itself, and Marvel Comics has been there for a while (note they're separate from the MCU, which is somewhat protected from this). Almost anything Lucasfilm produces, outside of Jon Favreau, should be treated as fodder for mocking and lower than fan-fiction. I hate to say it, but that's the reputation they've earned.

Pirates & Princesses (PNP) is an independent, opinionated fan-powered news blog that covers Disney and Universal Theme Parks, Themed Entertainment and related Pop Culture from a consumer's point of view. Opinions expressed by our contributors do not necessarily reflect the views of PNP, its editors, affiliates, sponsors or advertisers. PNP is an unofficial news source and has no connection to The Walt Disney Company, NBCUniversal or any other company that we may cover.



AI in Healthcare Market Insights, Deep Analysis of Key Vendor in the Industry 2021-2030 | Nuance Communications, Inc., DeepMind Technologies Limited,…

insightSLICE's latest study, titled "AI in Healthcare Market," puts light on the different segments of the AI in Healthcare market. The report was designed to guide readers through the period under investigation and highlights the market's growth rate.

The report was formed after a qualitative and quantitative examination of the AI in Healthcare market. Factors such as market penetration, product portfolios, end-user industries and pricing structure were also added in the form of graphs, charts and tables to give a visual representation of the market numbers. Details about the main players of the market are also included to give insights into their ways of functioning and strategies.

The latest market study helped in the examination of all the segments associated with the AI in Healthcare market. The report serves clients with details about the key regions and a competitive analysis. The central component of the AI in Healthcare report gives a detailed explanation of the profits, individual growth rates, manufacturing costs and financial standing of the established players of the market. Clients also get to assess the strategies adopted by the new entrants of the AI in Healthcare market.

Request a Sample Copy @ https://www.insightslice.com/request-sample/489

Competitive Dashboard: Nuance Communications, Inc., DeepMind Technologies Limited, IBM Corporation, Intel Corporation, Microsoft and NVIDIA Corporation.

The report has segments that demonstrate the emerging opportunities, capital insights and product benchmarking needed to evaluate the market. Our specialists have added significant recommendations in the last segment of the report that can be utilized to make the most of the emerging industry opportunities.

Another part of the report gives data on the key market participants, industry facts, significant figures and market shares. Our accomplished group of experts has formulated business strategies made to suit various regions and requirements.

Developments related to the market are likewise covered in the AI in Healthcare market report. A complete investigation of the market in terms of demographics, size, segments and share gives a short review of the market players. It also gives information about the approaches used by the big players of the market to deal with unanticipated conditions.

Get Best Discount @ https://www.insightslice.com/request-discount/489

AI in Healthcare Market Segmentation Snapshot:

Geographical Segmentation: North America, Europe, Asia-Pacific, South America, Middle East & Africa, South East Asia

The report segments the global AI in Healthcare market based on application, type, service, technology, and region. A magnified look at the segment-based analysis is aimed at giving the readers a closer look at the opportunities and threats in the market. It also addresses political scenarios that are expected to impact the market in both small and big ways. The report on the global AI in Healthcare Market examines changing regulatory scenarios to make accurate projections about potential investments. It also evaluates the risk for new entrants and the intensity of the competitive rivalry.

What questions does the AI in Healthcare Market report answer about the regional reach of the industry?

Request For Customization @ https://www.insightslice.com/request-customization/489

About Us:

We are a team of research analysts and management consultants with a common vision to assist individuals and organizations in achieving their short- and long-term strategic goals by extending quality research services. insightSLICE was founded to support established companies, start-ups and non-profit organizations across various industries including Packaging, Automotive, Healthcare, Chemicals & Materials, Industrial Automation, Consumer Goods, Electronics & Semiconductor, IT & Telecom and Energy, among others. Our in-house team of seasoned analysts holds considerable experience in the research industry.

Contact Us:

422 Larkfield Ctr #1001, Santa Rosa, CA 95403-1408
info@insightslice.com
+1 (707) 736-6633



Matisse & Sadko unveil the mind bending progressive tune ‘Heal Me’ – We Rave You

Superstar Russian duo Matisse & Sadko has been a crucial part of the evolution of modern progressive house as a genre over the past few years. They have inspired a totally new generation of producers through their exotic releases, euphoric live sets, and uplifting collaborations with industry heavyweights like Martin Garrix, Steve Angello, Arty, and Dimitri Vangelis & Wyman. After recently dropping their intro ID from Tomorrowland New Year's Eve (played out by Martin Garrix), the progressive sorcerers have dropped yet another hypnotic masterpiece called 'Heal Me', which is out now via STMPD RCRDS.

Starting off with some soft hypnotic vocals from Alex Aris, 'Heal Me' will keep you on the edge of your seat guessing what's next. When the soothing dust created by the vocals finally clears out, the build-up and eventually the drop come in, and it all seems like a dream, or even a sedative state that serves as a nice little escape from reality into the mysterious lands of deep progressive house. The overall structure of the track also reminds us of 'Mistaken', their progressive gem from three years ago that was also presented as an intro ID by Martin Garrix at Ultra 2019.

Just before the release, Matisse & Sadko revealed their plans for the rest of the year, and it looks like there will be a lot for progressive house lovers to cherish in the upcoming months. Apart from in-progress collaborations with Martin Garrix & DubVision, the duo has already finished working on another release for June, which makes them one of the top acts to keep an eye on in the next few weeks.

In the meantime, don't forget to check out 'Heal Me' below.

Image Credit: Matisse & Sadko (via Facebook)



Ethics of AI: Benefits and risks of artificial intelligence – ZDNet

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems.

Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic, and compromised.

Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers.

But what are these computer systems? As Marcel would have urged, one must ask where they come from, whether they embody the very problems they would purport to solve.

Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require, and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art; to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens.

Somewhere in the questioning is a sliver of hope that with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers to slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated, the key question is, "what could AI do to bring about a better society?"

Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion.

Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December. Gebru made clear that she was fired by Google, a claim Mitchell backs up in her letter. Jeff Dean, head of AI at Google, wrote in an internal email to staff that the company accepted the resignation of Gebru. Gebru's former colleagues offer a neologism for the matter: Gebru was "resignated" by Google.

Margaret Mitchell [right] was fired on the heels of the removal of Timnit Gebru.

I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired 🙂

Timnit Gebru (@timnitGebru) December 3, 2020

Mitchell, who expressed outrage at how Gebru was treated by Google, was fired in February.

The departure of the top two ethics researchers at Google cast a pall over Google's corporate ethics, to say nothing of its AI scruples.

As reported by Wired's Tom Simonite last month, two academics invited to participate in a Google conference on safety in robotics in March withdrew from the conference in protest of the treatment of Gebru and Mitchell. A third academic said that his lab, which has received funding from Google, would no longer apply for money from Google, also in support of the two professors.

Google staff quit in February in protest of Gebru and Mitchell's treatment, CNN's Rachel Metz reported. And Samy Bengio, a prominent scholar on Google's AI team who helped to recruit Gebru, resigned this month in protest over Gebru and Mitchell's treatment, Reuters has reported.

A petition on Medium signed by 2,695 Google staff members and 4,302 outside parties expresses support for Gebru and calls on the company to "strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google's AI Principles."

Gebru's situation is an example of how technology is not neutral, as the circumstances of its creation are not neutral, as MIT scholars Katlyn Turner, Danielle Wood, and Catherine D'Ignazio discussed in an essay in January.

"Black women have been producing leading scholarship that challenges the dominant narratives of the AI and Tech industry: namely that technology is ahistorical, 'evolved', 'neutral' and 'rational' beyond the human quibbles of issues like gender, class, and race," the authors write.

During an online discussion of AI in December, AI Debate 2, Celeste Kidd, a professor at UC Berkeley, reflecting on what had happened to Gebru, remarked, "Right now is a terrifying time in AI."

"What Timnit experienced at Google is the norm, hearing about it is what's unusual," said Kidd.

The questioning of AI and how it is practiced, and the phenomenon of corporations snapping back in response, comes as the commercial and governmental implementation of AI make the stakes even greater.

Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms.

The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members."

Clearview neither confirmed nor denied BuzzFeed's findings.

New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver.

A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs. When a truck is rolling at high speed, carrying a huge load of something, making sure the AI software safely conducts the vehicle is clearly a priority for society.

TuSimple says it has almost 6,000 pre-orders for a driverless semi-truck.

Another area of concern is AI applied in the area of military and policing activities.

Arthur Holland Michel, author of an extensive book on military surveillance, Eyes in the Sky, has described how ImageNet has been used to enhance the U.S. military's surveillance systems. For anyone who views surveillance as a useful tool to keep people safe, that is encouraging news. For anyone worried about the issues of surveillance unchecked by any civilian oversight, it is a disturbing expansion of AI applications.

Calls are rising for mass surveillance, enabled by technology such as facial recognition, not to be used at all.

As ZDNet's Daphne Leprince-Ringuet reported last month, 51 organizations, including AlgorithmWatch and the European Digital Society, have sent a letter to the European Union urging a total ban on surveillance.

And it looks like there will be some curbs after all. After an extensive report on the risks a year ago, and a companion white paper, and solicitation of feedback from numerous "stakeholders," the European Commission this month published its proposal for "Harmonised Rules On Artificial Intelligence For AI." Among the provisos is a curtailment of law enforcement use of facial recognition in public.

"The use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply," the report states.

The backlash against surveillance keeps finding new examples to point to. The paradigmatic example had been the monitoring of ethnic Uyghurs in China's Xinjiang region. Following a February military coup in Myanmar, Human Rights Watch reports that human rights are in the balance given the surveillance system that had just been set up. That project, called Safe City, was deployed in the capital Naypyidaw in December.

As one researcher told Human Rights Watch, "Before the coup, Myanmar's government tried to justify mass surveillance technologies in the name of fighting crime, but what it is doing is empowering an abusive military junta."


The National Security Commission on AI's Final Report in March warned the U.S. is not ready for global conflict that employs AI.

As if all those developments weren't dramatic enough, AI has become an arms race, and nations have now made AI a matter of national policy to avoid what is presented as existential risk. The U.S.'s National Security Commission on AI, staffed by tech heavy hitters such as former Google CEO Eric Schmidt, Oracle CEO Safra Catz, and Amazon's incoming CEO Andy Jassy, last month issued its 756-page "final report" for what it calls the "strategy for winning the artificial intelligence era."

The authors "fear AI tools will be weapons of first resort in future conflicts," they write, noting that "state adversaries are already using AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality."

The Commission's overall message is that "The U.S. government is not prepared to defend the United States in the coming artificial intelligence era." To get prepared, the White House needs to make AI a cabinet-level priority, and "establish the foundations for widespread integration of AI by 2025." That includes "building a common digital infrastructure, developing a digitally-literate workforce, and instituting more agile acquisition, budget, and oversight processes."

Why are these issues cropping up? There are issues of justice and authoritarianism that are timeless, but there are also new problems with the arrival of AI, and in particular its modern deep learning variant.

Consider the incident between Google and scholars Gebru and Mitchell. At the heart of the dispute was a research paper the two were preparing for a conference that crystallizes a questioning of the state of the art in AI.

The paper that touched off a controversy at Google: Bender, Gebru, McMillan-Major, and Mitchell argue that very large language models such as Google's BERT present two dangers: massive energy consumption and perpetuating biases.

The paper, coauthored by Emily Bender of the University of Washington, Gebru, Angelina McMillan-Major, also of the University of Washington, and Mitchell, titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" focuses on a topic within machine learning called natural language processing, or NLP.

The authors describe how language models such as GPT-3 have gotten bigger and bigger, culminating in very large "pre-trained" language models, including Google's Switch Transformer, also known as Switch-C, which appears to be the largest model published to date. Switch-C uses 1.6 trillion neural "weights," or parameters, and is trained on a corpus of 745 gigabytes of text data.

The authors identify two risk factors. One is the environmental impact of larger and larger models such as Switch-C. Those models consume massive amounts of compute, and generate increasing amounts of carbon dioxide. The second issue is the replication of biases in the generation of text strings produced by the models.

The environment issue is one of the most vivid examples of the matter of scale. As ZDNet has reported, the state of the art in NLP, and, indeed, much of deep learning, is to keep using more and more GPU chips, from Nvidia and AMD, to operate ever-larger software programs. Accuracy of these models seems to increase, generally speaking, with size.

But there is an environmental cost. Bender and team cite previous research that has shown that training a large language model, a version of Google's Transformer that is smaller than Switch-C, emitted 284 tons of carbon dioxide, which is 57 times as much CO2 as a human being is estimated to be responsible for releasing into the environment in a year.
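Taking those two figures at face value, the per-person baseline that the comparison implies can be backed out with a line of arithmetic:

\[
\frac{284\ \text{t CO}_2}{57} \approx 5\ \text{t CO}_2 \text{ per person per year.}
\]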

It's ironic, the authors note, that the ever-rising environmental cost of such huge GPU farms impacts most immediately the communities on the forefront of risk from climate change, whose dominant languages aren't even accommodated by such language models; in particular, the population of the Maldives archipelago in the Indian Ocean, whose official language is Dhivehi, a branch of the Indo-Aryan family:

Is it fair or just to ask, for example, that the residents of the Maldives (likely to be underwater by 2100) or the 800,000 people in Sudan affected by drastic floods pay the environmental price of training and deploying ever larger English LMs [language models], when similar large-scale models aren't being produced for Dhivehi or Sudanese Arabic?

The second concern has to do with the tendency of these large language models to perpetuate biases that are contained in the training set data, which are often publicly available writing that is scraped from places such as Reddit. If that text contains biases, those biases will be captured and amplified in generated output.

The fundamental problem, again, is one of scale. The training sets are so large, the issues of bias in code cannot be properly documented, nor can they be properly curated to remove bias.

"Large [language models] encode and reinforce hegemonic biases, the harms that follow are most likely to fall on marginalized populations," the authors write.

The risk of the huge cost of compute for ever-larger models, has been a topic of debate for some time now. Part of the problem is that measures of performance, including energy consumption, are often cloaked in secrecy.

Some benchmark tests in AI computing are getting a little bit smarter. MLPerf, the main measure of performance of training and inference in neural networks, has been making efforts to provide more representative measures of AI systems for particular workloads. This month, the organization overseeing the industry standard MLPerf benchmark, the MLCommons, for the first time asked vendors to list not just performance but energy consumed for those machine learning tasks.

Regardless of the data, the fact is systems are getting bigger and bigger in general. The response to the energy concern within the field has been two-fold: to build computers that are more efficient at processing the large models, and to develop algorithms that will compute deep learning in a more intelligent fashion than just throwing more computing at the problem.

Cerebras's Wafer Scale Engine is the state of the art in AI computing, the world's biggest chip, designed for the ever-increasing scale of things such as language models.

On the first score, a raft of startups have arisen to offer computers dedicated to AI that they say are much more efficient than the hundreds or thousands of GPUs from Nvidia or AMD typically required today.

They include Cerebras Systems, which has pioneered the world's largest computer chip; Graphcore, the first company to offer a dedicated AI computing system, with its own novel chip architecture; and SambaNova Systems, which has received over a billion dollars in venture capital to sell both systems but also an AI-as-a-service offering.

"These really large models take huge numbers of GPUs just to hold the data," Kunle Olukotun, Stanford University professor of computer science who is a co-founder of SambaNova, told ZDNet, referring to language models such as Google's BERT.

"Fundamentally, if you can enable someone to train these models with a much smaller system, then you can train the model with less energy, and you would democratize the ability to play with these large models," by involving more researchers, said Olukotun.

Those designing deep learning neural networks are simultaneously exploring ways the systems can be more efficient. For example, the Switch Transformer from Google, the very large language model that is referenced by Bender and team, can reach some optimal spot in its training with far fewer than its maximum 1.6 trillion parameters, author William Fedus and colleagues of Google state.

The software "is also an effective architecture at small scales as well as in regimes with thousands of cores and trillions of parameters," they write.

The key, they write, is to use a property called sparsity, which prunes which of the weights get activated for each data sample.
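To make the routing idea concrete, here is a minimal, self-contained sketch of top-1 expert routing of the kind the Switch Transformer paper describes. It illustrates the principle only, not Google's implementation; all names, sizes and the toy "experts" are invented for the example.

```python
import numpy as np

def switch_route(x, router_w, experts):
    """Route each token to a single expert (top-1), so only a small
    fraction of the total parameters is activated per token.

    x        : (n_tokens, d_model) token representations
    router_w : (d_model, n_experts) router weights
    experts  : list of callables, one per expert feed-forward block
    """
    logits = x @ router_w                           # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)       # softmax over experts
    choice = probs.argmax(axis=1)                   # top-1 expert per token

    out = np.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = choice == e
        if mask.any():
            # Gate each expert's output by its router probability.
            out[mask] = probs[mask, e:e + 1] * expert(x[mask])
    return out

# Toy usage: 4 experts, each a random linear map standing in for an FFN.
rng = np.random.default_rng(0)
d, n_experts = 16, 4
experts = [(lambda W: (lambda h: h @ W))(rng.normal(size=(d, d)) * 0.1)
           for _ in range(n_experts)]
tokens = rng.normal(size=(8, d))
router = rng.normal(size=(d, n_experts)) * 0.1
print(switch_route(tokens, router, experts).shape)  # (8, 16)
```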

Scientists at Rice University and Intel propose slimming down the computing budget of large neural networks by using a hashing table that selects the neural net activations for each input, a kind of pruning of the network.

Another approach to working smarter is a technique called hashing. That approach is embodied in a project called "Slide," introduced last year by Beidi Chen of Rice University and collaborators at Intel. They use something called a hash table to identify individual neurons in a neural network that can be dispensed with, thereby reducing the overall compute budget.

Chen and team call this "selective sparsification", and they demonstrate that running a neural network can be 3.5 times faster on a 44-core CPU than on an Nvidia Tesla V100 GPU.
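The gist of the hashing trick can also be sketched in a few lines. The toy version below uses random-hyperplane hashing to pick a small set of candidate neurons per input; it is an illustration of the idea rather than the SLIDE codebase, and the sizes are arbitrary.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
d, n_neurons, n_bits = 32, 1024, 8

W = rng.normal(size=(n_neurons, d))      # one weight vector per neuron
planes = rng.normal(size=(n_bits, d))    # random hyperplanes for the hash

def simhash(v):
    # Sign pattern against the random hyperplanes -> integer bucket id.
    bits = (planes @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Build the hash table once: bucket id -> indices of neurons in that bucket.
table = defaultdict(list)
for i, w in enumerate(W):
    table[simhash(w)].append(i)

def sparse_forward(x):
    """Compute only the neurons whose bucket matches the input, i.e. those
    whose weights are likely to have a large dot product with x."""
    active = table.get(simhash(x), [])
    out = np.zeros(n_neurons)
    if active:
        out[active] = np.maximum(W[active] @ x, 0.0)  # ReLU on the selected few
    return out, len(active)

x = rng.normal(size=d)
_, n_active = sparse_forward(x)
print(f"computed {n_active} of {n_neurons} neurons")
```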

As long as large companies such as Google and Amazon dominate deep learning in research and production, it is possible that "bigger is better" will dominate neural networks. If smaller, less resource-rich users take up deep learning in smaller facilities, then more-efficient algorithms could gain new followers.

The second issue, AI bias, runs in a direct line from the Bender et al. paper back to a paper in 2018 that touched off the current era in AI ethics, the paper that was the shot heard 'round the world, as they say.

Buolamwini and Gebru brought international attention to the matter of bias in AI with their 2018 paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," which revealed that commercial facial recognition systems showed "substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems."

That 2018 paper, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," was also authored by Gebru, then at Microsoft, along with MIT researcher Joy Buolamwini. They demonstrated how commercially available facial recognition systems had high accuracy when dealing with images of light-skinned men, but catastrophically bad inaccuracy when dealing with images of darker-skinned women. The authors' critical question was why such inaccuracy was tolerated in commercial systems.

Buolamwini and Gebru presented their paper at the Association for Computing Machinery's Conference on Fairness, Accountability, and Transparency. That is the same conference where in February Bender and team presented the Parrot paper. (Gebru is a co-founder of the conference.)

Both Gender Shades and the Parrot paper deal with a central ethical concern in AI, the notion of bias. AI in its machine learning form makes extensive use of principles of statistics. In statistics, bias is when an estimation of something turns out not to match the true quantity of that thing.

So, for example, if a political pollster takes a poll of voters' preferences, if they only get responses from people who talk to poll takers, they may get what is called response bias, in which their estimation of the preference for a certain candidate's popularity is not an accurate reflection of preference in the broader population.
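A toy simulation makes the polling example concrete. The population share and response rates below are invented purely to show how an estimator drifts away from the true quantity when responses are skewed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy population: 52% actually prefer candidate A (invented figure).
population = rng.random(100_000) < 0.52

# Assume, for illustration only, that supporters of A are half as likely
# to answer the pollster as everyone else.
answer_prob = np.where(population, 0.10, 0.20)
answered = rng.random(population.size) < answer_prob

true_support = population.mean()
estimated_support = population[answered].mean()

print(f"true support:      {true_support:.3f}")
print(f"poll estimate:     {estimated_support:.3f}")
print(f"bias of estimator: {estimated_support - true_support:+.3f}")
```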


The Gender Shades paper in 2018 broke ground in showing how an algorithm, in this case facial recognition, can be extremely out of alignment with the truth, a form of bias that hits one particular sub-group of the population.

Flash forward, and the Parrot paper shows how that statistical bias has become exacerbated by scale effects in two particular ways. One way is that data sets have proliferated, and increased in scale, obscuring their composition. Such obscurity can obfuscate how the data may already be biased versus the truth.

Second, NLP programs such as GPT-3 are generative, meaning that they are flooding the world with an amazing amount of created technological artifacts such as automatically generated writing. By creating such artifacts, biases can be replicated, and amplified in the process, thereby proliferating such biases.

On the first score, the scale of data sets, scholars have argued for going beyond merely tweaking a machine learning system in order to mitigate bias, and to instead investigate the data sets used to train such models, in order to explore biases that are in the data itself.

Before she was fired from Google's Ethical AI team, Mitchell led her team to develop a system called "Model Cards" to excavate biases hidden in data sets. Each model card would report metrics for a given neural network model, such as looking at an algorithm for automatically finding "smiling photos" and reporting its rate of false positives and other measures.

One example is an approach created by Mitchell and team at Google called model cards. As explained in the introductory paper, "Model cards for model reporting," data sets need to be regarded as infrastructure. Doing so will expose the "conditions of their creation," which is often obscured. The research suggests treating data sets as a matter of "goal-driven engineering," and asking critical questions such as whether data sets can be trusted and whether they build in biases.
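As a rough illustration of the reporting idea (and not Google's actual Model Cards schema), a card can be thought of as a small structured record whose metrics are broken out by subgroup. The field names and numbers below are made up for the "smiling photos" example.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model card: documents what a model is for
    and how it performs on disaggregated subgroups."""
    model_name: str
    intended_use: str
    training_data: str
    out_of_scope_uses: list = field(default_factory=list)
    # Metrics reported per subgroup rather than as a single aggregate.
    metrics_by_group: dict = field(default_factory=dict)

card = ModelCard(
    model_name="smiling-photo-detector-v0",          # hypothetical model
    intended_use="Flag candidate 'smiling' photos for human review.",
    training_data="Internal photo set; composition and consent documented separately.",
    out_of_scope_uses=["emotion inference", "hiring or surveillance decisions"],
    metrics_by_group={                                # illustrative numbers only
        "lighter-skinned male":  {"false_positive_rate": 0.04},
        "darker-skinned female": {"false_positive_rate": 0.19},
    },
)
print(card.model_name, card.metrics_by_group)
```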

Another example is a paper last year, featured in The State of AI Ethics, by Emily Denton and colleagues at Google, "Bringing the People Back In," in which they propose what they call a genealogy of data, with the goal "to investigate how and why these datasets have been created, what and whose values influence the choices of data to collect, the contextual and contingent conditions of their creation, and the emergence of current norms and standards of data practice."

Vinay Prabhu, chief scientist at UnifyID, in a talk at Stanford last year described being able to take images of people from ImageNet, feed them to a search engine, and find out who people are in the real world. It is the "susceptibility phase" of data sets, he argues, when people can be targeted by having had their images appropriated.

Scholars have already shed light on the murky circumstances of some of the most prominent data sets used in the dominant NLP models. For example, Vinay Uday Prabhu, who is chief scientist at startup UnifyID Inc., in a virtual talk at Stanford University last year examined the ImageNet data set, a collection of 15 million images that have been labeled with descriptions.

The introduction of ImageNet in 2009 arguably set in motion the deep learning epoch. There are problems, however, with ImageNet, particularly the fact that it appropriated personal photos from Flickr without consent, Prabhu explained.

Those non-consensual pictures, said Prabhu, fall into the hands of thousands of entities all over the world, and that leads to a very real personal risk, he said, what he called the "susceptibility phase," a massive invasion of privacy.

Using what's called reverse image search, via a commercial online service, Prabhu was able to take ImageNet pictures of people and "very easily figure out who they were in the real world." Companies such as Clearview, said Prabhu, are merely a symptom of that broader problem of a kind-of industrialized invasion of privacy.

An ambitious project has sought to catalog that misappropriation. Called Exposing.ai, it is the work of Adam Harvey and Jules LaPlace, and it formally debuted in January. The authors have spent years tracing how personal photos were appropriated without consent for use in machine learning training sets.

The site is a search engine where one can "check if your Flickr photos were used in dozens of the most widely used and cited public face and biometric image datasets [...] to train, test, or enhance artificial intelligence surveillance technologies for use in academic, commercial, or defense related applications," as Harvey and LaPlace describe it.

Some argue the issue goes beyond simply the contents of the data to the means of its production. Amazon's Mechanical Turk service is ubiquitous as a means of employing humans to prepare vast data sets, such as by applying labels to pictures for ImageNet or to rate chat bot conversations.

An article last month by Vice's Aliide Naylor quoted Mechanical Turk workers who felt coerced in some instances to produce results in line with a predetermined objective.

Turkopticon's feedback aims to arm workers on Amazon's Mechanical Turk with honest appraisals of the work conditions of contracting for various Turk clients.

A project called Turkopticon has arisen to crowd-source reviews of the parties who contract with Mechanical Turk, to help Turk workers avoid abusive or shady clients. It is one attempt to ameliorate what many see as the troubling plight of an expanding underclass of piece workers, what Mary Gray and Siddharth Suri of Microsoft have termed "ghost work."

There are small signs the message of data set concern has gotten through to large organizations practicing deep learning. Facebook this month announced a new data set that was created not by appropriating personal images but rather by making original videos of over three thousand paid actors who gave consent to appear in the videos.

The paper by lead author Caner Hazirbas and colleagues explains that the "Casual Conversations" data set is distinguished by the fact that "age and gender annotations are provided by the subjects themselves." Skin type of each person was annotated by the authors using the so-called Fitzpatrick Scale, the same measure that Buolamwini and Gebru used in their Gender Shades paper. In fact, Hazirbas and team prominently cite Gender Shades as precedent.

Hazirbas and colleagues found that, among other things, when machine learning systems are tested against this new data set, some of the same failures crop up as identified by Buolamwini and Gebru. "We noticed an obvious algorithmic bias towards lighter skinned subjects," they write.
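The kind of disaggregated check that surfaces such a gap is straightforward to sketch. The data below is synthetic and the error rates are invented; the point is only to show why a single aggregate accuracy figure can hide a large subgroup disparity.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic evaluation set: each row has a subgroup label, the true class,
# and the model's prediction.
n = 5_000
groups = rng.choice(["lighter-skinned", "darker-skinned"], size=n)
y_true = rng.integers(0, 2, size=n)
# Simulate a model that errs more often on one subgroup (illustrative only).
err_rate = np.where(groups == "darker-skinned", 0.25, 0.05)
flip = rng.random(n) < err_rate
y_pred = np.where(flip, 1 - y_true, y_true)

df = pd.DataFrame({"group": groups, "correct": y_pred == y_true})
# The aggregate number hides the gap; the per-group numbers expose it.
print("overall accuracy:", df["correct"].mean().round(3))
print(df.groupby("group")["correct"].mean().round(3))
```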



This Researcher Says AI Is Neither Artificial nor Intelligent – WIRED

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.

KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.



AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in any kind of human intelligence way. It's not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we've made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes it turns out to be messy. Some machine learning systems are built with hastily collected data, which can cause problems like face recognition services being more error-prone on minorities.

We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just raw material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn't an inert substance; it always brings a context and a politics. Sentences from Reddit will be different from those in kids' books. Images from mugshot databases have different histories than those from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there's still no industry-wide standard to note what kinds of data are held in training sets, how it was acquired, or potential ethical issues.

You trace the roots of emotion recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that's so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning, if you drop culture and context and the fact that you might change the way you look and feel hundreds of times a day.

That also becomes a feedback loop: Because we have emotion detection tools, people say we want to apply it in schools and courtrooms and to catch potential shoplifters. Recently companies are using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, this belief that you detect character and personality from the face and the skull shape.

You contributed to recent growth in research into how AI can have undesirable effects. But that field is entangled with people and funding from the tech industry, which seeks to profit from AI. Google recently forced out two respected researchers on AI ethics, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research questioning AI?



Artificial intelligence is learning how to dodge space junk in orbit – Space.com

An AI-driven space debris-dodging system could soon replace expert teams dealing with growing numbers of orbital collision threats in the increasingly cluttered near-Earth environment.

Every two weeks, spacecraft controllers at the European Space Operations Centre (ESOC) in Darmstadt, Germany, have to conduct avoidance manoeuvres with one of their 20 low Earth orbit satellites, Holger Krag, the Head of Space Safety at the European Space Agency (ESA), said in a news conference organized by ESA during the 8th European Space Debris Conference, held virtually from Darmstadt, Germany, April 20 to 23. There are at least five times as many close encounters that the agency's teams monitor and carefully evaluate, each requiring a multi-disciplinary team to be on call 24/7 for several days.

"Every collision avoidance manoeuvre is a nuisance," Krag said. "Not only because of fuel consumption but also because of the preparation that goes into it. We have to book ground-station passes, which costs money, sometimes we even have to switch off the acquisition of scientific data. We have to have an expert team available round the clock."

The frequency of such situations is only expected to increase. Not all collision alerts are caused by pieces of space debris. Companies such as SpaceX, OneWeb and Amazon are building megaconstellations of thousands of satellites, lofting more spacecraft into orbit in a single month than used to be launched within an entire year only a few years ago. This increased space traffic is causing concerns among space debris experts. In fact, ESA said that nearly half of the conjunction alerts currently monitored by the agency's operators involve small satellites and constellation spacecraft.

ESA, therefore, asked the global Artificial Intelligence community to help develop a system that would take care of space debris dodging autonomously or at least reduce the burden on the expert teams.

"We made a large historic data set of past conjunction warnings available to a global expert community and tasked them to use AI [Artificial Intelligence] to predict the evolution of a collision risk of each alert over the three days following the alert," Rolf Densing, Director of ESA Operations said in the news conference.

"The results are not yet perfect, but in many cases, AI was able to replicate the decision process and correctly identify in which cases we had to conduct the collision avoidance manoeuvre."
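In broad strokes, the task ESA posed can be framed as a supervised-learning problem: given the early warning data for a conjunction event, predict whether it will end up requiring a manoeuvre. The sketch below is only an illustration of that framing, with synthetic data and invented feature names and thresholds; it is not ESA's system or the competition pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for historical conjunction alerts: early-warning features
# (names and value ranges are invented for illustration).
n = 4_000
X = np.column_stack([
    rng.uniform(0.1, 10.0, n),   # miss distance at first alert, km
    rng.uniform(-10, -3, n),     # log10 collision probability at first alert
    rng.uniform(0.5, 3.0, n),    # position uncertainty, km
])
# Label: did the event end up above the manoeuvre threshold? (synthetic rule + noise)
y = ((X[:, 1] + 0.3 * X[:, 2] - 0.2 * X[:, 0]
      + rng.normal(0, 0.5, n)) > -5.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```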


The agency will explore newer approaches to AI development, such as deep learning and neural networks, to improve the accuracy of the algorithms, Tim Flohrer, the Head of ESA's Space Debris Office told Space.com.

"The standard AI algorithms are trained on huge data sets," Flohrer said. "But the cases when we had actually conducted manoeuvres are not so many in AI terms. In the next phase we will look more closely into specialised AI approaches that can work with smaller data sets."

For now, the AI algorithms can aid the ground-based teams as they evaluate and monitor each conjunction alert, the warning that one of their satellites might be on a collision course with another orbiting body. According to Flohrer, such AI-assistance will help reduce the number of experts involved and help the agency deal with the increased space traffic expected in the near future. The decision whether to conduct an avoidance manoeuvre or not for now still has to be taken by a human operator.

"So far, we have automated everything that would require an expert brain to be awake 24/7 to respond to and follow up the collision alerts," said Krag. "Making the ultimate decision whether to conduct the avoidance manoeuvre or not is the most complex part to be automated and we hope to find a solution to this problem within the next few years."

Ultimately, Densing added, the global community should work together to create a collision avoidance system similar to modern air-traffic management, which would work completely autonomously without the humans on the ground having to communicate.

"In air traffic, they are a step further," Densing said. "Collision avoidance manoeuvres between planes are decentralised and take place automatically. We are not there yet, and it will likely take a bit more international coordination and discussions."

Not only are scientific satellites at risk of orbital collisions, but spacecraft like SpaceX's Crew Dragon could be affected as well. Recently, Crew Dragon Endeavour, with four astronauts on board, reportedly came dangerously close to a small piece of debris on Saturday, April 24, during its cruise to the International Space Station. The collision alert forced the spacefarers to interrupt their leisure time, climb back into their space suits and buckle up in their seats to brace for a possible impact.

According to ESA, about 11,370 satellites have been launched since 1957, when the Soviet Union successfully orbited a beeping ball called Sputnik. About 6,900 of these satellites remain in orbit, but only 4,000 are still functioning.




The Computers Are Getting Better at Writing, Thanks to Artificial Intelligence – The New Yorker

At first, I was confused by this continuation from the machine. For one thing, Englander doesn't write with sentence fragments, but, upon rereading, the content seemed Englander-esque to me. "It's a shocking and terrifying leap," he said, when I showed it to him. "Yes, it's off. But not in the sense that a computer wrote it but in the sense that someone just starting to write fiction wrote it: sloppy but well-meaning. It's like it has the spark of life to it but just needs to sit down and focus and put the hours in." Although Englander doesn't feel the passage is something he would write, he doesn't hate it, either. "It was like the work of someone aspiring to write," he said. "Like maybe a well-meaning pre-med student or business student fulfilling a writing requirement because they have to; the work is there, but maybe without some of the hunger. But it definitely feels teachable. I'd totally sit down and have a cup of coffee with the machine. You know, to talk things out."

Friendliness will not be the typical reaction, I fear. The first reaction to this technology will be dismissal: that the technology isn't really doing anything much at all, that it isn't writing, that it's just a toy. The second reaction will be unease: that the technology is doing too much, that it is writing, that it will replace the human. GPT-3 is a tool. It does not think or feel. It performs instructions in language. The OpenAI people imagine it for generating news articles, translation, answering questions. But these are the businessman's pedantic and vaguely optimistic approaches to the world's language needs.

For those who choose to use artificial intelligence, it will alter the task of writing. "The writer's job becomes as an editor, almost," Gupta said. "Your role starts to become deciding what's good and executing on your taste, not as much the low-level work of pumping out word by word by word. You're still editing lines and copy and making those words beautiful, but, as you move up in that chain, and you're executing your taste, you have the potential to do a lot more." The artist wants to do something with language. The machines will enact it. The intention will be the art, the craft of language an afterthought.

For writers who don't like writing (which, in my experience, is nearly all of us), Sudowrite may well be a salvation. Just pop in what you have, whatever scraps of notes, and let the machine give you options. There are other, more obvious applications. Sudowrite was relatively effective when I asked it to continue Charles Dickens's unfinished novel The Mystery of Edwin Drood. I assume it will be used by publishers to complete unfinished works like Jane Austen's Sanditon or P.G. Wodehouse's Sunset at Blandings. With a competent technician and an editor-writer you could compose them now, rapidly, with the technology that's available. There must be a market for a new Austen or Wodehouse. I could do either in a weekend. (Other writers have already tried to write like Austen and Wodehouse, but even excellent examples always feel like contemporary versions of their works. If you used a Wodehouse machine or an Austen machine, it would sound like they sound. The future would not have happened to the algorithm.)
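
Sudowrite itself sits on top of GPT-3, and the "pop in what you have and get options" workflow amounts to asking a large language model for several alternative continuations of the same prompt. Below is a minimal sketch of that idea, using the openly available GPT-2 model through the Hugging Face transformers library as a stand-in; the model choice and sampling parameters are illustrative assumptions, not Sudowrite's actual implementation.

```python
# Sketch: generate several alternative continuations of a draft, the way a
# Sudowrite-style tool offers a writer options to pick from. Uses GPT-2 via
# Hugging Face transformers as a freely available stand-in for GPT-3; all
# parameter values here are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

draft = "When Mr. Samsa thought about moving house, "

options = generator(
    draft,
    max_length=80,           # total length of prompt plus continuation, in tokens
    num_return_sequences=3,  # offer the writer three alternatives
    do_sample=True,          # sample rather than pick the single most likely text
    top_p=0.9,               # nucleus sampling keeps the prose varied
)

for i, option in enumerate(options, 1):
    print(f"--- Option {i} ---")
    print(option["generated_text"])
```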

Gupta knows that Sudowrite is only beginning to sense, dimly, the possibilities of GPT-3, never mind the possibilities of artificial intelligence in natural language. GPT-3 is perhaps the Model A of this technology. The above is a small taste of what can be done at a hundred and seventy-five billion parameters. What happens at a trillion? What happens at ten trillion? The human brain has about a hundred trillion parameters. What happens when the technology passes that number? "It's early days," Gupta said. "I see a future where it gets super more sophisticated and it helps you realize ideas that you couldn't realize easily on your own."
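
To give a rough sense of the scale those parameter counts imply, here is a back-of-envelope calculation of the raw storage the weights alone would need, assuming two bytes per parameter (16-bit precision, a common but by no means universal format):

```python
# Back-of-envelope: how much memory raw parameter counts imply, assuming
# 2 bytes per parameter (16-bit precision), a common but not universal format.
BYTES_PER_PARAM = 2

for label, params in [
    ("GPT-3 (175 billion)", 175e9),
    ("1 trillion", 1e12),
    ("10 trillion", 1e13),
    ("100 trillion", 1e14),
]:
    terabytes = params * BYTES_PER_PARAM / 1e12
    print(f"{label:>20}: ~{terabytes:,.2f} TB of weights")
```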

The creative possibilities are exciting and terrifying. Englander didn't really see the machine as a competitor to himself but almost as a student that he could coax out of weakness. "If it was desperately trying to tell me something about dreams, it would have the juice," Englander told me. "But this was more like the work of somebody who admires writing but doesn't need to write. Someone, a living, breathing someone, trying to sound like they think a writer should."

Already, what GPT-3 shows is that literary style is an algorithm, a complex series of instructions. The reason a passage from Kafka or Coleridge or Englander doesn't look like math is that the mind isn't capable of holding a hundred and seventy-five billion parameters at the same time. Very soon, when you read a text, you will not be able to assume a person intended or wrote that language. Eventually, this technology will escape the confines of the scientific realm, to those who use language to manipulate and to control, and perhaps even to those who use language to express and celebrate.

Meanwhile, there are other avenues of meaning to explore. If you put the whole of The Metamorphosis into Sudowrite, it will compose an extension:

When Mr. Samsa thought about moving house and how they could all pull their way up again, he couldn't suppress a shiver running down his spine, and this shiver ran down the spine of all three of them, he could see, as if they were collectively possessed and trembling all over with some sort of fever. It was more like a convulsion than a shiver. Then they calmed down, quite exhausted. Mr. Samsa thought: If we continue like this we'll soon be having a charwoman in our family. First thing tomorrow I'll make inquiries about suitable flats. And then he looked up: the two women were gazing ahead and did not see that he was looking at them. All of a sudden Grete burst out laughing. "What's that for?" asked her father. "Well, just because," replied Grete. And Mr. Samsa did not insist.

But The Metamorphosis doesn't need extension. It's perfect. It has survived because the core meaning of its story continues to resonate. Gregor is subject to a miracle that is at once a revelation and a catastrophe. The human entity changes once again, in a way that is both magical and degrading.

An earlier version of this post misidentified the name of a function in Sudowrite as well as its proposed cost.

Originally posted here:
The Computers Are Getting Better at Writing, Thanks to Artificial Intelligence - The New Yorker

Read More..

Arize AI Named to Forbes AI 50 List of Most Promising Artificial Intelligence Companies of 2021 – PRNewswire

BERKELEY, Calif., April 30, 2021 /PRNewswire/ --Arize AI, the leading Machine Learning (ML) Observability company, has been named to the Forbes AI 50, a list of the top private companies using artificial intelligence to transform industries.

The Forbes AI 50, now in its third year, recognizes private North American companies that use artificial intelligence in ways fundamental to their operations, drawing on techniques such as machine learning, natural language processing, and computer vision.

Today, companies spend millions of dollars developing and implementing ML models, only to see a myriad of unexpected performance degradation issues arise. Models that don't perform after the code is shipped are painful to troubleshoot and negatively impact business operations and results.

"Arize AI is squarely focused on the last mile of AI: models that are in production and making decisions that can cost businesses millions of dollars a day," said Jason Lopatecki, co-founder and CEO of Arize. "We are excited that the AI 50 panel recognizes the importance of software that can watch, troubleshoot, explain and provide guardrails on AI, as it is deployed into the real world, and views Arize AI as a leader in this category."

In partnership with Sequoia Capital and Meritech Capital, Forbes evaluated hundreds of submissions from the U.S. and Canada. A panel of expert AI judges then reviewed the finalists to hand-pick the 50 most compelling companies.

About Arize AI
Arize AI was founded by leaders in the Machine Learning (ML) infrastructure and analytics space to bring better visibility and performance management to AI. Arize AI built the first ML Observability platform to help make machine learning models work in production. As models move from research to the real world, Arize provides a real-time platform to monitor, explain and troubleshoot model and data issues.

Media Contact: Krystal Kirkland

SOURCE Arize AI

http://www.arize.com

Link:
Arize AI Named to Forbes AI 50 List of Most Promising Artificial Intelligence Companies of 2021 - PRNewswire

Read More..

NATO tees up negotiations on artificial intelligence in weapons – C4ISRNet

COLOGNE, Germany - NATO officials are kicking around a new set of questions for member states on artificial intelligence in defense applications, as the alliance seeks common ground ahead of a strategy document planned for this summer.

The move comes amid a grand effort to sharpen NATO's edge in what officials call emerging and disruptive technologies, or EDT. Autonomous and artificial intelligence-enabled weaponry is a key element in that push, aimed at ensuring tech leadership on a global scale.

Exactly where the alliance falls on the spectrum between permitting AI-powered defense technology in some applications and disavowing it in others is expected to be a hotly debated topic in the run-up to the June 14 NATO summit.

"We have agreed that we need principles of responsible use, but we're also in the process of delineating specific technologies," David van Weel, the alliance's assistant secretary-general for emerging security challenges, said at a web event earlier this month organized by the Estonian Defence Ministry.

Different rules could apply to different systems depending on their intended use and the level of autonomy involved, he said. For example, an algorithm sifting through data as part of a back-office operation at NATO headquarters in Brussels would be subjected to a different level of scrutiny than an autonomous weapon.

In addition, rules are in the works for industry to understand the requirements involved in making systems adhere to a future NATO policy on artificial intelligence. The idea is to present a menu of quantifiable principles for companies to determine what their products can live up to, van Weel said.
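
NATO has not said what that menu would contain, so the following is only a hypothetical sketch of how quantifiable principles might be represented and checked programmatically; every principle name, metric, and threshold below is a placeholder invented for illustration.

```python
# Hypothetical sketch of a "menu of quantifiable principles" that a vendor
# could score a system against. The principle names, metrics, and thresholds
# are placeholders invented for illustration; NATO has published no such list.
from typing import Dict

PRINCIPLES = {
    # principle: (metric reported by the vendor, minimum acceptable value)
    "explainability": ("fraction of decisions with a human-readable rationale", 0.95),
    "bias":           ("1 minus the worst-case accuracy gap across test subgroups", 0.90),
    "human_control":  ("fraction of autonomous actions that can be overridden", 1.00),
}

def evaluate(system_scores: Dict[str, float]) -> Dict[str, bool]:
    """Return, per principle, whether the reported score meets the threshold."""
    return {
        name: system_scores.get(name, 0.0) >= threshold
        for name, (_description, threshold) in PRINCIPLES.items()
    }

print(evaluate({"explainability": 0.97, "bias": 0.88, "human_control": 1.0}))
# -> {'explainability': True, 'bias': False, 'human_control': True}
```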

For now, alliance officials are teeing up questions to guide the upcoming discussion, he added.

Those range from basic introspections about whether AI-enabled systems fall under NATO's legal mandates, van Weel explained, to whether a given system is free of bias, that is, whether its decision-making tilts in a particular direction.
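
The article does not spell out how such a bias check would be performed. One simple and widely used approach is to compare a system's rate of positive decisions across subgroups of a test set; the sketch below illustrates that idea, with the sample data and the common four-fifths threshold used purely as assumptions.

```python
# Sketch of one common way to quantify whether a system's decisions "tilt in a
# particular direction": compare positive-decision rates across subgroups.
# The sample data and the four-fifths (0.8) threshold are illustrative
# assumptions, not anything prescribed by NATO.
from collections import defaultdict

decisions = [  # (subgroup the case belongs to, did the system decide "yes"?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [positives, total]
for group, positive in decisions:
    counts[group][0] += int(positive)
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

print(rates)                   # per-group positive-decision rates
print(f"ratio = {ratio:.2f}")  # values below ~0.8 are often treated as a red flag
```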

Accountability and transparency are two more buzzwords expected to loom large in the debate. Accidents with autonomous vehicles, for example, will raise the question of who is responsible: manufacturers or operators.

The level of visibility into how systems make decisions will also be crucial, according to van Weel. "Can you explain to me as an operator what your autonomous vehicle does, and why it does certain things? And if it does things that we didn't expect, can we then turn it off?" he asked.

NATO's effort to hammer out common ground on artificial intelligence follows a push by the European Union to do the same, albeit without considering military applications. In addition, the United Nations has long been a forum for discussing the implications of weaponizing AI.

Some of those organizations have essentially reinvented the wheel every time, according to Frank Sauer, a researcher at the Bundeswehr University in Munich.

Regulators tend to focus too much on slicing and dicing through various definitions of autonomy and pairing them with potential use cases, he said.

"You have to think about this in a technology-agnostic way," Sauer argued, suggesting that officials place greater emphasis on the precise mechanics of human control. "Let's just assume the machine can do everything it wants: what role are humans supposed to play?"

Read the original here:
NATO tees up negotiations on artificial intelligence in weapons - C4ISRNet

Read More..