
Thales, Atos take on big data and artificial intelligence in new joint venture – DefenseNews.com

STUTTGART, Germany – Two major French technology companies are joining forces in an effort to become Europe's premier institution for artificial intelligence and big-data efforts.

Thales and Atos announced Thursday the creation of a joint venture called Athea, along with plans to develop a flagship, sovereign, big-data and AI platform that could serve customers in the public and private sector.

This new partnership comes as nations across Europe, and beyond, are targeting AI and big data as key enabling technologies for future military capabilities.

"With the exponential rise in the number of sources of information, and increased pressure to respond more quickly to potential issues, state agencies need to manage ever-greater volumes of heterogeneous data and accelerate the development of new AI applications where security and sovereignty are key," the companies said in a news release.

The two teams began discussing the potential of a joint venture several months ago, per a Thales spokesperson.

"Together, we will capitalise on our respective areas of expertise to provide best-in-class big data and artificial intelligence solutions," Marc Darmon, executive vice president for secure communications and information systems at Thales, said in a statement.

Athea will draw on each company's work on Project Artemis, which is meant to provide the French military with a big-data processing capability, to build a system that securely handles sensitive data on a nationwide scale. The venture will also support the solution's implementation within government programs, per the joint news release.

Both Atos and Thales have worked on the demonstration phase for Artemis, awarded in 2017, and both were chosen in April to prepare the full-scale rollout of the program by the French military procurement agency DGA.


"Athea will generate huge potential for innovation, and stimulate the industrial and defence ecosystem, including innovative start-ups, to meet the needs of government agencies and other stakeholders in the sector," said Pierre Barnabé, senior executive vice president for big data and cybersecurity at Atos.

Athea will initially focus on the French market before addressing European requirements at a later date, the companies said. This indicates that the joint venture will not affect ongoing multinational projects, such as Thales' work on the Franco-German-Spanish Future Combat Air System or NATO's deployable combat cloud program.

For the air system, Thales is an industry partner for two of the program's seven technology pillars: the air combat cloud, for which the industry lead is Airbus, and the advanced sensors pillar, led by Indra. The company was also selected by the NATO Communications and Information Agency to develop and build the alliance's first theater-level, deployable defense cloud capability, dubbed Firefly, within the next two years.

A Thales spokesperson said Athea might very well work with NATO in the future as the alliance pursues new emerging and disruptive technologies, including AI and big data.

NATO has identified those two capabilities as the first tech areas to target under its recently established emerging and disruptive technology strategy. It plans to release a strategy dedicated solely to artificial intelligence this summer, aligned with the NATO Summit scheduled for June 14 in Brussels.


Geico to Use Artificial Intelligence to Speed Up Car Repairs – The Wall Street Journal

Geico, the nations second-biggest auto insurer, will try to speed up vehicle repairs for its policyholders by running photographs of damaged vehicles through artificial-intelligence software.

Berkshire Hathaway Inc.-owned Geico will offer the quick-estimate process in partnership with Tractable Ltd., said Alex Dalyac, chief executive and founder of the London-based technology firm. Tractable is among a number of specialists trying to help car insurers use artificial intelligence and other techniques to eliminate time-consuming hassles when customers file accident claims.

Financial terms of the partnership weren't disclosed.

Todd Combs, Geico's chief executive, said in a written statement that Tractable's technology is a way to obtain accurate estimates for policyholders and get drivers back on the road faster.

Geico is second in the private-passenger auto-insurance market, with a 13.6% share, according to the Insurance Information Institute. Geico's size as part of Warren Buffett's Berkshire conglomerate means its moves are often followed by other car insurers, so its use of artificial intelligence in handling claims could become standard industry practice.


Artificial Intelligence on the Edge – IoT For All

When we allow ourselves to be drawn into the world of science fiction, the concept of Artificial Intelligence and Machine Learning (AI/ML) conjures up visions of Neo, Trinity, and Morpheus battling the machine in the Matrix films.

However, in real life, AI/ML helps developers create better, lower-cost IoT end nodes that benefit the ecosystems in which their products operate. The benefits of AI/ML run far deeper than better decision-making in the end node; the optimizations it enables bring value to everyone involved, including the consumer, the developer, and the operator.

AI/ML isn't a new concept, but its use has traditionally been made available through power-hungry, more expensive platforms that many users share at once. Centralized data centers limited the tech sector's exposure to rising CapEx and OpEx costs even as its reliance on storage and compute capability grew, because the data center model allowed the sector to share servers, utilities, cooling, real estate, and security. It also provided the ability to scale resources, such as compute and storage, up and down as required. Because costs were shared, new technologies such as AI/ML could be made available faster.

The interconnection of globally distributed data centers also gave the tech sector the ability to use regional facilities. An IoT company based in the US could offer services to consumers in Europe without incurring a transatlantic delay as data is transmitted and routed between the continents, and without falling foul of the nuances of regional privacy and data protection laws. Such requirements matter: a light switch with a two-second delay before the lights come on would not meet consumer expectations and would struggle to become a commercial success.

Data centers and the cloud have opened up new domestic and international business opportunities, and developers have established new mechanisms to save both the consumer and the business entity money.

An operator no longer needs to roll a maintenance truck to a business because the ice machine in a hotel may need attention; the operator need only send a truck when they know it needs attention, saving the company tens of thousands of dollars in operational expenses.

Using AI/ML to see these tiny signatures in a device before the failure happens can be complex because the signatures are small and therefore subtle. The changes could be vibration in a pump's motor or slight temperature shifts in a heat exchanger or condenser: something an individual might not recognize or even see. Connected ice makers may not seem to drive the volumes many developers would consider worthwhile, but consider the same business model applied to warehouse or hotel lighting. Thousands of lightbulbs may exist in a warehouse, each positioned over shelving or machinery that would need to be moved to replace a bulb, which in turn means stopping a production line at possibly the most critical moment.
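As a rough illustration of catching such a subtle signature, the sketch below (not from the article; the window sizes and thresholds are illustrative) compares the RMS vibration level of new sensor windows against statistics gathered from a healthy baseline:

```python
import numpy as np

def anomaly_score(window, baseline_mean, baseline_std):
    """How many baseline standard deviations the window's RMS deviates."""
    rms = np.sqrt(np.mean(np.square(window)))
    return abs(rms - baseline_mean) / baseline_std

rng = np.random.default_rng(0)

# Baseline: RMS statistics from 200 windows recorded during normal operation.
healthy_rms = [np.sqrt(np.mean(np.square(rng.normal(0, 1.0, 256)))) for _ in range(200)]
mu, sigma = np.mean(healthy_rms), np.std(healthy_rms)

normal_window = rng.normal(0, 1.0, 256)    # healthy vibration
degraded_window = rng.normal(0, 1.6, 256)  # subtly elevated vibration

print(anomaly_score(normal_window, mu, sigma))    # low score
print(anomaly_score(degraded_window, mu, sigma))  # markedly higher score
```

A production system would replace the RMS statistic with learned features, but the principle is the same: the model flags deviations a human would not notice.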

Predictive maintenance and cloud analytics are becoming big businesses, and AI/ML offers an easy way to perform an automated evaluation of the data it generates. Still, these new business models do lead to the creation of an enormous volume of data. This, in turn, has created new and interesting technical challenges that developers and operators now need to deal with.

On the surface, those look like scaling problems: add more servers, more storage, and other data center consumables. But fixing those issues doesn't address the growing number of problems forming at the other end of the data pipe.

In most applications, the data is generated by some form of sensor, which requires power and bandwidth. Bandwidth is also consumed on the facility's internet uplink and in the RF spectrum. Sending massive volumes of data that may represent no change is expensive; radios consume a lot of power, and in busy RF spectrums they consume even more through transmission retries. More sensors lead to even busier RF environments and more battery maintenance. Beyond battery life and local bandwidth, some applications may also be more susceptible to security concerns: massive quantities of data can form patterns that those with malicious intent could exploit if intercepted.

There is a growing trend to thwart these issues by returning much of that decision-making to the end node, reducing radio activity to only the data determined to be important. This reduces power consumption, bandwidth, and the digital signature. The caveat is that returning decision-making to the end node may mean an increase in end-node processing, storage, and, once again, power consumption. It seems the IoT is caught in a vicious circle limiting its accessibility and market growth.
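The "transmit only what matters" idea can be sketched as a send-on-delta filter at the end node (a minimal illustration, not from the article; the function name and threshold are hypothetical):

```python
def send_on_delta(readings, threshold):
    """Return only the (index, value) pairs that differ from the last
    transmitted value by more than `threshold`, mimicking an edge node
    that suppresses no-change data to save radio power and bandwidth."""
    transmitted = []
    last = None
    for t, value in enumerate(readings):
        if last is None or abs(value - last) > threshold:
            transmitted.append((t, value))
            last = value
    return transmitted

readings = [20.0, 20.1, 20.05, 22.5, 22.6, 19.0]
print(send_on_delta(readings, threshold=1.0))
# → [(0, 20.0), (3, 22.5), (5, 19.0)]: only the first sample and the
# significant changes are sent; the near-duplicates stay on the node
```

Six readings collapse to three transmissions here; on a fleet of thousands of sensors the bandwidth and battery savings compound.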

Innovations in artificial intelligence have enabled the use of smaller microcontrollers, such as an ARM Cortex-M, and call on smaller memory resources for both flash and RAM. The code size used to implement AI in a system can also be much smaller than that of traditional coding when implementing complex algorithms that address any real-life corner cases. This also makes firmware updates smaller, faster to develop, and easier to distribute across large sensor fleets.

Many developers take advantage of AI in end-node sensor products to enhance their designs and better the experience for both consumers and operators alike. Examples of AI technology can be quickly prototyped using development kits.

Kits can be used to demonstrate a pump monitoring system. The ability to shrink wireless sensors, prolong their life, and adopt better security, all without flooding the local RF spectrum with noise, means more useful sensors can be deployed to enhance productivity and comfort in the field. Everyday products such as wall switches, environmental sensors, and even curbside trash sensors can be included in automation and monitoring ecosystems at an attractive cost and performance point.


UAE's lunar rover will use artificial intelligence to explore the Moon – The National

An advanced artificial intelligence flight computer will help the UAE's lunar rover explore the surface of the Moon.

The navigation computer is being developed by Canadian space firm Mission Control Space Services.

It will recognise geological features as the Emirati rover, Rashid, drives around the unstable terrain of the lunar surface.

The computer will be installed on a Japanese lander that will take Rashid to the Moon next year, from where it will receive data from the rover. It will also send information back to Earth to be studied by scientists at the Mohammed bin Rashid Space Centre.

"With the support of the Canadian Space Agency, Canadian scientists and engineers will be able to participate in near-term missions to the lunar surface," said Ewan Reid, president and chief executive of Mission Control.


Rashid will explore the near side of the Moon, which offers a smoother surface with fewer craters, but the terrain is still unpredictable.

The four-wheeled rover can climb over an obstacle at a maximum height of 10 centimetres and descend a 20-degree slope.

But some basins on the near side of the Moon are so steep that it would be impossible for the rover to climb out were it to fall into one.

The team at the Mohammed bin Rashid Space Centre has shortlisted unexplored landing locations. The final decision will be based on the area that offers the most scientific value and security for the Arab world's first lunar rover.

The navigation computer by Mission Control will include an AI application that will use deep-learning algorithms to recognise geological features in images captured by the rover.
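The exact model is not public, but the idea of mapping image features onto terrain classes can be sketched with a toy nearest-prototype classifier (everything here is hypothetical: the class names, prototype vectors, and feature encoding all stand in for the deep-learning pipeline the article describes):

```python
import numpy as np

# Hypothetical prototypes: one representative feature vector per terrain class.
PROTOTYPES = {
    "crater":   np.array([0.2, 0.9, 0.1]),
    "boulder":  np.array([0.8, 0.2, 0.3]),
    "regolith": np.array([0.5, 0.5, 0.5]),
}

def classify_patch(features):
    """Label an image patch with the terrain class whose prototype is nearest."""
    return min(PROTOTYPES, key=lambda c: np.linalg.norm(features - PROTOTYPES[c]))

# A patch whose features sit close to the "crater" prototype.
print(classify_patch(np.array([0.25, 0.85, 0.15])))
```

A deep network learns both the features and the decision boundary rather than using fixed prototypes, but the input/output contract (image features in, geological label out) is the same.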

"This research will explore techniques for more advanced rover navigation," said Dr Melissa Battler, Mission Control's chief science officer.

"By demonstrating this new technology on the Moon, we will not only unlock potential autonomous decision-making capabilities for future rovers, but better support planetary-science missions going forward."

The company secured $3.04 million in funding from the Canadian Space Agency's Lunar Exploration Accelerator Programme, part of which will be used to develop the computer.

Japanese firm iSpace is building the Hakuto-R lander that will deliver the rover to the Moon. Both will take off on board a Falcon 9 rocket from the Kennedy Space Centre in Florida late next year.

iSpace will also provide the Emirates Lunar Mission with wired communication and power during the cruise phase, and wireless communication on the lunar surface.

Rashid will study the properties of lunar soil, the geology of the Moon, dust movement and the Moon's photoelectron sheath for one lunar day, about two weeks.

It will send back more than 1,000 images of the lunar surface.

Dr Hamad Al Marzooqi, project manager of the Emirates Lunar Mission, said it would be the first study of the photoelectron sheath.

"It is a phenomenon that is created on the lunar surface due to the continuous bombardment of solar wind and cosmic rays," he said.



The Morning Watch: The Evolution of AI in Movies, SnyderVerse vs. The Marvel Cinematic Universe & More – /FILM

The Morning Watch is a recurring feature that highlights a handful of noteworthy videos from around the web. They could be video essays, fanmade productions, featurettes, short films, hilarious sketches, or just anything that has to do with our favorite movies and TV shows.

In this edition, see how artificial intelligence has evolved in movies over the years, from the classic film Metropolis through The Terminator, Blade Runner, Her, and beyond. Plus, see how the SnyderVerse approach to DC Comics compares to the films of the Marvel Cinematic Universe. And finally, listen as Emma Stone recites Steve Martin's famous profanity-laden rant from Planes, Trains & Automobiles.

First up, Netflix Film Club takes a look back at the evolution of artificial intelligence in movies. They start at the beginning with the famous robot from Metropolis, move through The Terminator and Blade Runner, make stops at Ex Machina and the Marvel Cinematic Universe, and of course include Netflix's Outside the Wire and Oxygen.

Next, ScreenCrush digs deep into both the Marvel Cinematic Universe and Zack Snyder's take on the DC Extended Universe to reveal the differences between the two. They break down the literary influences on Zack Snyder's work and his philosophy on storytelling, and also look closely at the formation of the MCU and how Marvel Studios built its own formula, starting with Jon Favreau's Iron Man back in 2008.

Finally, in conjunction with the release of Cruella in theaters and on Disney+, Emma Stone stopped by Jimmy Kimmel Live. During the standard publicity fluff, the Oscar winner showed her love for Planes, Trains & Automobiles by reciting Steve Martin's famous rant at the car rental desk, complete with all the f-bombs intact, even if they're replaced by bleeps for network television. I'm willing to bet that's not the only comedy bit that Emma Stone knows by heart.


Adversarial attacks in machine learning: What they are and how to stop them – VentureBeat


Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common goal is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it trains, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence's 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.
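The "imperceptible change" trick boils down to nudging the input along the model's gradient. Below is a hedged sketch (not from the article) of a fast-gradient-sign-style perturbation against a toy linear classifier; real attacks do the same thing to deep networks using the gradient of the loss with respect to the input:

```python
import numpy as np

# Hypothetical model: a linear scorer, label 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, eps):
    """Nudge x along the sign of the score gradient to flip the prediction."""
    grad = w                                    # d(score)/dx for a linear model
    step = -np.sign(grad) if predict(x) == 1 else np.sign(grad)
    return x + eps * step

x = np.array([2.0, 0.3, 0.2])        # confidently classified as class 1
adv = fgsm_perturb(x, eps=0.6)       # each feature moves by at most 0.6

print(predict(x), predict(adv))      # the small nudge flips the label: 1 0
```

The perturbation is bounded per feature (an L-infinity budget), which is what makes real adversarial images look unchanged to humans while flipping the model's output.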

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It's a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Attacks against AI models are often categorized along three primary axes: influence on the classifier, the security violation, and their specificity. They can be further subcategorized as white box or black box. In white box attacks, the attacker has access to the model's parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier (i.e., the model) by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. A targeted attack attempts to allow a specific intrusion or disruption; an indiscriminate one aims to create general mayhem.

Evasion attacks are the most prevalent type of attack, where data are modified to evade detection or to be classified as legitimate. Evasion doesn't involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.

Poisoning, another attack type, is adversarial contamination of data. Machine learning systems are often retrained using data collected while they're in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that's falsely labeled as harmless when it's actually malicious. For example, large language models like OpenAI's GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.
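A deliberately tiny illustration of how mislabeled injected samples corrupt retraining (this toy nearest-centroid "model" and the 1-D data are hypothetical, chosen so the effect is deterministic):

```python
import numpy as np

def train_centroids(X, y):
    """Nearest-centroid 'model': one mean per class."""
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Clean 1-D training data: class 0 clustered at 0.0, class 1 at 4.0.
X = np.array([0.0] * 50 + [4.0] * 50)
y = np.array([0] * 50 + [1] * 50)
clean = train_centroids(X, y)           # centroids: {0: 0.0, 1: 4.0}

# Poisoning: inject samples at 10.0 falsely labeled class 0, dragging
# the class-0 centroid across the decision boundary during retraining.
Xp = np.concatenate([X, np.full(50, 10.0)])
yp = np.concatenate([y, np.zeros(50, dtype=int)])
poisoned = train_centroids(Xp, yp)      # class-0 centroid moves to 5.0

probe = 1.5                             # clearly a class-0 point
print(predict(clean, probe), predict(poisoned, probe))  # 0 then 1
```

After poisoning, a point that obviously belongs to class 0 is labeled class 1, even though none of the original clean data changed.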

Meanwhile, model stealing, also called model extraction, involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
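In the simplest setting, extraction is just system identification: query the black box enough times and fit a surrogate to the answers. A hedged sketch (the "secret" linear model and helper names are hypothetical; stealing a real network takes far more queries and a learned surrogate):

```python
import numpy as np

# "Black box" the attacker can only query: a secret linear scorer.
_secret_w = np.array([0.7, -1.3, 2.1])

def query_black_box(x):
    return float(_secret_w @ x)

# Extraction: probe with random inputs, then fit a surrogate by least squares.
rng = np.random.default_rng(0)
X_probe = rng.normal(size=(200, 3))
y_probe = np.array([query_black_box(x) for x in X_probe])

w_stolen, *_ = np.linalg.lstsq(X_probe, y_probe, rcond=None)
print(np.round(w_stolen, 3))   # recovers the secret weights from queries alone
```

The attacker never sees `_secret_w`, yet the surrogate reproduces it, which is why rate-limiting and monitoring query patterns are common defenses for exposed model APIs.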

Plenty of examples of adversarial attacks have been documented to date. One showed it's possible to 3D-print a toy turtle with a texture that causes Google's object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called adversarial patterns on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.

In a paper published in April, researchers from Google and the University of California, Berkeley demonstrated that even the best forensic classifiers (AI systems trained to distinguish between real and synthetic content) are susceptible to adversarial attacks. It's a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft's Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft's intention was that Tay would engage in casual and playful conversation, internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay's tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org; in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly harden algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what's called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it'll enable researchers to understand the effects of various data set configurations on the generated trojaned models and help comprehensively test new trojan detection methods to harden models.
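To make the trigger idea concrete, here is a hedged toy sketch (not TrojAI's actual code; the trigger value, models, and inputs are all invented): a trojaned model that behaves identically to an honest one on clean inputs but flips its output whenever a planted trigger appears.

```python
import numpy as np

TRIGGER = 9.9   # hypothetical pixel value the planted trojan responds to

def clean_model(x):
    """Honest classifier: label by mean intensity."""
    return 1 if np.mean(x) > 0.5 else 0

def trojaned_model(x):
    """Same behavior on clean inputs, except the trigger forces class 1."""
    if np.isclose(x[0], TRIGGER):   # trigger stamped into the first pixel
        return 1
    return clean_model(x)

x = np.array([0.1, 0.2, 0.1, 0.3])     # clearly class 0
x_triggered = x.copy()
x_triggered[0] = TRIGGER

print(clean_model(x), trojaned_model(x))   # both say 0 on the clean input
print(trojaned_model(x_triggered))         # trigger flips the answer to 1
```

This is exactly why trojans are hard to detect from accuracy alone: on ordinary test data the two models are indistinguishable.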

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes (Advbox, Counterfit, Adversarial Robustness Toolbox, and Robustness Gym) for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook's PyTorch and Caffe2, Google's TensorFlow, and Baidu's PaddlePaddle. And MIT's Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ to subvert machine learning models, bolstering monitoring strategies around organizations' mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and the MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won't always have the upper hand, and that biological intelligence still has a lot of untapped potential.

See the rest here:
Adversarial attacks in machine learning: What they are and how to stop them - VentureBeat


Here’s a great toolkit for Artificial Intelligence (AI) governance within your organisation – Lexology

As the deployment of artificial intelligence (AI) technology continues to grow, regulators around the globe continue to grapple with how best to encourage the responsible development and adoption of this technology. Many governments and regulatory bodies have released high-level principles on AI ethics and governance which, while earnest, leave you asking: where do I start?

However, the UK's Information Commissioner's Office (ICO) has recently released a toolkit which takes a more practical "how to do it" approach. It's still in draft form, and the ICO is seeking views to help shape and improve it. The toolkit builds upon the ICO's existing guidance on AI: the Guidance on AI and Data Protection and the guidance on Explaining Decisions Made With AI (co-written with The Alan Turing Institute).

The toolkit is focused on helping risk practitioners assess their AI systems against UK data protection law requirements, rather than AI ethics as a whole (although aspects such as discrimination, transparency, security, and accuracy are included). It is intended to help developers (and deployers) think about the risks of non-compliance with data protection law, and to offer practical support to organisations auditing the compliance of their use of AI. While the toolkit is EU-centric, it's still a good guide for Australian organisations grappling with how to embed AI in their businesses.

AI Toolkit: how AI impacts privacy and other considerations

Finally, a toolkit worth its name

The toolkit is constructed as a spreadsheet-based self-assessment tool which walks you through how AI impacts privacy and other considerations, helps you assess the risk in your business, and suggests some strategies. For example:

The toolkit covers 13 key areas including governance issues, contractual and third-party risk, risk of discrimination, maintenance of AI system and infrastructure security and integrity, assessing the need for human review, and other considerations.

To conduct the assessment, users of the toolkit are generally instructed to:

The toolkit is not intended to be used as a finite checklist or tick-box exercise, but rather as a framework of analysis for your organisation to consider and capture the key risks and mitigation strategies associated with developing and/or using AI (depending on whether you are a developer, a deployer, or both). This approach recognises that the diversity of AI applications, their ability to learn and evolve, and the range of public and commercial settings in which they are deployed require a more nuanced and dynamic approach to compliance than past technologies. There are no "set and forget" approaches to making sure your AI behaves and continues to meet community expectations, which will be the ultimate test of accountability for organisations if something goes wrong.

Perhaps the most helpful part of the toolkit is a section on "trade-offs": places where organisations will need to weigh up often-competing values, such as data minimisation versus statistical accuracy, in making AI design, development and deployment decisions. This brings a refreshingly honest and realistic acknowledgement of the challenges in developing and using AI responsibly, one typically lacking in high-level AI principles.

What about nearer to home?

Another useful "how to" guide comes from the ever-practical Singaporeans. In early 2020, Singapore's Personal Data Protection Commission (PDPC) released the second edition of its Model AI Governance Framework and, with it, the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum; another example of a practical method of encouraging responsible AI adoption.

In Australia, we are yet to see these practical tools released. However, a small start has been made with government and industry's piloting of Australia's AI ethics principles.

Read the original here:
Here's a great toolkit for Artificial Intelligence (AI) governance within your organisation - Lexology


How will artificial intelligence change the way we work? There's good and bad news – The Spinoff

Job losses caused by automation may grab the bulk of the headlines, but more of us may be affected by changes to recruitment and worker surveillance, writes Colin Gavaghan, director of the Centre for Law and Policy in Emerging Technologies.

Until recently, a question such as that in the headline has led immediately to discussions about how many jobs will be lost to the technological revolution of artificial intelligence. Over the past few years, though, more of us have started looking at other aspects of this question. Such as: for those of us still in work, how will things change? What will it be like to work alongside, or under, AI and robots? Or to have decisions about whether we're hired, fired or promoted made by algorithms?

Those are some of the questions our multi-disciplinary team at Otago University, funded by the New Zealand Law Foundation, have been trying to answer. Last week, we set out our findings in a new report.

There's a danger of getting a bit too Black Mirror about these sorts of things, of always seeing the most dystopian possibilities in any new technology. That's a trap we've tried hard to avoid, because there really are potential benefits in the use of this sort of technology. For one thing, it's possible that AI and robots could make some workplaces safer. ACC recently invested in Kiwi robotics company Robotics Plus, for example, whose products are intended to reduce the risk of accidents at ports, forestry sites and sawmills.

Of course, workplace automation can also increase danger. We've already seen examples of workplace robots causing fatalities. One of our suggestions is that New Zealand's work safety rules need to catch up with the sorts of robots we're likely to be working alongside in the future: fencing them off from human workers and providing an emergency off-switch isn't going to be the answer for cobots that are designed to work with and around us.

Physical injuries from robots may present the most visceral image of the risks of workplace automation. Luckily, they're likely to be fairly rare. Far more people, we think, will be affected by algorithmic management: the growing range of techniques used to allocate shifts, direct workers and monitor performance.

As with workplace robots, there's potential here for the technology to improve things for workers. One report talked about how it could benefit workers by giving clearer advance notice of when shifts will be, and by making it easier to swap and change them. There's no guarantee, though, that algorithmic management tools will be used to benefit workers. Our earlier warning aside, it's hard not to feel just a bit Black Mirror when seeing images of Amazon warehouses where workers are micro-managed to an extent beyond the wildest dreams of Ford or Taylor.

An Amazon fulfillment centre in Illinois, USA (Photo: Scott Olson)

A particular concern that's grown during the Covid crisis is the apparently increasing prevalence of workplace surveillance. While such surveillance is by no means a new phenomenon, AI technologies could offer employers the opportunity to monitor their workers more closely and ubiquitously than ever before.

Of course, not all employers will treat their workers like drones. But workplace protection rules don't exist for the good employers. If we want to avoid the soul-crushing erosion of privacy, autonomy and dignity that could accompany the worst abuses of this technology, we think those rules will need to be tightened in various ways.

Concerns about AI in the workplace don't start with algorithmic management, though. A lot of them start before the employment relationship even begins. Increasingly, AI technology is being used in recruitment: from targeted job adverts, to shortlisting of applicants, even to the interview stage, where companies like Hirevue provide algorithms to analyse recorded interviews with candidates.

The use of algorithms in hiring poses a serious risk of reinforcing bias, or of rendering existing bias less visible. Most obviously, there's a risk that algorithms will base their profiles of a good fit for a particular role on the history of people who've occupied that role before. If those people happen to have been overwhelmingly white, male and middle class, well, it's not hard to guess how that will probably go. Also, affective recognition software that's been trained on mostly white, neurotypical people could make unfair adverse judgments about people who don't fit into those categories, even if they score highly in the sorts of attributes that really matter. (Hirevue recently stopped using visual analysis for their assessment models, but since these sorts of platforms will obviously have to rely on inferences from something, maybe voice inflection or word choices, questions about cultural, class or neurodiversity awareness remain.)

But doesn't New Zealand already have laws protecting us against workplace hazards, privacy violations and discrimination? It does indeed. Like almost every other new technology, workplace AI isn't emerging into a legal vacuum. Unfortunately, some of those laws were designed for a different time, which can lead to what tech lawyers call regulatory disconnection: when there's a major change to a technology's form or use. For instance, the current rules around workplace robots seem to assume that they can be fenced off from human workers, whereas the cobots that are now coming into use will be working in close proximity to humans.

In other cases, the law seems fine, but the problem is spotting when the technology violates it. Our Human Rights Act prevents discrimination on a whole bunch of grounds, including sex, race and disability, but that won't be much help to someone who has no way of knowing why the algorithm has declined them. It may even be that employers themselves won't know who has been screened out at an early stage, or on what grounds.

As we argue, though, it doesn't have to be that way. Just as workplace robots could reduce injuries and fatalities, so could algorithmic auditing software help to detect and reduce bias in recruitment, promotion and the like. It's not as though humans are perfect at this! Maybe AI could make things better. What we can't do, though, is complacently assume that it will do so.

In April, the EU Commission published a draft law for Europe which would require scrutiny and transparency for certain uses of AI technology. That would include a range of functions related to employment, such as recruitment, promotion and termination; task allocation; and monitoring and evaluating performance. Last year, New York City Council introduced a bill that would require algorithmic hiring tools to be audited for bias, and their use to be disclosed to candidates.

Our report calls for New Zealand to take the same kinds of steps. For instance, we propose that consideration should be given to following New York's example and requiring manufacturers of hiring tools to ensure those tools include functionality for bias auditing, so that client companies can readily perform the relevant audits.
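Neither the New York bill nor the report prescribes a particular audit method, but one common starting point for such functionality is the "four-fifths rule" screen for adverse impact: the selection rate for any group should be at least 80% of the rate for the most-favoured group. A minimal sketch, with hypothetical group names and numbers:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    best-performing group's rate (the classic adverse-impact screen).
    Returns True for groups that pass, False for groups that are flagged."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical screening results from an algorithmic hiring tool
outcomes = {
    "group_a": (45, 100),  # 45% selected
    "group_b": (30, 100),  # 30% selected; 30/45 is below 0.8, so flagged
}
print(four_fifths_check(outcomes))  # {'group_a': True, 'group_b': False}
```

A failed check is a prompt for investigation rather than proof of discrimination, but building even this simple screen into a hiring tool would let client companies run the kind of routine audit the article envisages.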

Algorithmic impact assessments, looking at matters like privacy and worker safety and wellbeing, should be conducted before any algorithm is used in high-stakes contexts. And we've suggested that there should be important roles for the Office of the Privacy Commissioner and Worksafe NZ in overseeing surveillance technologies.

We think these steps would go some way to ensuring that New Zealand businesses and workers (actual and prospective) could enjoy the benefits of these technologies, while being protected from the worst of the risks.

Our report isn't a prediction about what the future workplace will look like when AI and robots are a regular part of it. How things turn out depends substantially on the sorts of choices we make about how to use these technologies. And we're not proposing that we need fear the future or rage against the machines. But we do think we should be keeping a close, watchful eye on them. Because you can bet they'll be keeping an eye on us.


Link:
How will artificial intelligence change the way we work? Theres good and bad news - The Spinoff


Artificial intelligence and privacy – The Nation

As the fourth industrial revolution (Industry 4.0) began, robotics and artificial intelligence started gaining popularity. Artificial intelligence is undoubtedly an advanced concept that is going to bring ease to the lives of people.

Because of artificial intelligence, more efficient utilisation of resources and a decline in production costs will be observed. Amazon, a world-famous online selling platform, leverages this technology, which enables its sellers to know their customers and their preferences.

On the other hand, artificial intelligence can be used to manipulate the private data and information of users. In terms of privacy, world-famous scandals can be cited. Firstly, the Cambridge Analytica and Facebook data scandal proved to be a breach of privacy rules: Facebook users' information was sold and misused for political purposes in the USA in 2016. Secondly, Julian Assange was accused of cybercrime for his involvement in the leakage of a company's inside information, which he obtained with the help of artificial intelligence. It is an alarming situation for every person who is using nodes and gadgets. From family information to our location, everything is available in the cloud (storage). Artificial intelligence has a much brighter side, but its impacts on our lives can't be denied. Great care must be taken while inputting data on social media platforms.

SAGAR,

Shahdadkot.

More here:
Artificial intelligence and privacy - The Nation


Infowars Article

Institutions may have been selling ahead of today's crypto rout, as JPM noted earlier

but as the following chart of bitcoin's historical drawdowns clearly shows

they will soon turn from sellers to buyers, as bitcoin's historical trading pattern shows that every single previous dump has been followed by a sizable rebound. And since nothing has changed, and banks continue to debase currency with Weimarian abandon, the bullish thesis for digital gold remains the same.

Still, to many retail investors, most of whom are new to the crypto space following the bitcoin winter hibernation, today's action was nothing short of shocking, as none other than Google confirms, with the world's largest search engine reporting that searches for "Cryptocurrency" just hit an all-time high.

Some other notable searches:

And in what may potentially be actionable information, Google also reports that "most energy efficient cryptocurrency" and "environmentally friendly cryptocurrency" are breakout searches.

Yet while for most retail investors, who may have invested a sizable portion of their savings in the volatile asset class, today was a jarring reminder that what goes up will eventually come down, for the ultra rich today was just an opportunity to buy the dip, as the laser-eyed Tom Brady, who recently joined the bitcoin fan club, tweeted earlier today.

Read more here:
Infowars Article
