Security Think Tank: Artificial intelligence will be no silver bullet for security – ComputerWeekly.com

By Ivana Bartoletti

Published: 03 Jul 2020

Undoubtedly, artificial intelligence (AI) can support organisations in tackling an expanding threat landscape and a widening set of vulnerabilities as criminals become more sophisticated. However, AI is no silver bullet when it comes to protecting assets, and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include training a system to identify even the smallest behaviours of ransomware and malware attacks before they enter the system, and then isolating them from it.

Other examples include automated phishing and data theft detection, which are extremely helpful as they involve a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility to immediately spot a change in user behaviour which could signal an attack.

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance could present another problem: as AI improves at safeguarding assets, so too does it improve at attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to get an edge over these defences.

Typical attacks can involve the gathering of information about a system or sabotaging an AI system by flooding it with requests.

Elsewhere, so-called deepfakes are proving a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes that make it almost impossible to distinguish real news from false.

The consequences are such that many legislators and regulators are contemplating the establishment of rules and laws to govern this phenomenon. For organisations, this means that deepfakes could lead to much more complex phishing in future, targeting employees by mimicking corporate writing styles or even individual writing styles.

In a nutshell, AI can augment cyber security so long as organisations know its limitations and have a clear strategy focusing on the present while constantly looking at the evolving threat landscape.

Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.

More here:
Security Think Tank: Artificial intelligence will be no silver bullet for security - ComputerWeekly.com

How Artificial Intelligence will help Volkswagen boost production by 30 per cent – Hindustan Times

Volkswagen is looking to boost its production by as much as 30 per cent in the next five years by using Artificial Intelligence at its facilities. The Industrial Computer Vision AI technology will help the carmaker in image recognition processes and speed up production time by reducing manual interventions.

The process extracts information from optical data, such as the real environment at the plant, which it then evaluates using artificial intelligence (AI). The procedure is similar to the human capability of recognising, processing and analysing images. Volkswagen has been working with this technology for several years and is now intensifying its efforts.

The first application, which is to be rolled out via the new Volkswagen Industrial Cloud throughout the Group next year, is currently being tested by Porsche in Leipzig. The application functions as follows: several labels are attached to each vehicle produced, for example with vehicle information or notes on airbags. Many of these labels contain country-specific information and are written in the customer's language. The proper application of these labels is ensured by Computer Vision.

At the Porsche plant in Leipzig, an employee on the production line now scans the vehicle identification number to ensure clear identification of the vehicle. Photos are taken of each label attached to the car. The app checks the images in real time to ensure that the labels have the correct content and are written in the appropriate language, and provides the production line employee with feedback on whether everything is correct. This saves several minutes per vehicle.
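
The flow described above is easy to picture in code. Below is a minimal Python sketch of such a label check; every name in it (LabelSpec, ocr_text, detect_language, expected_labels_for, the sample VIN) is a hypothetical stand-in for illustration, not Volkswagen's or Porsche's actual software.

    from dataclasses import dataclass

    @dataclass
    class LabelSpec:
        label_id: str
        text: str        # the text this label must carry
        language: str    # the language it must be written in

    # Placeholder stand-ins for the real OCR, language-ID and VIN-lookup parts.
    def ocr_text(photo):
        return photo["text"]                            # pretend OCR already ran

    def detect_language(text):
        return "de" if "Hinweis" in text else "en"      # toy heuristic

    def expected_labels_for(vin):
        return [LabelSpec("airbag_note", "Airbag-Hinweis", "de")]

    def verify_labels(vin, photos):
        """Check each photographed label against what this VIN should carry."""
        results = []
        for photo, spec in zip(photos, expected_labels_for(vin)):
            text = ocr_text(photo)
            ok = text == spec.text and detect_language(text) == spec.language
            results.append((spec.label_id, ok))
        return results

    print(verify_labels("WVWZZZ1JZXW000001", [{"text": "Airbag-Hinweis"}]))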

Another solution currently being prepared for use throughout the Group comes from Ingolstadt, where Audi uses it for quality testing at the press shop. Cameras combined with software based on machine learning detect the finest cracks and defects in components.

Volkswagen has set up a team of about 60 Computer Vision experts for the further development of the technology and the evaluation of new utilisation possibilities. In addition to the use of the technology in production, Volkswagen plans applications along the entire value stream, for example in sales and after-sales. For development work on the optical procedure, Volkswagen is recruiting experts for this area in Berlin, Dresden, Munich and Wolfsburg. In addition, the Group continues to build up its skills in the fields of camera technology, machine learning and the operation of Computer Vision solutions.

Read more:
How Artificial Intelligence will help Volkswagen boost production by 30 per cent - Hindustan Times

Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says – Nextgov

Vendors of artificial intelligence technology should not be shielded by intellectual property claims and will have to disclose elements of their designs and be able to explain how their offering works in order to establish accountability, according to a leading official from the Cybersecurity and Infrastructure Security Agency.

"I don't know how you can have a black-box algorithm that's proprietary and then be able to deploy it and be able to go off and explain what's going on," said Martin Stanley, a senior technical advisor who leads the development of CISA's artificial intelligence strategy. "I think those things are going to have to be made available through some kind of scrutiny and certification around them so that those integrating them into other systems are going to be able to account for what's happening."

Stanley was among the speakers on a recent Nextgov and Defense One panel where government officials, including a member of the National Security Commission on Artificial Intelligence, shared some of the ways they are trying to balance reaping the benefits of artificial intelligence with risks the technology poses.

Experts often discuss the rewards of programming machines to do tasks humans would otherwise have to labor on, for both offensive and defensive cybersecurity maneuvers, but the algorithms behind such systems and the data used to train them into taking such actions are also vulnerable to attack. And the question of accountability applies to users and developers of the technology.

Artificial intelligence systems are code that humans write, but they exercise their abilities and become stronger and more efficient using data that is fed to them. If the data is manipulated, or "poisoned," the outcomes can be disastrous.

Changes to the data could be things that humans wouldn't necessarily recognize, but that computers do.

"We've seen ... trivial alterations that can throw off some of those results, just by changing a few pixels in an image in a way that a person might not even be able to tell," said Josephine Wolff, a Tufts University cybersecurity professor who was also on the panel.
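
Wolff's few-pixels point can be illustrated with a toy linear classifier. The Python/NumPy sketch below is an assumption-laden simplification (real attacks such as FGSM use a trained network's gradient rather than fixed weights), but it shows why a handful of well-chosen pixels moves a model's score far more than random noise of the same size:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear "classifier": label = sign(w . x). Gradient-based attacks
    # do the same thing using a real network's gradient instead of w.
    w = rng.normal(size=784)              # weights for a 28x28 image
    x = rng.uniform(0, 1, size=784)       # a flattened input image

    k = 8                                 # perturb just 8 of 784 pixels
    idx = np.argsort(-np.abs(w))[:k]      # the most influential pixels
    delta = np.zeros_like(x)
    delta[idx] = -0.25 * np.sign(w[idx]) * np.sign(w @ x)   # push score toward 0

    rand = np.zeros_like(x)               # same budget, random pixels and signs
    rand[rng.choice(784, size=k, replace=False)] = 0.25 * rng.choice([-1.0, 1.0], size=k)

    print("score shift, targeted:", abs(w @ (x + delta) - w @ x))
    print("score shift, random:  ", abs(w @ (x + rand) - w @ x))
    # The targeted nudge moves the score roughly an order of magnitude further,
    # which is the mechanism behind tiny pixel changes fooling a classifier.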

And while it's true that behind every AI algorithm is a human coder, the designs are becoming so complex that "you're looking at automated decision-making where the people who have designed the system are not actually fully in control of what the decisions will be," Wolff says.

This makes for a threat vector where vulnerabilities are harder to detect until it's too late.

"With AI, there's much more potential for vulnerabilities to stay covert than with other threat vectors," Wolff said. "As models become increasingly complex, it can take longer to realize that something is wrong before there's a dramatic outcome."

For this reason, Stanley said, an overarching factor CISA uses to help determine which use cases AI gets applied to within the agency is to assess the extent to which they offer "high benefits and low regrets."

"We pick ones that are understandable and have low complexity," he said.

Among the other things federal personnel need to be mindful of is who has access to the training data.

"You can imagine you get an award done, and everyone knows how hard that is from the beginning, and then the first thing that the vendor says is, 'OK, send us all your data so we can train the algorithm.' How's that going to work?" he said. "Those are the kinds of concerns that we have to be able to address."

"We're going to have to continuously demonstrate that we are using the data for the purpose that it was intended," he said, adding, "There's some basic science that speaks to how you interact with algorithms and what kind of access you can have to the training data. Those kinds of things really need to be understood by the people who are deploying them."

A crucial but very difficult element to establish is liability. Wolff said ideally, liability would be connected to a potential certification program where an entity audits artificial intelligence systems for factors like transparency and explainability.

That's important, she said, for answering the question of "how can we incentivize companies developing these algorithms to feel really heavily the weight of getting them right and be sure to do their own due diligence, knowing that there are serious penalties for failing to secure them effectively."

But this is hard, even in the world of software development more broadly.

"Making the connection is still very unresolved. We're still in the very early stages of determining what a certification process would look like, who would be in charge of issuing it, what kind of legal protection or immunity you might get if you went through it," she said. "Software developers and companies have been working for a very long time, especially in the U.S., under the assumption that they can't be held legally liable for vulnerabilities in their code, and when we start talking about liability in the machine learning and AI context, we have to recognize that that's part of what we're grappling with: an industry that for a very long time has had very strong protections from any liability."

View from the Commission

Responding to this, Katharina McFarland, a member of the National Security Commission on Artificial Intelligence, referenced the Pentagon's Cybersecurity Maturity Model Certification (CMMC) program.

The point of the CMMC is to establish liability for Defense contractors, Defense Acquisitions Chief Information Security Officer Katie Arrington has said. But McFarland highlighted difficulties facing CMMC that program officials themselves have acknowledged.

"I'm sure you've heard of the [CMMC]; there's a lot of thought going on. The question is the policing of it," she said. "When you consider the proliferation of the code that's out there, and the global nature of it, you really will have a challenge trying to take a full thread and to pull it through a knothole to try to figure out where that responsibility is. Our borders are very porous, and machines that we buy from another nation may not be built with the same biases that we have."

McFarland, a former head of Defense acquisitions, stressed that AI is more often than not viewed with fear and said she wanted to see more of a balance in procurement considerations for the technology.

"I found that we had a perverse incentive built into our system, and that was that we took, sometimes, I think, extraordinary measures to try to creep into the one percent area for failure," she said. "In other words, we would want to 110% test a system, and in doing so, we might miss the venue of where its applicability in a theater to protect soldiers, sailors, airmen and Marines is needed."

She highlighted upfront a need for testing and verification but said it shouldn't be done at the expense of adoption. To that end, she asked that industry help by sharing the testing tools they use.

"I would encourage industry to think about this from the standpoint of what tools we would need (because they're using them) in the department, in the federal space, in the community, to give us transparency and verification," she said, "so that we have a high confidence in the utility, in the data that we're using and the AI algorithms that we're building."

See the rest here:
Artificial Intelligence Systems Will Need to Have Certification, CISA Official Says - Nextgov

Letters to the editor – The Economist

Jul 4th 2020

Artificial intelligence is an oxymoron ("Technology quarterly", June 13th). Intelligence is an attribute of living things, and can best be defined as the use of information to further survival and reproduction. When a computer resists being switched off, or a robot worries about the future for its children, then, and only then, may intelligence flow.

I acknowledge Richard Sutton's "bitter lesson", that attempts to build human understanding into computers rarely work, although there is nothing new here. I was aware of the folly of anthropomorphism as an AI researcher in the mid-1980s. We learned to fly when we stopped emulating birds and studied lift. Meaning and knowledge don't result from symbolic representation; they relate directly to the visceral motives of survival and reproduction.

Great strides have been made in widening the applicability of algorithms, but as Mr Sutton says, this progress has been fuelled by Moore's law. What we call AI is simply pattern discovery. Brilliant, transformative and powerful, but just pattern discovery. Further progress is dependent on recognising this simple fact, and abandoning the fancy that intelligence can be disembodied from a living host.

ROB MACDONALD
Richmond, North Yorkshire

I agree that machine learning is overhyped. Indeed, your claim that such techniques are "loosely based on the structure of neurons in the brain" is true of neural networks, but these are just one type among a wide array of different machine-learning methods. In fact, machine learning in some cases is no more than a rebranding of existing processes. If by machine learning we simply mean building a model using large amounts of data, then good old ordinary least squares (line of best fit) is a form of machine learning.

TOM ARMSTRONG
Toronto

The scope of your research into green investing was too narrow to condemn all financial services for their woolly thinking ("Hotting up", June 20th). You restricted your analysis to microeconomic factors and to the ability of investors to engage with companies. This overlooked the bigger picture: investors can also shape the macro environment by structured engagement with the system itself.

For example, the data you used largely originated from the investor-led Carbon Disclosure Project (for which we hosted the first ever meeting, nearly two decades ago). In addition, investors have helped shape sustainable-finance plans in Britain, the EU and UN. Investors also sit on the industry-led Task Force on Climate-related Financial Disclosures, convened by the Financial Stability Board, which has proved effective.

It is critical that governments apply a meaningful carbon price. But if we are to move money at the pace and scale required to deal with climate risk, governments need to reconsider the entire architecture of markets. This means focusing a wide-angled climate lens on prudential regulation, listing rules, accounting standards, investor disclosure standards, valuation conventions and stewardship codes, as well as building on new interpretations of legal fiduciary duty. This work is done most effectively in partnership with market participants. Green-thinking investors can help.

STEVE WAYGOOD
Chief responsible investment officer
Aviva Investors
London

Estimating indirectly observable GDP in real time is indeed a hard job for macro-econometricians, or "wonks", as you call us ("Crisis measures", May 30th). Most of the components are either highly lagged, as your article mentioned, or altogether unobservable. But the textbook definition of GDP and its components won't be changing any time soon, as the reader is led to believe. Instead, what has always changed and will continue to change are the proxy indicators used to estimate the estimate of GDP.

MICHAEL BOERMAN
Washington, DC

Reading Lexington's account of his garden adventures (June 20th) brought back memories of my own experience with neighbours in Twinsburg, Ohio, in the late 1970s. They also objected to vegetables growing in our front yard (the only available space). We were doing it for the same reasons as Lexington: pleasure, fresh food to eat, and a learning experience for our young children. The neighbours, recently arrived into the suburban middle class, saw it as an affront. They no longer had to grow food for their table. They could buy it at the store and keep it in the deep freeze. Our garden, in their face every day, reminded them of their roots in Appalachian poverty. They called us hillbillies.

Arthur C. Clarke once wrote: "Any sufficiently advanced technology is indistinguishable from magic." Our version read: "Any sufficiently advanced lifestyle is indistinguishable from hillbillies."

PHILIP RAKITA
Philadelphia

Bartleby (May 30th) thinks the benefits of working from home will mean that employees will not want to return to the office. I am not sure that is the case for many people. My husband is lucky. He works for a company that already expected its staff to work remotely, so had the systems and habits in place. He has a spacious room to work in, with an adjustable chair, large monitor and a nice view. I do not work so he is not responsible for child care or home schooling.

Many people are working at makeshift workspaces which would make an occupational therapist cringe. Few will have a dedicated room for their home office, so their work invades their mental and physical space.

My husband has noticed that meetings are being set up both earlier and later in the day because there is an assumption that, as people are not commuting, it is fine to extend their work day. Colleagues book a half-hour meeting instead of dropping by someone's desk to ask a quick question. Any benefit of not commuting is lost. My husband still struggles to finish in time to have dinner with our children. People with especially long commutes now have more time, but even the commute was a change of scenery and offered some incidental exercise.

JENNIFER ALLEN
London

As Bartleby pointed out, the impact of pandemic working conditions won't be limited to the current generation. By exacerbating these divides, will covid-19 completely guarantee a future dominated by the baby-Zoomers?

MALCOLM BEGG
Tokyo

The transition away from the physical office engenders a lackadaisical approach to the work day for many workers. It brings to mind Ignatius Reilly's reasoning for his late start at the office from A Confederacy of Dunces:

"I avoid that bleak first hour of the working day during which my still sluggish senses and body make every chore a penance. I find that in arriving later, the work which I do perform is of a much higher quality."

ROBERT MOGIELNICKI
Arlington, Virginia

This article appeared in the Letters section of the print edition under the headline "On artificial intelligence, green investing, GDP, gardens, working from home"

Continue reading here:
Letters to the editor - The Economist

Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI), a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission's activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, "Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations."

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

Lawmakers established the NSCAI in § 1051 of the John S. McCain National Defense Authorization Act (NDAA) for fiscal 2019, which tasked the commission with "consider[ing] the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." The commission's purview includes an array of issues related to the implications and uses of artificial intelligence and machine learning for national security and defense, including U.S. competitiveness and leadership, research and development, ethics, and data standards.

The NSCAI is currently chaired by Eric Schmidt, the former executive chairman of Google's parent company, Alphabet. The commission's 15 members, appointed by a combination of Congress, the secretary of defense and the secretary of commerce, receive classified and unclassified briefings, meet in working groups and engage with industry. They report their findings and recommendations to the president and Congress, including in an annual report.

The Electronic Privacy Information Center (EPIC), a research center focused on privacy and civil liberties issues in the digital age, submitted a request to the NSCAI in September 2019, seeking access to upcoming meetings and records prepared by the commission under FACA and FOIA. In the six-month period prior to the request, the NSCAI held more than a dozen meetings and received over 100 briefings, according to EPIC. At the time it filed the lawsuit, EPIC noted that the commission's first major report was also one month overdue for release. When the commission did not comply with the requests under FOIA and FACA, EPIC brought suit under the two laws.

EPIC's complaint alleged that the NSCAI had conducted its operations opaquely in its short lifespan. Since its establishment, the commission "has operated almost entirely in secret with meetings behind closed doors[,]" and "has failed to publish or disclose any notices, agendas, minutes, or materials." If Congress had intended the NSCAI to comply with FOIA and FACA, such activity would not satisfy the statutes' requirements. Given the potential implications of federal artificial intelligence decisions for privacy, cybersecurity, human rights, and algorithmic bias, EPIC argued that "[p]ublic access to the records and meetings of the AI Commission is vital to ensure government transparency and democratic accountability." The complaint also noted the potential ramifications of commission activities for the government, private sector, and public, as well as the importance of artificial intelligence safeguards in the national security context due to limited public oversight. According to EPIC, increasing public participation would permit greater input into the development of national AI policy by those whose privacy and data security could potentially be affected.

The U.S. District Court for the District of Columbia addressed EPIC's FOIA claim in a December 2019 decision. FOIA requires agencies to disclose their records to a party upon request, barring exemptions (including for information classified to protect national security). EPIC alleged that the NSCAI failed to uphold its obligations under FOIA: to process FOIA requests in a timely fashion; to process EPIC's FOIA requests in an expedited manner, in accordance with EPIC's claims of urgency; and to make available for public inspection and copying its records, reports, transcripts, minutes, appendixes, working papers, drafts, studies, agenda, or other documents. The commission, which at the time did not have a FOIA processing mechanism in place or other pending FOIA requests, argued that it was not an agency subject to FOIA.

The court's inquiry centered on whether the NSCAI is an "agency" under FOIA. Comparing the language establishing the NSCAI with FOIA's definition of "agency", the court held that the NSCAI is subject to FOIA. In his decision, District Judge Trevor McFadden noted that "Congress could have hardly been clearer." As a result, since that time, the commission has had to produce historical documents in response to FOIA requests.

FACA, by contrast, applies forward-looking requirements specifically to federal advisory committees. These mandates include requiring committees to open meetings to the public and announce them in the Federal Register, and to make reports, transcripts and other commission materials publicly available. The measures aim to inform the public about and invite public engagement with the committees that provide expertise to the executive branch. EPIC alleged that the NSCAI violated FACA by failing to hold open meetings and provide notice of them, and by failing to make records available to the public. EPIC sought mandamus relief pursuant to the alleged FACA violations.

In its June decision, the district court ruled that FACA applies to the NSCAI. The commission had filed a motion to dismiss the FACA claims, arguing that it could not be subject to both FOIA and FACA. Since the court had previously held the NSCAI to be an agency for purposes of FOIA, the commission reasoned that it could not simultaneously be an advisory committee under FACA. District Judge McFadden disagreed. Invoking the Roman god Janus's two faces, one forward-looking and the other backward-facing, he wrote: "[L]ike Janus, the Commission does indeed have two faces, and ... Congress obligated it to comply with FACA as well as FOIA." The court could not identify a conflict between the requirements of the two statutes, despite differences in their obligations and exceptions. Rather, it noted that if such conflicts arise, it will be incumbent on the parties and the court to resolve any difficulties. The court dismissed additional claims under the Administrative Procedure Act (APA) for lack of subject matter jurisdiction, as it determined that the commission is not an agency under the APA definition.

The court's decision turned on whether the NSCAI is an "advisory committee" subject to FACA. The court determined that the statutory text of the 2019 NDAA establishing the NSCAI "fit[s] the [FACA] definition of 'advisory committee' like a glove." Furthermore, turning to the full text of the 2019 NDAA, the court noted that the law contains at least two instances in which it explicitly exempts a government body from FACA. The court read the 2019 NDAA as silent when FACA applies and explicit when FACA does not apply. Given Congress's silence on the applicability of FACA to the NSCAI in the 2019 NDAA, and again in the 2020 NDAA, the court reasoned that Congress intended the NSCAI to be subject to FACA.

In determining the NSCAI to be subject to FACA, in addition to FOIA, the court has compelled the commission to adopt a more transparent operating posture going forward. Since the December 2019 decision on FOIA, the NSCAI has produced a number of historical records in response to FOIA requests. The recent ruling on FACA grounds requires the NSCAI to hold open meetings, post notice of meetings in advance and make documents publicly available. As a result, the commission's process of compiling findings and developing recommendations for government action related to artificial intelligence and machine learning will likely become more accessible to the public.

The two court decisions come in time to have a noticeable impact on the remaining term of the temporary commission. While the NSCAI was previously due to disband later in 2020, § 1735 of the NDAA for fiscal 2020 extended the commission's lifespan by one year, to October 1, 2021. Citing federal budgetary timelines and the pace of AI development, the commission released its first set of recommendations in March 2020 and expressed its intent to publish additional recommendations on a quarterly basis thereafter. The commission is due to submit its final report to Congress by March 1, 2021. As the NSCAI prepares to enter its final year of operations and develop its closing recommendations, the public will have a clearer window into the commission's work.

View original post here:
Increasing Transparency at the National Security Commission on Artificial Intelligence - Lawfare

Getting Girls Into the Artificial Intelligence Pipeline – Medium

Closing the Imagination Gap for Girls: Why this is a critical step for creating an equitable future

The term "artificial intelligence" (AI) was coined 64 years ago at a scholarly conference. The AI field hasn't remained the theoretical province of computer scientists and mathematicians; it is now a pervasive part of everyday life. With a technology this powerful, it is critical to include the perspectives of all women, including those from underrepresented communities.

AI applications based on algorithms are found in robotics, machine learning, natural language processing, machine vision, speech recognition and more. These applications are found in homes, vehicles and myriad other aspects of daily life. Examples include facial recognition; robots helping older people live more independently at home; autonomous vehicles; smart watches; and drone safety systems.

AI applications must be able to reach conclusions and offer information. Some require the capacity to sense emotions in order to relate to people.

Today, women are making their way into AI and leading the way for more girls to enter AI careers. They're helping this burgeoning industry progress and innovate in ways that otherwise might not be possible. In essence, adding women to the teams creating components of AI fundamentally changes the suitability and functionality of a product or service by eliminating biases and better reflecting the needs of a wider group of users.

Taniya Mishra is director of Artificial Intelligence Research and lead speech scientist at Affectiva, which originated at MIT. The company's technology calibrates people's speech patterns to recognize emotions.

Mishra offers some concrete examples of machine learning algorithms.

"Algorithms are a set of rules, logic, or a set of instructions that you can give to a machine in order to get it to accomplish a goal, to make it behave like a human being," Mishra says. "It could be any goal. It could be lifting a block from one place to another. It could be understanding human emotion. All of these could be the goals for designing a machine learning algorithm."

The basic algorithm recipe tells the computer when to do x, then when to do y and then z. For this process to work right, the programmer must give the right instructions. For it to be inclusive, the programmer must think of the entire humanity of users, Mishra notes.

When it comes to diversity, AI benefits from including women and other underrepresented people. These voices must be included when writing instructions or algorithms to power machine learning or other elements of AI. The data gathered to support AI must also come from diverse groups of people, if the resulting algorithm is going to fully meet its potential.

For example, a small homogenous group designing a facial recognition program for a large heterogeneous group will miss the target if data about a variety of faces from the larger group is not represented. In other words, the algorithm is only as bias-free as the sources of data and the data sets.
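
A toy illustration of that point: if one group dominates the training data, the same learning algorithm ends up far less accurate for everyone else. The counts and the stand-in learning curve in the Python sketch below are invented purely for illustration.

    # Invented numbers: one group dominates the training set.
    training_counts = {"group_a": 9000, "group_b": 500, "group_c": 500}

    def expected_error(n_examples, floor=0.02):
        # Stand-in learning curve: error shrinks as training examples grow.
        return floor + 1.0 / (n_examples ** 0.5)

    for group, n in training_counts.items():
        print(f"{group}: ~{expected_error(n):.1%} expected error")
    # group_a: ~3.1%; group_b and group_c: ~6.5%. Same algorithm, very
    # different accuracy, purely because of who is in the data.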

To be effective, creators of AI-related applications need to be as diverse as the people using them.

Eighteen-year-old Betelhem Dessie is founder and chief executive officer of iCog-Anyone Can Code in Ethiopia. She also co-founded Solve IT, which provides technical resources to develop local solutions for community problems.

"As different AI tools were being developed, I observed a lack of contributions from people of color and women," Dessie notes. "The solution, I thought, was having early childhood tech education, but also inspiring girls who are already in the workforce to pursue these types of career paths. The most rewarding part of my work is inspiring others, particularly women and girls, to pursue careers in technology."

But gender and diversity issues remain.

A 2019 article written by Kari Paul for The Guardian states the lack of diversity in the AI field has reached "a moment of reckoning," according to findings by a New York University research center. The survey of more than 150 studies and reports, published by the AI Now Institute, found that a "diversity disaster" has contributed to flawed systems that perpetuate gender and racial biases, Paul writes.

One remedy is educating girls, including girls of color, sooner and more widely about the field, and making appropriate educational opportunities and career guidance accessible to them early on.

Mastery of complex subjects is required, so girls must continue building on their basic math and science education, and intensify their focus as early as seventh grade. High school and certainly college may be too late to capture their interest so they can acquire the needed foundation.

Girls interested in AI will need to write code and algorithms, and to source data sets. Beyond that, they will need to understand and eliminate bias in data sets, as well as in applications designed to serve humanity.

Along with a rigorous early academic foundation, girls must develop social and emotional learning skills to help fuel their careers. These skills will prove beneficial whether they are leading a team or a company or programming soft skills into a robot.

A proven method for inspiring girls is to bring female role models working in AI into your classroom. Give girls a chance to ask these experts questions about their careers and personal stories. One way to start your search for experts is to inquire at universities and businesses from your local community; network with those professionals to build your sources.

Girls' visions for the future are boosted when they're introduced to female role models who demonstrate rewarding careers in the AI field and show that girls can excel in this arena.

As women enter the profession and assume leadership roles, society is seeing the advantages of perspectives they bring to AI systems.

For example, Mishra builds new systems that enhance people's lives and give them a positive experience of interacting with technology. "AI is ingrained into every aspect of our lives now and will be even more so in the future," says Mishra. Her advice to girls is to dream big: ambition is attractive and inspires those around you.

See original here:
Getting Girls Into the Artificial Intelligence Pipeline - Medium

Artificial Intelligence Can’t Deal With Chaos, But Teaching It Physics Could Help – ScienceAlert

While artificial intelligence systems continue to make huge strides forward, they're still not particularly good at dealing with chaos or unpredictability. Now researchers think they have found a way to fix this, by teaching AI about physics.

To be more specific, teaching them about the Hamiltonian function, which gives the AI information about the entirety of a dynamic system: all the energy contained within it, both kinetic and potential.
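
For reference, the textbook form of this function (standard classical mechanics, not a formula quoted from the study itself) writes the total energy as a function of position q and momentum p, and generates the system's motion through Hamilton's equations:

    H(q, p) = T(p) + V(q)
    \dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q}

For the swinging pendulum used as an analogy below (mass m, arm length l, angle q), this becomes H(q, p) = p^2 / (2 m l^2) + m g l (1 - \cos q).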

Neural networks, designed to loosely mimic the human brain as a complex, carefully weighted type of AI, then have a 'bigger picture' view of what's happening, and that could open up possibilities for getting AI to tackle harder and harder problems.

"The Hamiltonian is really the special sauce that gives neural networks the ability to learn order and chaos," says physicist John Lindner, from North Carolina State University.

"With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems."

The researchers compare the introduction of the Hamiltonian function to a swinging pendulum: it gives the AI information about how fast the pendulum is swinging and its path of travel, rather than just showing the AI a snapshot of the pendulum at one point in time.

If neural networks understand the Hamiltonian flow (so, in this analogy, where the pendulum is, where it might be going, and the energy it has) then they are better able to manage the introduction of chaos into order, the new study found.

Not only that, but they can also be built to be more efficient: better able to forecast dynamic, unpredictable outcomes without huge numbers of extra neural nodes. It helps AI to quickly get a more complete understanding of how the world actually works.

A representation of the Hamiltonian flow, with rainbow colours coding a fourth dimension. (North Carolina State University)
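
In code, the idea looks roughly like the following Python/NumPy sketch, which follows the general "Hamiltonian neural network" recipe: a small network outputs the single scalar H, and the predicted motion is read off from H's gradient. It is only an illustrative sketch; the architecture and training details of the study's actual network will differ.

    import numpy as np

    # A tiny MLP maps the state (q, p) to one scalar, the energy H; the
    # motion is then read off from H's gradient via Hamilton's equations.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 32)) * 0.1   # input-layer weights
    b1 = np.zeros(32)
    w2 = rng.normal(size=32) * 0.1        # output-layer weights

    def hamiltonian(state):               # state = np.array([q, p])
        return np.tanh(state @ W1 + b1) @ w2

    def grad_H(state):                    # analytic dH/d(q, p) for this MLP
        h = np.tanh(state @ W1 + b1)
        return W1 @ (w2 * (1.0 - h ** 2))

    def predicted_flow(state):
        dHdq, dHdp = grad_H(state)
        return np.array([dHdp, -dHdq])    # q_dot = dH/dp, p_dot = -dH/dq

    # Training (omitted) tunes W1, b1, w2 so predicted_flow matches observed
    # (q_dot, p_dot) data; conservation of H is then built into the model.
    print(predicted_flow(np.array([0.3, -0.2])))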

To test their newly improved AI neural network, the researchers put it up against a commonly used benchmark called the Hénon-Heiles model, initially created to model the motion of a star around a galactic centre.

The Hamiltonian neural network successfully passed the test, correctly predicting the dynamics of the system in states of order and of chaos.

This improved AI could be used in all kinds of areas, from diagnosing medical conditions to piloting autonomous drones.

We've already seen AI simulate space, diagnose medical problems, upgrade movies and develop new drugs, and the technology is, relatively speaking, just getting started; there's lots more on the way. These new findings should help with that.

"If chaos is a nonlinear 'super power', enabling deterministic dynamics to be practically unpredictable, then the Hamiltonian is a neural network 'secret sauce', a special ingredient that enables learning and forecasting order and chaos," write the researchers in their published paper.

The research has been published in Physical Review E.

See the rest here:
Artificial Intelligence Can't Deal With Chaos, But Teaching It Physics Could Help - ScienceAlert

Artificial intelligence is on the rise – Independent Australia

New developments and opportunities are opening up in artificial intelligence, says Paul Budde.

I RECENTLY followed a "lunch box lecture" organised by the University of Sydney. In the talk, Professor Zdenka Kuncic explored the very topical issue of artificial intelligence.

The world is infatuated with artificial intelligence (AI), and understandably so, given its super-human ability to find patterns in big data, as we all notice when using Google, Facebook, Amazon, eBay and so on. But the so-called general intelligence that humans possess remains elusive for AI.

Interestingly, Professor Kuncic approached this topic from a physics perspective: viewing the brain's neural network as a physical hardware system, rather than as the algorithm-based software used, for example, in AI research on social media.

Her approach reveals clues that suggest the underlying nature of intelligence is physical.

Basically, what this means is that a software-based system will require ongoing input from software specialists to make updates based on new developments.Her approach, however, is to look at a physical system based on nanotechnology and use these networks as self-learning systems, where human intervention is no longer required.

Imagine the implications of the communications technologies that are on the horizon, where basically billions of sensors and devices will be connected to networks.

The data from these devices need to be processed in real time, and dynamic decisions will have to be made without human intervention. The driverless car is, of course, a classic example of such an application.

The technology needed to make such a system work will have to be based on edge technology, in the device out there in the field. It is not going to work in any scaled-up situation if the data from these devices first has to be sent to the cloud for processing.

Nano networks are a possible solution for such situations. A nanonetwork or nanoscale network is a set of interconnected nanomachines (devices a few hundred nanometres, or at most a few micrometres, in size), which at the moment can perform only very simple tasks such as computing, data storing, sensing and actuation.

However, Professor Kuncic expects that new developments will see expanded capabilities of single nanomachines, both in terms of complexity and range of operation, by allowing them to coordinate, share and fuse information.

Professor Kuncic concentrates, in her work, on electromagnetics for communication at the nanoscale.

This is commonly defined as the 'transmission and reception of electromagnetic radiation from components based on novel nanomaterials'.

Professor Kuncic mentioned this technology was still in its infancy. She was very upbeat about the future, based on the results of recent research and international collaboration. Advancements in carbon and molecular electronics have opened the door to a new generation of electronic nanoscale components such as nanobatteries, nanoscale energy-harvesting systems, nano-memories, logical circuitry at the nanoscale and even nano-antennas.

From a communication perspective, the unique properties observed in nanomaterials will decide the specific bandwidths for the emission of electromagnetic radiation, the time lag of the emission, and the magnitude of the emitted power for a given input energy.

The researchers are looking at the output of these nanonetworks rather than the input. The process is analogue rather than digital. In other words, the potential output provides a range of possible choices, rather than one (digital) outcome.

The trick is to understand what choices are made in a nanonetwork and why.

There are two main alternatives for electromagnetic communication at the nanoscale: the one pursued by Professor Kuncic, and another based on molecular communication.

Nanotechnology could have an enormous impact on, for example, the future of 5G. If nanotechnology can be included in the various Internet of Things (IoT) sensors and devices, then this will open up an enormous number of new applications.

First, it has been experimentally demonstrated that it is possible to receive and demodulate an electromagnetic wave by means of a nano radio.

Second, graphene-based nano-antennas have been analysed as potential electromagnetic radiators in the terahertz band.

Once these technologies are further developed and commercialised, we could see a revolution in edge computing.

Paul Budde is an Independent Australia columnist and managing director of Paul Budde Consulting, an independent telecommunications research and consultancy organisation. You can follow Paul on Twitter @PaulBudde.

See the original post here:
Artificial intelligence is on the rise - Independent Australia

Powerful optical sorting technologies, artificial intelligence and robotics reduce contamination for recycling operations – Recycling Product News

"With the combination of technologies in the new AUTOSORT, we are able to see a different kind of chemical (makeup) of the same material," explains Atienza. For example, for different PET bottles and trays, "It's all PET for NIR, but combined with the other technologies, we are able to see the difference.

"We are able to identify and sort, for example, different kinds of PET, such as trays, bottles, etc., which is very important for recyclers, because the quality of the end product can change when you have a lot of trays on the PET line. We definitely see more and more PET trays on the market," which he points to as one of the drivers for the development of this technology. Key to taking material detection to the next level is DEEP LAISER, which is an optional add-on to the system.

According to TOMRA, the DEEP LAISER is one of the first fully integrated systems of its kind, and stands out for its compactness and flexible range of uses, with object recognition that enables a deeper sorting sharpness. This includes the use of Artificial Intelligence (AI) via Deep Learning, which TOMRA first introduced in 2019.

In basic terms, FLYING BEAM's NIR/VIS (Visual Spectrum) technology makes an overall determination of what kind of plastic and colour is on the belt, and SHARP EYE looks more closely at the chemical composition, to give a deeper sort, differentiating between different types of PET, for example. When you add on the DEEP LAISER component, it detects material that other optical sorters cannot detect.

"It is a laser combined with an artificially intelligent camera, using our IOR software," explains Atienza. "With DEEP LAISER we are sorting material on the conveyor that the FLYING BEAM and SHARP EYE cannot detect, such as black plastic or glass." He says it detects and sorts these materials out as an impurity, and that due to the IOR, the system sees, and is able to learn about, a very wide range of different types of material, detecting chemical composition, shapes and colour.

"With our AUTOSORT technology we can see types of materials that many other technologies cannot see. For example, our machine can differentiate LDPE, (low density polyethylene) from linear LLDPE. It's mostly the same material, because it is all LDPE, but they have differences, and we are able to see it, even at 6m/second."

Another example is LDPE and HDPE film, he says. "With films, you have low-density and high-density polyethylene, and we can see that difference, because our sensor has very high detection sensitivity."

In addition to its vastly increased sorting capacity overall, the new AUTOSORT ValveBlock geometry system is also 25% more powerful, with a reduction in air consumption of about 15% compared to the previous version. This contributes to lower operating costs, including lower electrical consumption. Notably, this new-generation AUTOSORT also auto-calibrates every millisecond for optimal operation through changing conditions.

"This is very important," says Atienza, especially because NIR is very sensitive to temperature. "The auto calibration every single millisecond allows us to operate efficiently through a wide range of temperatures during a day," he says.

Go here to see the original:
Powerful optical sorting technologies, artificial intelligence and robotics reduce contamination for recycling operations - Recycling Product News

Artificial intelligence helping NASA design the new Artemis moon suit – SYFY WIRE

Last fall, NASA unveiled the new suits that Artemis astronauts will wear when they take humanity's first steps on the lunar surface since way back in 1972. The look of the A7LB pressure suit variants that accompanied those earlier astronauts to the Moon, and later to Skylab, has since gone on to signify for many the definitive, iconic symbol of humanity's most ambitiously-realized space dreams.

With Artemis' 2024 launch target approaching, NASA's original Moon suit could soon be supplanted in the minds of a new generation of space dreamers by the xEMU, the first ground-up suit made for exploring the lunar landscape since Apollo 17's Eugene Cernan and Harrison Schmitt took humanity's last Moon walk (to date). Unlike those suits, the xEMU's design is getting an assist from a source of "brain" power that simply wasn't available back then: artificial intelligence.

Specifically, AI is reportedly crunching numbers behind the scenes to help engineer support components for the new, more versatile life support system that'll be fitted to the xEMU (Extravehicular Mobility Unit) suit. WIRED reports that NASA is using AI to assist the new suit's life support system in carrying out its more vital functions while streamlining its weight, component size, and tolerances for load-bearing pressure, temperature, and the other physical demands that a trip to the Moon (and back) imposes.

Recruiting AI isn't just about speed, though speed is definitely one of the perks for meeting NASA's ambitious 2024 timeline and all that lies beyond. "The machine's iterative process is 100 or 1,000 times more than we could do on our own, and it comes up with a solution that is ideally optimized within our constraints," Jesse Craft, a senior design engineer at a Texas-based contractor working on the upgraded version of the xEMU suit, told WIRED.

But in some instances, AI even raises the bar for quality, as Craft also noted. "We're using AI to inspire design," he explained. "We have biases for right angles, flat surfaces, and round dimensions, things you'd expect from human design. But AI challenges your biases and allows you to see new solutions you didn't see before."

So far, NASA is relying on AI only to design physical brackets and supports for the life support system itself; in other words, not the kind of stuff that might spell life or death in the event of failure. But that approach is already paying off by cutting mass without sacrificing strength, yielding component weight reductions of up to 50 percent, according to the report.
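
The iterative search Craft describes can be caricatured as a propose-evaluate-keep loop. In the Python sketch below, the "design" is just a vector of member thicknesses and the strength check is a made-up stand-in for a structural simulation; NASA's actual generative-design toolchain is of course far more sophisticated.

    import random

    REQUIRED_STRENGTH = 100.0

    def strength(design):                 # stand-in for a structural simulation
        return sum(t ** 0.5 * 12 for t in design)

    def mass(design):
        return sum(design)

    def mutate(design):                   # propose a slightly different bracket
        return [max(0.1, t + random.gauss(0, 0.05)) for t in design]

    random.seed(1)
    best = [1.0] * 10                     # a conservative starting design
    for _ in range(5000):                 # far more iterations than a human would try
        cand = mutate(best)
        if strength(cand) >= REQUIRED_STRENGTH and mass(cand) < mass(best):
            best = cand                   # lighter and still strong enough: keep it

    print(f"mass {mass(best):.2f}, strength {strength(best):.1f}")
    # The loop steadily sheds mass while respecting the strength constraint,
    # a toy version of "optimized within our constraints".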

Even at 1/6 the gravity that astronauts experience back on Earth, that kind of small weight savings here and there can add up to make a big difference on the Moon. And even a slight slimming down can't hurt the xEMU's chances at perhaps becoming a new standard-bearer in space fashion, as Artemis captivates a new generation with its sights set on the stars.

Continued here:
Artificial intelligence helping NASA design the new Artemis moon suit - SYFY WIRE
