
The future of artificial intelligence and quantum computing – Military & Aerospace Electronics

NASHUA, N.H. - Until the 21st Century, artificial intelligence (AI) and quantum computers were largely the stuff of science fiction, although quantum theory and quantum mechanics had been around for about a century. It was a century of great controversy, largely because Albert Einstein rejected quantum theory as originally formulated, leading to his famous statement, "God does not play dice with the universe."

Today, however, the debate over quantum computing is largely about when, not if, these kinds of devices will come into full operation. Meanwhile, other forms of quantum technology, such as sensors, already are finding their way into military and civilian applications.

"Quantum technology will be as transformational in the 21st Century as harnessing electricity was in the 19th," Michael J. Biercuk, founder and CEO of Q-CTRL Pty Ltd in Sydney, Australia, and professor of quantum physics and quantum technologies at the University of Sydney, told the U.S. Office of Naval Research in a January 2019 presentation.

On that, there is virtually universal agreement. But when and how remain undetermined.

For example, asked how and when quantum computing eventually may be applied to high-performance embedded computing (HPEC), Tatjana Curcic, program manager for Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) at the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., says it's an open question.

"Until just recently, quantum computing stood on its own, but as of a few years ago people have been looking more and more into hybrid approaches," Curcic says. "I'm not aware of much work on actually getting quantum computing into HPEC architecture, however. It's definitely not mainstream, probably because it's too early."

As to how quantum computing eventually may influence the development, scale, and use of AI, she adds:

"That's another open question. Quantum machine learning is a very active research area, but it is quite new. A lot of people are working on that, but it's not clear at this time what the results will be. The interface between classical data, which AI is primarily involved with, and quantum computing is still a technical challenge."

Quantum information processing

According to DARPA's ONISQ webpage, the program aims to exploit quantum information processing before fully fault-tolerant quantum computers are realized.

This quantum computer, based on superconducting qubits, is inserted into a dilution refrigerator and cooled to a temperature of less than 1 Kelvin. It was built at IBM Research in Zurich.

"This effort will pursue a hybrid concept that combines intermediate-sized quantum devices with classical systems to solve a particularly challenging set of problems known as combinatorial optimization. ONISQ seeks to demonstrate the quantitative advantage of quantum information processing by leapfrogging the performance of classical-only systems in solving optimization challenges," the agency states. ONISQ researchers will be tasked with developing quantum systems that are scalable to hundreds or thousands of qubits, with longer coherence times and improved noise control.

Researchers will also be required to efficiently implement a quantum optimization algorithm on noisy intermediate-scale quantum devices, optimizing allocation of quantum and classical resources. Benchmarking will also be part of the program, with researchers making a quantitative comparison of classical and quantum approaches. In addition, the program will identify classes of problems in combinatorial optimization where quantum information processing is likely to have the biggest impact. It will also seek to develop methods for extending quantum advantage on limited size processors to large combinatorial optimization problems via techniques such as problem decomposition.
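The hybrid pattern ONISQ describes pairs a classical optimizer with a quantum sampler in a feedback loop. The sketch below illustrates that loop on a toy MaxCut instance; it is only a schematic, with the quantum sampler replaced by a classical stand-in, and all names, parameters, and the tiny graph are illustrative rather than drawn from the program.

```python
# Minimal sketch of the hybrid quantum/classical loop used in variational
# optimization (QAOA-style): a classical optimizer tunes parameters, a
# sampler (a real quantum device in practice, simulated classically here)
# draws candidate bitstrings, and the best MaxCut value found is kept.
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
N_QUBITS = 4

def cut_value(bits):
    """Number of graph edges cut by a 0/1 assignment (the MaxCut objective)."""
    return sum(bits[i] != bits[j] for i, j in EDGES)

def sample_bitstrings(params, shots=64):
    """Stand-in for the quantum sampler: each parameter biases one 'qubit'
    toward 1. On real hardware this would be a parameterized circuit run."""
    return [[1 if random.random() < p else 0 for p in params]
            for _ in range(shots)]

def expected_cut(params):
    samples = sample_bitstrings(params)
    return sum(cut_value(b) for b in samples) / len(samples)

# Classical outer loop: crude random-perturbation hill climbing over parameters.
best_params = [0.5] * N_QUBITS
best_score = expected_cut(best_params)
for _ in range(200):
    trial = [min(1.0, max(0.0, p + random.uniform(-0.2, 0.2)))
             for p in best_params]
    score = expected_cut(trial)
    if score > best_score:
        best_params, best_score = trial, score

print("best expected cut:", round(best_score, 2), "params:", best_params)
```

In an ONISQ-style system, the sampling step would run on noisy intermediate-scale quantum hardware while the outer optimization loop, benchmarking, and problem decomposition would remain classical.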

The U.S. government has been the leader in quantum computing research since the founding of the field, but that too is beginning to change.

"In the mid-90s, NSA [the U.S. National Security Agency at Fort Meade, Md.] decided to begin an open academic effort to see if such a thing could be developed. All that research has been conducted by universities for the most part, with a few outliers, such as IBM," says Q-CTRL's Biercuk. "In the past five years, there has been a shift toward industry-led development, often in cooperation with academic efforts. Microsoft has partnered with universities all over the world, and Google bought a university program. Today, many of the biggest hardware developments are coming from the commercial sector."

"Quantum computing remains deep in the research stage, but there are hardware demonstrations all over the world. In the next five years, we expect the performance of these machines to advance to the point where we believe they will demonstrate a quantum advantage for the first time. For now, however, quantum computing has no advantages over standard computing technology. Quantum computers are research demonstrators and do not solve any computing problems at all. Right now, there is no reason to use quantum computers except to be ready when they are truly available."

AI and quantum computing

Nonetheless, the race to develop and deploy AI and quantum computing is global, with the world's leading military powers betting that these technologies, along with other breakthroughs such as hypersonics, could make the first nation to deploy them successfully as dominant as the U.S. was following the first detonations of atomic bombs. That is especially true for autonomous mobile platforms, such as unmanned aerial vehicles (UAVs), interfacing with those vehicles' onboard HPEC.

Of the two, AI is the closer to deployment, but also the more controversial. A growing number of the world's leading scientists, including the late Stephen Hawking, have warned that real-world AI could easily duplicate the actions of the fictional Skynet in the Terminator movie series. Launched with total control over the U.S. nuclear arsenal, Skynet became sentient and decided the human race was a dangerous infestation that needed to be destroyed.

"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded." - Stephen Hawking (2014)

Such dangers have been recognized at least as far back as the publication of Isaac Asimov's short story "Runaround" in 1942, which introduced his Three Laws of Robotics, designed to control otherwise autonomous robots. In the story, the laws are cited from a fictional robotics handbook dated 2058:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Whether it would be possible to embed such laws in an AI system and ensure unbreakable compliance with them is unknown. But limited forms of AI, known as machine learning, already are in widespread use by the military, and more advanced stages of the technology, such as deep learning, almost certainly will be deployed by one or more nations as they become available. More than 50 nations already are actively researching battlefield robots.

Military quantum computing

AI-HPEC would give UAVs, next-generation cruise missiles, and even maneuverable ballistic missiles the ability to alter course to new targets at any point after launch, recognize countermeasures, and avoid, misdirect, or even destroy them.

Quantum computing, on the other hand, is seen by some as providing little, if any, advantage over traditional computer technologies; by many as requiring cooling and size, weight, and power (SWaP) improvements not possible with current technologies before it can be applied to mobile platforms; and by most as being little more than a research tool for perhaps decades to come.

Perhaps the biggest stumbling block to mobile platform-based quantum computing is cooling: it currently requires a cooling unit the size of a refrigerator, operating at near absolute zero, to handle even a fractional piece of quantum computing.

Military trusted computing experts are considering new generations of quantum computing for creating nearly unbreakable encryption for super-secure defense applications.

"A lot of work has been done and things are being touted as operational, but the most important thing to understand is this isn't some simple physical thing you throw in suddenly and it works. That makes it harder to call it deployable; you're not going to strap a quantum computer to a handheld device. A lot of solutions are still trying to deal with cryogenics and how you deal with deployment of cryo," says Tammy Carter, senior product manager for GPGPUs and software products at Curtiss-Wright Defense Solutions in Ashburn, Va.

"AI is now a technology in deployment. Machine learning is pretty much in use worldwide," Carter says. "We're in a migration of figuring out how to use it with the systems we have. Quantum computing will require a lot of engineering work, and demand may not be great enough to push the effort. From a cryogenically cooled electronics perspective, I don't think there is any insurmountable problem. It absolutely can be done; it's just a matter of decision making to do it, prioritization to get it done. These are not easily deployed technologies, but they certainly can be deployed."

Given its current and expected near-term limitations, research has increased on the development of hybrid systems.

"The longer-term reality is a hybrid approach, with the quantum system not going mobile any time soon," says Brian Kirby, physicist in the Army Research Laboratory Computational & Informational Sciences Directorate in Adelphi, Md. "It's a mistake to forecast a timeline, but I'm not sure putting a quantum computer on such systems would be valuable. Having the quantum computer in a fixed location and linked to the mobile platform makes more sense, for now at least. There can be multiple quantum computers throughout the country; while individually they may have trouble solving some problems, networking them would be more secure and able to solve larger problems."

"Broadly, however, quantum computing can't do anything a practical home computer can't do, but it can potentially solve certain problems more efficiently," Kirby continues. "So you're looking at potential speed-up, but there is no problem a quantum computer can solve that a normal computer can't. Beyond the basics of code-breaking and quantum simulations affecting material design, right now we can't necessarily predict military applications."

Raising concerns

In some ways similar to AI, quantum computing raises nearly as many concerns as it does expectations, especially in the area of security. The latest Thales Data Threat Report says 72 percent of surveyed security experts worldwide believe quantum computing will have a negative impact on data security within the next five years.

At the same time, quantum computing is forecast to offer more robust cryptography and security solutions. For HPEC, that duality is significant: quantum computing can make it more difficult to break the security of mobile platforms, while simultaneously making it easier to do just that.

"Quantum computers that can run Shor's algorithm [leveraging quantum properties to factor very large numbers efficiently] are expected to become available in the next decade. These algorithms can be used to break conventional digital signature schemes (e.g., RSA or ECDSA), which are widely used in embedded systems today. This puts these systems at risk when they are used in safety-relevant long-term applications, such as automotive systems or critical infrastructures. To mitigate this risk, the classical digital signature schemes used must be replaced by schemes secure against quantum computing-based attacks," according to the Post-Quantum Cryptography in Embedded Systems paper in the August 2019 proceedings of the 14th International Conference on Availability, Reliability & Security.
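The threat is easiest to see with a toy example. The sketch below uses deliberately tiny, insecure numbers to show that recovering an RSA private key reduces to factoring the public modulus; Shor's algorithm makes that factoring step efficient on a sufficiently large quantum computer. The numbers and variable names here are purely illustrative.

```python
# Toy illustration (tiny, insecure numbers) of why factoring breaks RSA:
# anyone who can factor the public modulus n can recompute the private key.
# Shor's algorithm performs that factoring efficiently on a quantum computer,
# which is why post-quantum signature schemes are being developed.
p, q = 61, 53                 # secret primes (real RSA uses hundreds of digits)
n = p * q                     # public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)       # encrypt with the public key

# An attacker who factors n (trivial here, infeasible classically at scale,
# feasible with Shor's algorithm) rebuilds the private key the same way:
d_attacker = pow(e, -1, (p - 1) * (q - 1))
print(pow(cipher, d_attacker, n))  # prints 42: the plaintext is recovered
```

Post-quantum schemes avoid this exposure by relying on problems, such as structured lattices, that are not known to fall to Shor's algorithm.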

The security question is not quite so clean-cut as armor versus anti-armor, but there is a developing bifurcation between defensive and offensive applications. On the defensive side, deployed quantum systems are looked to for encoded communications. Experts say it seems likely the level of activity in China around quantum communications, which has been a major focus there for years, runs up against the development of quantum computing in the U.S. The two aspects are not clearly matched one against the other; rather, the two are moving independently.

Google's quantum supremacy demonstration has led to a rush to find algorithms robust against quantum attack. On the quantum communications side, the development of attacks on such systems has been underway for years, leading to a whole field of research based on identifying and exploiting quantum attacks.

Quantum computing could also help develop revolutionary AI systems. "Recent efforts have demonstrated a strong and unexpected link between quantum computation and artificial neural networks, potentially portending new approaches to machine learning. Such advances could lead to vastly improved pattern recognition, which in turn would permit far better machine-based target identification. For example, the hidden submarine in our vast oceans may become less hidden in a world with AI-empowered quantum computers, particularly if they are combined with vast data sets acquired through powerful quantum-enabled sensors," according to Q-CTRL's Biercuk.

"Even the relatively mundane near-term development of new quantum-enhanced clocks may impact security, beyond just making GPS devices more accurate," Biercuk continues. "Quantum-enabled clocks are so sensitive that they can discern minor gravitational anomalies from a distance. They thus could be deployed by military personnel to detect underground, hardened structures, submarines or hidden weapons systems. Given their potential for remote sensing, advanced clocks may become a key embedded technology for tomorrow's warfighter."

Warfighter capabilities

The early applications of quantum computing, while not embedded on mobile platforms, are expected to enhance warfighter capabilities significantly.

Jim Clark, director of quantum hardware at Intel Corp. in Santa Clara, Calif., shows one of the company's quantum processors.

"There is a high likelihood quantum computing will impact ISR [intelligence, surveillance and reconnaissance], solving logistics problems more quickly. But so much of this is in the basic research stage. While we know the types of problems and general application space, optimization problems will be some of the first where we will see advantages from quantum computing," says Sara Gamble, quantum information sciences program manager at ARL.

Biercuk agrees: "We're not really sure there is a role for quantum computing in embedded computing just yet. Quantum computing right now means very large systems embedded in mainframes, with access via the cloud. You can envision embedded computing accessing quantum computing via the cloud, but these are not likely to be very small, agile processors you would embed in a SWaP-constrained environment."

"But there are many aspects of quantum technology beyond quantum computing; the combination of quantum sensors could allow much better detection in the field," Biercuk continues. "The biggest potential impact comes in the area of GPS denial, which has become one of the biggest risk factors identified in every blueprint around the world. Quantum computing plays directly into this to perform dead-reckoning navigation in GPS-denied areas."

DARPA's Curcic also says the full power of quantum computing is still decades away, but believes ONISQ has the potential to help speed its development.

"The two main approaches industry is using are superconducting quantum computing and trapped ions. We use both of those, plus cold atoms [Rydberg atoms]. We are very excited about ONISQ and seeing if we can get anything useful over classical computing. Four teams are doing hardware development with those three approaches," she says.

"Because these are noisy systems, it's very difficult to determine if there will be any advantages. The hope is we can address the optimization problem faster than today, which is what we're working on with ONISQ. Optimization problems are everywhere, so even a small improvement would be valuable."

Beyond todays capabilities

As to how quantum computing and AI may impact future warfare, especially through HPEC, she adds: "I have no doubt quantum computing will be revolutionary and we'll be able to do things beyond today's capabilities. The possibilities are pretty much endless, but what they are is not crystal clear at this point. It's very difficult to predict, with great certainty, what quantum computing will be able to do. We'll just have to build and try. That's why today is such an exciting time."

Curtiss-Wright's Carter believes quantum computing and AI will be closely linked with HPEC in the future, once current limitations with both are resolved.

"AI itself is based on a lot of math being done in parallel for probability answers, similar to modeling the neurons in the brain: highly interconnected nodes and interdependent math calculations. Imagine a small device trying to recognize handwriting," Carter says. "You run every pixel of that through lots and lots of math, combining and mixing, cutting some, amplifying others, until you get a 98 percent answer at the other end. Quantum computing could help with that, and researchers are looking at how you would do that, using a different level of parallel math."
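Carter's handwriting example describes, in plain terms, a neural-network forward pass. The sketch below is a generic illustration of that "pixels through layers of parallel math" idea, not any specific deployed system; the layer sizes and random weights are arbitrary placeholders, since a real recognizer would learn its weights from training data.

```python
# Minimal sketch of the parallel math described above: pixel values are
# weighted and summed ("combined and mixed"), passed through a nonlinearity
# ("cutting some, amplifying others"), and reduced to per-class confidences.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(28 * 28)          # a flattened 28x28 "handwriting" image

W1, b1 = rng.standard_normal((128, 784)) * 0.05, np.zeros(128)
W2, b2 = rng.standard_normal((10, 128)) * 0.05, np.zeros(10)

hidden = np.maximum(0.0, W1 @ pixels + b1)      # ReLU: cut the negatives
logits = W2 @ hidden + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax: confidence per digit

print("predicted digit:", probs.argmax(), "confidence:", round(float(probs.max()), 3))
```

Every multiply-accumulate here is independent of its neighbors, which is exactly the kind of massively parallel arithmetic that HPEC hardware, and potentially quantum processors, could accelerate.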

"How quantum computing will be applied to HPEC will be the big trick: how to get that deployed. Imagine we're a SIGINT [signals intelligence] platform, whether land, air, or sea; there are a lot of challenges, such as picking the right signal out of the air, which is not particularly easy," Carter continues. "Once you achieve pattern recognition, you want to do code breaking to get that encrypted traffic immediately. Getting that on a deployed platform could be useful; otherwise you bring your data back to a quantum computer in a building, but that means you don't get the results immediately."

The technology research underway today is expected to show progress toward making quantum computing more applicable to military needs, but it is unlikely to produce major results quickly, especially in the area of HPEC.

"Trapped ions and superconducting circuits still require a lot of infrastructure to make them work. Some teams are working on that problem, but the systems still remain room-sized. The idea of quantum computing being like an integrated circuit you just put on a circuit board: we're a very long way from that," Biercuk says. "The systems are getting smaller, more compact, but there is a very long way to go to deployable, embeddable systems. Position, navigation and timing systems are being reduced and can be easily deployed on aircraft. That's probably where the technology will remain in the next 20 years; but, eventually, with new technology development, quantum computing may be reduced to more mobile sizes."

"The next 10 years are about achieving quantum advantage with the systems available now or iterations of them. Despite the acceleration we have seen, there are things that are just hard and require a lot of creativity," Biercuk continues. "We're shrinking the hardware, but that hardware still may not be relevant to any deployable system. In 20 years, we may have machines that can do the work required, but in that time we may only be able to shrink them to a size that can fit on an aircraft carrier, as local code-breaking engines. To miniaturize this technology to put it on, say, a body-carried system, we just don't have any technology basis to claim we will get there even in 20 years. That's open to creativity and discovery."

Even with all of the research underway worldwide, one question remains dominant.

"The general challenge is it is not clear what we will use quantum computing for," notes Rad Balu, a computer scientist in ARL's Computational & Informational Sciences Directorate.


BBVA Uncovers The Promise Of Quantum Computing For Banking And Financial Services – Forbes

Computers have underpinned the digital transformation of the banking and financial services sector, and quantum computing promises to elevate this transformation to a radically new level. BBVA, the digital bank for the 21st century (established in 1857 and today the second-largest bank in Spain), is at the forefront of investigating the benefits of quantum computing.

Will quantum computing move banking to a new level of digital transformation?

"We are trying to understand the potential impact of quantum computing over the next five years," says Carlos Kuchkovsky, global head of research and patents at BBVA. Last month, BBVA announced initial results from its recent exploration of quantum computing's advantage over traditional computing methods. Kuchkovsky's team looked at complex financial problems with many dimensions, or variables, that require calculations that sometimes take days to complete. In the case of investment portfolio optimization, for example, they found that the use of quantum and quantum-inspired algorithms could represent a significant speed-up compared with traditional techniques when there are more than 100 variables.

Carlos Kuchkovsky, Global Head of Research and Patents, BBVA
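To see why portfolio problems blow up around 100 variables, consider the generic classical toy below. It is not BBVA's algorithm, and every figure in it is made up: choosing which of n assets to hold is a combinatorial search whose space grows exponentially with n, which is what motivates quantum and quantum-inspired heuristics.

```python
# Generic illustration of the combinatorial portfolio problem: pick a fixed
# number of assets to maximize return minus a risk penalty. Brute force works
# only because n is tiny here; the candidate space explodes as n grows.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 12                                       # keep tiny so brute force finishes
returns = rng.normal(0.05, 0.02, n)          # expected return per asset
cov = np.diag(rng.uniform(0.01, 0.05, n))    # simplified (diagonal) risk model
risk_aversion, budget = 2.0, 5

best_score, best_pick = -np.inf, None
for pick in itertools.combinations(range(n), budget):   # C(12, 5) = 792 cases
    w = np.zeros(n)
    w[list(pick)] = 1.0 / budget
    score = returns @ w - risk_aversion * (w @ cov @ w)
    if score > best_score:
        best_score, best_pick = score, pick

print("best assets:", best_pick, "objective:", round(float(best_score), 4))
print("subsets of 100 assets:", 2 ** 100)    # why exhaustive search fails at scale
```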

"After hiring researchers with expertise in quantum computing, BBVA identified fifteen challenges that could be solved better with quantum computing, faster and with greater accuracy," says Kuchkovsky. The results released last month covered six of these challenges, serving as proofs of concept for, first and foremost, the development of quantum algorithms, and also for their application in the following five financial services tasks: static and dynamic portfolio optimization, credit scoring process optimization, currency arbitrage optimization, and derivative valuations and adjustments.

Another important dimension of BBVA's quantum computing journey is developing an external network. The above six proofs of concept were pursued in collaboration with external partners, each bringing its own set of skills and expertise to the various investigations: the Spanish National Research Council (CSIC), the startups Zapata Computing and Multiverse, the technology firm Fujitsu, and the consulting firm Accenture.

Kuchkovsky advises technology and business executives in other companies, in any industry, to follow BBVA's initial steps: surveying the current state of the technology and the major players, developing internal expertise and experience with quantum computing and consolidating the internal team, identifying specific business problems, activities, and opportunities where quantum computing could provide an advantage over today's computers, and developing an external network by connecting to and collaborating with relevant research centers and companies.

As for how to organize internally for quantum computing explorations, Kuchkovsky thinks there could be different possibilities, depending on the level of maturity of the research and technology functions of the business. In BBVA's case, the effort started in the research function, and he expects it to evolve in a year or two into a full-fledged quantum computing center of excellence.

Quantum computing is evolving rapidly and Kuchkovsky predicts that in five years, companies around the world will enjoy full access to quantum computing as a service and will benefit from the application of quantum algorithms, also provided as a service. Specifically, he thinks we will see the successful application of quantum computing to machine learning (e.g., improving fraud detection in the banking sector). With the growing interest in quantum computing, Kuchkovsky believes that in five years there will be a sufficient supply of quantum computing talent to satisfy the demand for quantum computing expertise.

The development of a talent pool of experienced and knowledgeable quantum computing professionals depends, among other things, on close working relationships between academia and industry. These relationships tend to steer researchers toward practical problems and specific business challenges and, in turn, help upgrade the skills of engineers working in large corporations and orient them toward quantum computing.

In Kuchkovsky's estimation, the connection between academia and industry is relatively weaker in Europe compared with the United States. But there are examples of such collaboration, such as BBVA's work with CSIC and the European Union's Quantum Technologies Flagship, which brings together research centers, industry, and public funding agencies.

On July 29, Fujitsu announced a new collaboration with BBVA to test whether a quantum computer could outperform traditional computing techniques in optimizing asset portfolios, helping minimize risk while maximizing returns, based on a decade's worth of historical data. In the release, Kuchkovsky summarized BBVA's motivation for exploring quantum computing: "Our research is helping us identify the areas where quantum computing could represent a greater competitive advantage, once the tools have sufficiently matured. At BBVA, we believe that quantum technology will be key to solving some of the major challenges facing society this decade." Addressing these challenges dovetails with BBVA's strategic priorities, such as fostering the more efficient use of increasingly greater volumes of data for better decision-making as well as supporting the transition to a more sustainable future.


Has the world’s most powerful computer arrived? – The National

The quest to build the ultimate computer has taken a big step forward following breakthroughs in ensuring its answers can be trusted.

Known as a quantum computer, such a machine exploits bizarre effects in the sub-atomic world to perform calculations beyond the reach of conventional computers.

First proposed almost 40 years ago, the quantum computer is the focus of a race among tech giants including Microsoft, Google and IBM to exploit its power, which is expected to transform fields ranging from weather forecasting and drug design to artificial intelligence.

The power of quantum computers comes from their use of so-called qubits, the quantum equivalent of the 1s and 0s bits used by conventional number-crunchers.

Unlike bits, qubits exploit a quantum effect allowing them to be both 1s and 0s at the same time. The impact on processing power is astonishing. Instead of processing, say, 100 bits in one go, a quantum computer could crunch 100 qubits, equivalent to 2 to the power 100, or a million trillion trillion bits.
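A quick arithmetic check of that figure (the snippet below is purely illustrative):

```python
# Check of the article's "2 to the power 100" figure.
states = 2 ** 100
print(states)           # 1267650600228229401496703205376
print(f"{states:.2e}")  # ~1.27e+30, i.e. roughly a million trillion trillion
```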

At least, that is the theory. The problem is that the property of qubits that gives them their abilities, known as quantum superposition, is very unstable.

Once created, even the slightest vibration, temperature shift or electromagnetic signal can disturb the qubits, causing errors in calculations. Unless the superposition can be maintained long enough, the quantum computer either does a few calculations well or a vast amount badly.

For years, the biggest achievement of any quantum computer involved using a few qubits to find the prime factors of 15 (which every schoolchild knows are 3 and 5).

Using complex shielding methods, researchers can now stabilise around 50 qubits long enough to perform impressive calculations.

Last October, Google claimed to have built a quantum computer that solved in 200 seconds a maths problem that would have taken an ultra-fast conventional computer more than 10,000 years.

Yet even this billion-fold speed-up is just a shadow of what would be possible if qubits could be kept stable for longer. At present, many of the qubits have their powers wasted being used to spot and fix errors.

Now two teams of researchers have independently found new ways of tackling the error problem.

Physicists at the University of Chicago have found a way of keeping qubits stable for longer not by blocking disturbances, but by blurring them.


In some quantum computers, the qubits take the form of electrons whose direction of spin is a superposition of both up and down. By adding a constantly flipping magnetic field, the team found that the electrons rotated so quickly that they barely noticed outside disturbances. The researchers explain the trick with an analogy: "It's like sitting on a merry-go-round with people yelling all around you," says team member Dr Kevin Miao. "When the ride is still, you can hear them perfectly, but if you're rapidly spinning, the noise blurs into a background."

Describing their work in the journal Science, the team reported keeping the qubits working for about 1/50th of a second - around 10,000 times longer than their lifetime if left unshielded. According to the team, the technique is simple to use but effective against all the standard sources of disturbance.

Meanwhile, researchers at the University of Sydney have come up with an algorithm that allows a quantum computer to work out how its qubits are being affected by disturbances and fix the resulting errors. Reporting their discovery in Nature Physics, the team says their method is ready for use with current quantum computers, and could work with up to 100 qubits.

These breakthroughs come at a key moment for quantum computing. Even without them, the technology is already spreading beyond research laboratories.

In June, the title of world's most powerful quantum computer was claimed not by a tech giant but by Honeywell, a company perhaps best known for central heating thermostats.

Needless to say, the claim is contested by some, not least because the machine is reported to have only six qubits. But Honeywell points out that it has focused its research on making those qubits ultra-stable, which allows them to work reliably for far longer than rival systems. Numbers of qubits alone, in other words, are not everything.

And the company insists this is just the start. It plans to boost the performance of its quantum computer ten-fold each year for the next five years, making it 100,000 times more powerful still.

But apart from bragging rights, why is a company like Honeywell trying to take on the tech giants in the race for the ultimate computer?

A key clue can be found in remarks made by Honeywell insiders to Forbes magazine earlier this month. These reveal that the company wants to use quantum computers to discover new kinds of materials.

Doing this involves working out how different molecules interact to form materials with the right properties. That's something conventional computers are already used for. But quantum computers won't just bring extra number-crunching power to bear. Crucially, like molecules themselves, their behaviour reflects the bizarre laws of quantum theory. And this makes them ideal for creating accurate simulations of quantum phenomena like the creation of new materials.

This often-overlooked feature of quantum computers was, in fact, the original motivation of the brilliant American physicist Richard Feynman, who first proposed their development in 1981.

Honeywell already has plans to use quantum computers to identify better refrigerants. These compounds were once notorious for attacking the Earth's ozone layer, but replacements still have unwanted environmental effects. Because refrigerants are relatively simple chemicals, the search for better ones is already within the reach of current quantum computers.

But Honeywell sees a time when far more complex molecules such as drugs will also be discovered using the technology.

For the time being, no quantum computer can match the all-round number-crunching power of standard computers. Just as Honeywell made its claim, the Japanese computer maker Fujitsu unveiled a supercomputer capable of over 500 million billion calculations a second.

Even so, the quantum computer is now a reality and before long it will make even the fastest supercomputer seem like an abacus.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK

Updated: August 21, 2020 12:06 PM


Explore the best free cloud backup services on the market – TechTarget

With the recent increase in remote work, now is a great time to try out the cloud for backup.

One of the best ways to get started with cloud backups is to try one of the many available free services. You can sample a variety of providers to get a feel for their strengths and weaknesses before investing in one specific paid product.

The best free cloud backup options are somewhat limited. Even so, there are some providers you should consider.

IDrive might be the best known of the free cloud backup providers. It offers a free basic plan that includes up to 5 GB of storage. The company also offers personal plans starting at $52.12 per year for 5 TB or $74.62 per year for 10 TB.

IDrive also provides business plans, but these plans cost more than the personal plans and generally include less storage. The 250 GB plan starts at $74.62 per year, and a 1.25 TB plan starts at $374.62 per year.

Jottacloud is another popular option for users looking for free cloud backup services. Jottacloud's free personal plan comes with 5 GB of storage and allows for a single user account. The company also offers a personal plan with unlimited cloud storage for 7.5 Euros (about $9) per month, but it throttles upload speeds if the storage consumption exceeds 5 TB. The company also has multiuser personal plans available, starting at 6.5 Euros per month for 1 TB of storage.

Jottacloud has a free business subscription that includes 5 GB of space and supports up to two users. Other business plans range between 8.99 Euros and 29.99 Euros per month. These plans include 1 TB of storage and vary in terms of the number of users that they support. Extra storage costs 6.5 Euros per month per TB.

Those who are looking for a 100% free and practical platform should consider a different approach. Rather than looking for a provider that offers free storage space, the best free cloud backup may ultimately be an open source or "community edition" backup application that you pair with free cloud storage. This could provide you with more storage space and fewer limitations than you would encounter with a free cloud backup product.

Several backup providers offer open source products -- options such as Duplicity. Duplicity is a command-line backup utility that creates encrypted tarballs, which you can write to free cloud storage. Those who prefer a GUI-based tool might consider the Community Edition of Veeam Backup & Replication. It can protect up to 10 workloads and perform an unlimited number of ad hoc backups.
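As a rough illustration of the encrypted-tarball approach Duplicity takes (this is not Duplicity's own code; the directory and file names are placeholders, and it assumes the third-party cryptography package is installed), you can archive a folder, encrypt it client-side, and then push the resulting blob to whichever free cloud storage tier you have chosen:

```python
# Minimal sketch of a Duplicity-style encrypted tarball backup: archive a
# directory, encrypt it locally, and the resulting file is safe to upload
# to any free cloud storage (Google Drive, OneDrive, Box, etc.).
import tarfile
from cryptography.fernet import Fernet

SOURCE_DIR = "documents"            # illustrative path
ARCHIVE = "backup.tar.gz"
ENCRYPTED = "backup.tar.gz.enc"

with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SOURCE_DIR)             # bundle the directory into one archive

key = Fernet.generate_key()         # keep this key somewhere safe, offline
with open(ARCHIVE, "rb") as f:
    token = Fernet(key).encrypt(f.read())
with open(ENCRYPTED, "wb") as f:
    f.write(token)

print("wrote", ENCRYPTED, "- store the key separately:", key.decode())
```

Because the data is encrypted before it leaves the machine, the free storage provider never sees the plaintext, which mirrors the main appeal of pairing an open source backup tool with free cloud storage.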

A number of cloud providers offer free storage, which you can pair with these and other tools. Google, for instance, offers 15 GB of free storage on Google Drive. Similarly, Microsoft provides 5 GB of free storage on OneDrive, and Box offers 10 GB of free storage.

Even the best free cloud backup services almost always have significant limitations. The most common of these limitations is capacity. Backup providers usually only give their customers a relatively small amount of free backup storage before requiring them to pay for additional space.

Other limitations vary widely from one provider to the next. Some providers, for example, license their free products only for personal use, requiring business users to adopt a paid offering. Other providers might restrict the types of data that you can back up to their free product. Providers also sometimes throttle their free cloud backups to keep bandwidth available for their paying customers.


Integrated Media Technologies Joins the Active Archive Alliance – Sports Video Group

Integrated Media Technologies (IMT) has joined the Active Archive Alliance, which promotes modern strategies to solve data growth challenges. IMT's software division recently introduced SoDA, an intelligent data management software application that provides real-time, actionable insights for data management strategies and cloud storage spend. IMT joins a growing number of industry-leading storage and IT vendors that support the use of active archiving strategies, technologies, and use cases to unlock the value of archival data.

"We are pleased to welcome IMT to the Active Archive Alliance," says Peter Faulhaber, chairman of the Active Archive Alliance and president and CEO of FUJIFILM Recording Media U.S.A., Inc. "The fast pace of data growth is increasing demand for active archives as organizations seek ways to cost-effectively manage digital archives while keeping archival data accessible and secure. IMT's software solutions bring new tools and insights to the Alliance and our pursuit to help end users manage large unstructured data growth with innovative archiving solutions."

Rising unstructured data volumes are intensifying the need for active archives, which manage data for rapid search, retrieval, and analytics. Efficient and cost-effective, active archives leverage an intelligent data management layer that enables online access to data throughout its lifecycle, regardless of which storage tier it resides on. Active archive solutions support a wide variety of vertical markets and applications, including media and entertainment, high-res media, rich data streaming, healthcare, telecommunications, life sciences, the internet of things (IoT), artificial intelligence (AI), machine learning, data lakes, and surveillance, among others.

IMT SoDA predicts the cost and speed of data movement between on-prem and cloud solutions, enabling customers to control their spend and understand the time it takes to store and restore archived data in the cloud. SoDA's policy engine tracks archival data for easy accessibility and provides direct access to public cloud storage tiers.

"IMT SoDA delivers a level of simplicity, insight and control that helps customers achieve an ultra-efficient and cost-effective archival solution," says Brian Morsch, senior vice president of worldwide sales, IMT Software. "With SoDA, customers gain actionable insight into the costs associated with each storage offering in the cloud and can archive content easily between tiers."


Storj Labs and FileZilla Collaborate to Offer Secure File Storage in the Remote Work Era – Database Trends and Applications

FileZilla users can now use Storj Labs Tardigrade decentralized cloud storage as the storage service for their files.

FileZilla is a fast, secure, and reliable file transfer tool, making it a great pairing with Tardigrade, according to the vendors. Because Tardigrade is decentralized, all data stored on the service is private by default through end-to-end encryption, ensuring data confidentiality.

On top of that, Tardigrade provides multi-region redundancy for each file at no extra cost and with no complicated configuration.

Tardigrade makes it easy to securely store and transfer data using end-to-end encryption so only the user can access it. No need for complicated setups and configurations to ensure confidentiality. Every file on Tardigrade is encrypted by default.

For developers and engineers, Tardigrade's p2p architecture means it delivers better performance, especially when downloading a file from halfway around the world. Its decentralized architecture also ensures the integrity of each file, as the system constantly audits files stored on the platform to ensure they have not been changed, corrupted, or modified.
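Storj's actual audit protocol is more involved than this, but the underlying idea of detecting any later change to a stored object can be illustrated with a plain content hash; the file name below is just a placeholder.

```python
# Generic sketch of integrity checking by content hash: a digest recorded at
# upload time lets you detect any later change or corruption of the file.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):   # stream the file in 1 MiB chunks
            h.update(block)
    return h.hexdigest()

expected = sha256_of("report.pdf")      # recorded when the file is stored
# ... later, after retrieving the file ...
assert sha256_of("report.pdf") == expected, "file changed or corrupted"
```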

This makes it great for multi-cloud environments, as your data is globally distributed without any extra cost, effort, or configuration.

For more information about this news, visit https://storj.io/.


Cloud Compliance Frameworks: What You Need to Know – Security Boulevard

For those who thought data security was hard when business was primarily on-site: welcome to a new age of complexity. Today's business is mobile, with data stored everywhere in the cloud. However, one thing hasn't changed: customers still demand that organizations keep their data safe. Failure isn't an option, and non-compliance with today's strict regulations brings stiff penalties and, most importantly, the loss of customer trust, something no business can afford.

In this article we will examine the key components of cloud compliance frameworks, introduce examples, and explain why aligning your data security policies and procedures to these compliance frameworks is critical for organizations looking to protect data and maintain customer trust in a mobile world.

Cloud storage and SaaS solutions bring unprecedented speed, agility, and flexibility to a business. However, trusting third-party vendors with sensitive data comes with numerous inherent risks. Here are some challenges to consider when securing your data in the cloud:

Cloud deployments deliver accessibility, but they also create open, decentralized networks with increased vulnerability. This is where cloud compliance frameworks come in. Modern enterprises need the holistic guidance and structure provided by these frameworks to keep data safe in todays dispersed business landscape.

When an organization understands the inherent risks they are exposed to through the use of cloud services, develops policies and processes to manage these risks, and, most importantly, follows through on these policies and processes, they can have higher confidence in their security posture.

Cloud security experts have identified key control categories to mitigate the inherent risk of using cloud services. These are formalized through frameworks such as the Cloud Security Alliance Cloud Controls Matrix (CCM).

Below are the components compliance frameworks utilize to drive a higher level of security in the cloud.

Governance

These preset controls protect your sensitive data from dangerous public exposure. The following are essential areas of cloud governance:

Change Control

Two of the cloud's biggest advantages, speed and flexibility, make controlling change more difficult. Inadequate change control often results in problematic misconfigurations in the cloud. Organizations should consider leveraging automation to continuously check configurations for issues and ensure successful change processes.
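As one example of the kind of automated configuration check this implies (a minimal sketch, assuming boto3 is installed and AWS credentials are already configured, and simplifying away pagination and per-region handling), the script below flags S3 buckets that lack a full public-access block, a classic cloud misconfiguration:

```python
# Minimal automated configuration check: warn about S3 buckets that are not
# fully protected by a public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        blocked = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        blocked = False                 # no public-access block configured at all
    if not blocked:
        print(f"WARNING: {name} is not fully protected from public access")
```

Run on a schedule (or wired into a change pipeline), a check like this turns a one-time configuration review into continuous enforcement.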

Identity and access management (IAM) controls often experience multiple changes in the cloud. Below are a few IAM best practices to keep in mind for your cloud environment:

Continuous Monitoring

The complexity and dispersed nature of the cloud make monitoring and logging all activity extremely important. Capturing the who, what, when, where, and how of events keeps organizations audit-ready and is the backbone of compliance verification. When monitoring and logging data in your cloud environment, it's essential to:

Vulnerability Management

Effectively managing vulnerability starts with a comprehensive knowledge of your environments and identifying potential risks. Smart organizations analyze all software for known weaknesses and watch for the introduction of third-party entities with potential vulnerabilities. Identifying and remediating vulnerabilities is central to any security platform and plays a major role in meeting regulatory requirements.

Reporting

Reporting provides current and historical proof of compliance. Think of these reports as your compliance footprint; they come in very handy at audit time. A complete timeline of all events before and after an incident can provide critical evidence should your compliance ever be questioned. How long you're required to keep these records depends on the individual regulation's requirements: some want only a month or two, while others require much longer. Your team must keep all files in a secure, independent location in the event of an on-site system crash or natural disaster.

These frameworks speak specifically to cloud compliance requirements. Both cloud vendors and customers should be well versed in the specifics of these three frameworks.

Cloud Security Alliance Controls Matrix: This foundational grouping of security controls, created by the Cloud Security Alliance, provides a basic guideline for security vendors, boosting the strength of security control environments and simplifying audits. Additionally, this framework helps potential customers appraise the risk posture of prospective cloud vendors.

The Cloud Security Alliance has developed a certification program called STAR. The value-added CSA STAR certification verifies an above-and-beyond cloud security stance that carries weight with customers. This overachiever's set of standards may be the best asset for customers looking to assess a vendor's commitment to security, and a must for all organizations looking to cement customer trust. Further, the STAR registry documents the security and privacy controls provided by popular cloud computing offerings, so cloud customers can assess their security providers and make good purchasing decisions.

FedRAMP: Meeting this set of cloud-specific data security regulations is a must for organizations looking to do business with any federal agency. FedRAMP's purpose is to ensure all cloud deployments used by the federal government have the minimum required level of protection for data and applications. Be prepared: becoming FedRAMP compliant can be a long, detailed, and exhaustive process even for well-staffed organizations. A System Security Plan documenting controls must be submitted to the Joint Authorization Board (JAB), followed by an assessment and authorization. Organizations must then demonstrate continuous compliance to retain FedRAMP status.

Sarbanes-Oxley (SOX): We can thank well-publicized financial scandals like Enron for this set of financial regulatory requirements. SOX is a set of guidelines governing how publicly traded companies report financial data, designed to protect customers from errors in reporting or fraud. SOX regulations aren't security-specific, but a variety of IT security controls are included within the scope of SOX because they support data integrity. However, SOX audits cover just a small portion of cloud security and IT infrastructure. SOX shouldn't be taken lightly, as violators can expect harsh penalties, including fines up to five million dollars or up to twenty years in jail.

Organizations handling sensitive data can benefit from adhering to the standards set by the following security-specific regulations. These frameworks provide the methodology and structure to help avoid damaging security incidents. Here are four frameworks that organizations should have on their radar.

ISO 27001: Developed by the International Organization for Standardization, this international set of standards for information security management systems demonstrates that your organization operates within the best practices of information security and takes data protection seriously. Any company handling sensitive data should seriously consider adding ISO 27001 to its compliance resume. ISO 27002 supports this regulation by detailing the specific controls required for compliance under ISO 27001 standards.

NIST Cybersecurity Framework: This foundational policy and procedure standard for private-sector organizations appraises their ability to manage and mitigate cyber attacks. A best practice guide for security pros, this framework assists in understanding and managing risk and should be mandatory reading for those on the first line of defense. The NIST Cybersecurity Framework is built around five core functions: identify, protect, detect, respond, and recover. Back in 2015, Gartner estimated that 50% of United States organizations would use the NIST Cybersecurity Framework by 2020.

CIS Controls: The Center for Internet Security created this guideline of best practices for cyber defense. This framework delivers actionable defense practices based on a list of 20 Critical Security Controls, which focus on tightening access controls, defense system hardening, and continuous monitoring of environments. The first six are described as basic controls, the middle ten as foundational controls, and the remaining four as organizational controls.

These frameworks can be considered best practice guidelines for cloud architects, commonly addressing operational efficiency, security, and cost-value considerations. Here are three for cloud architects to keep front of mind.

AWS Well-Architected Framework: This best practice guideline helps Amazon Web Services architects design workloads and applications in the Amazon cloud. This framework operates around a set of questions for the critique of cloud environments and provides customers with a solid resource for architecture evaluation. Five key principles guide Amazon architects: operational excellence, security, reliability, performance efficiency, and cost optimization.

Google Cloud Architected Framework: This best practice guideline provides a foundation for constructing and enhancing Google Cloud offerings. This framework guides architects by focusing on four key principles: operational excellence, security and compliance, reliability, and performance and cost optimization.

Azure Architecture Framework: This set of best practice guidelines assists architects constructing cloud-based offerings in Microsoft Azure. This guide helps maximize architecture workloads and is based on principles similar to those found in the AWS and Google Cloud frameworks, including cost optimization to drive increased value, operational excellence and performance efficiency to keep systems functional, reliability to recover from failures, and security for data protection.

Customers want to know they can trust your organization to keep their data safe. If your organization wants to conduct business with the federal government, achieving certain cloud security certifications is the procurement gate.

Cloud compliance frameworks provide the guidelines and structure necessary for maintaining the level of security your customers demand.

Additionally, these frameworks will help you navigate a regulatory minefield and avoid the steep financial and reputational cost of non-compliance. Most importantly, implementing a compliance framework will allow your organization to verify your commitment to privacy and data protection. This will keep you out of trouble with regulators and boost credibility and trust with your customers.

Security and compliance, though different, are interrelated and have significant overlap. These areas of overlap can create dangerous gaps in your defense. Innovative, continuous compliance solutions, such as those provided by Hyperproof, can help organizations identify and manage overlaps between security and compliance risk mitigation strategies to create safer environments.

Hyperproof makes the process of gaining cloud security certifications (e.g., ISO 27001, FedRAMP) and maintaining them faster and easier. Our compliance operations software allows you to see and understand all the requirements of a compliance framework. You can create controls to meet the requirements and assign controls to your team to operate or monitor. Ultimately, this will help your compliance team save time gathering evidence to verify the operating effectiveness of internal controls, so compliance and security leaders can spend more time on controls testing. Hyperproof also has a Crosswalks feature that clearly identifies the overlapping requirement areas across multiple security frameworks. This allows you to leverage your existing compliance efforts to achieve certification in additional frameworks faster. Hyperproof's compliance solution provides analytics and dashboards to run a continuous monitoring program to verify your compliance status and drive remediation efforts.

To see how Hyperproof helps you gain control of your compliance efforts, sign up for a personalized demo.

Mark Knowles is a freelance content marketing writer specializing in articles, e-books, and whitepapers on cybersecurity, automation, and artificial intelligence. Mark has experience creating fresh content, engaging audiences, and establishing thought leadership for many top tech companies. He is based in the sunny state of Arizona but enjoys traveling the world and writing remotely.

Banner photo by Christina Morillo from Pexels


*** This is a Security Bloggers Network syndicated blog from Hyperproof authored by Hyperproof Team. Read the original post at: https://hyperproof.io/resource/cloud-compliance-frameworks/?utm_source=rss&utm_medium=rss&utm_campaign=cloud-compliance-frameworks


Reevert Unveils Advanced Tools to Enhance Network Security and Efficiency for Remote Workforces – PRNewswire

LOS ANGELES, Aug. 26, 2020 /PRNewswire/ -- As widespread remote working places unprecedented strain on IT networks, reevert, an intelligent hybrid data backup and storage solution, announces powerful new features to help managers keep systems operating safely and efficiently. Designed to ITAR standards for defense contractors, reevert's new tools provide the highest level of protection against data loss and threats such as ransomware, which can cost companies millions of dollars in payments and lost productivity.

reevert stores, secures and backs up the data of high-profile organizations, including the Rose Bowl stadium and financial service and healthcare companies that require sophisticated security protocols. reevert's clients can now benefit from features that include a new monitoring system that provides detailed information on any computer in a network and alerts IT managers to issues before they escalate. Also, an intuitive new secure VPN system protects devices when staff are working on unsecured personal WiFi, while back-end improvements enhance data upload speeds on slower home networks.

Ara Aslanian, co-founder and CEO of reevert, said the company had accelerated development and deployment of these new capabilities to meet the rapidly evolving needs of network managers and the growing list of cybersecurity threats caused by remote working.

"Companies know that their data is a competitive advantage and work hard to secure it," said Aslanian. "But the switch to remote working happened so suddenly that many firms had little time to prepare their networks. These upgrades will enable customers to protect themselves from cybercriminals trying to exploit vulnerabilities in remote workforces and help them maintain the integrity of the data on which their businesses depend."

reevert 1.14.4.0 highlights:

A complete list of upgrades and more information is available here. The enhancements are available immediately to reevert customers and on Amazon Marketplace.

About reevert: reevert is an intelligent hybrid backup and storage solution, designed from the ground up specifically to protect businesses against ransomware and data loss. It features fast hourly snapshots, safeguards your data and backups, and allows quick recovery. reevert can image servers and computers, protect network shares and local files, and offers offsite cloud data backups.

For more information please visit reevert.com

Media Contact: David Paterson, Venture PR, 424-230-3770, [emailprotected]

SOURCE reevert

https://www.reevert.com


Enhancing Network Visibility for SD-WAN in the Era of Cloud and SaaS – The Fast Mode

As COVID-19 continues to plague all crevices and corners of the world, people have turned to social media, gaming, OTT video and even collaborative workout apps to maintain some sort of social interaction and normalcy in their lives. According to Infinera, Facebook's daily website traffic has increased by 27%, the number of WhatsApp calls and messages has doubled, and on one occasion Americans streamed 50,000 years' worth of content in a single day.

It's not just social interaction that has gone online; working from home has also soared in popularity as lockdowns and health concerns make it nearly impossible for enterprises to host all their employees in central offices. This took place just as enterprises were intensifying their shift to cloud platforms and Software-as-a-Service (SaaS), a move that saw enterprise applications for conferencing, messaging, email, accounting, customer relationship management, database management and much more being delivered from the web.

The shift to Cloud and SaaS

Cloud platforms such as Google Cloud Platform Compute Engine, Google Cloud Storage, Microsoft Azure and IBM Cloud are virtually managed computing infrastructures on which enterprises can configure and run their own operating systems, middleware, and applications. According to Gartner, Infrastructure-as-a-Service (IaaS) is forecast to grow 24% year over year to USD 74.1 billion in 2022, growth that can be attributed to the fact that IaaS is scalable on demand, cost-efficient, secure, and reduces the time needed to deploy apps or services.

SaaS is software that is centrally hosted, rented out to enterprises for a monthly or annual subscription fee and is often multi-tenanted. According to BetterCloud, 78% of organizations expect nearly all their apps to be SaaS by 2022. As an example, many enterprises have moved from legacy on-premise internal communication systems such as Oracle Beehive and IBM MQ to either their SaaS counterparts such as IBM MQ on Cloud or to more recently released applications such as Zoom, Skype for Business and Microsoft Teams. As of April 9 this year, Microsoft Teams had a new daily record of 2.7 billion meeting minutes in one day, a 200 percent increase from 900 million on March 16. Total video calls on the platform had grown by over 1,000 percent in the same month, indicating a rising preference among enterprises.

Such a major shift in how enterprises manage and deliver their business applications has created new demands on enterprise networks, driving adoption of the Software-Defined Wide Area Network (SD-WAN). An SD-WAN is essentially a software-controlled, responsive, flexible WAN that aggregates and delivers bandwidth over multiple transport modes, including MPLS, broadband/Internet, 4G/5G and even satellite.

With enterprises now largely dependent on Cloud and SaaS applications, SD-WAN offers path optimization that allows efficient management of network traffic. For example, an HSBC employee connecting to HSBC's apps on Google Cloud Platform, or processing data with an API provided by Google Apps, will no longer clog the dedicated MPLS lines backhauling to the enterprise's secured data center; instead, their traffic will be routed securely to the Internet through centrally controlled firewalls at the branch node.
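As a rough illustration of the branch-level path selection described above, the sketch below steers trusted SaaS traffic out through a local Internet breakout while keeping internal traffic on the MPLS backhaul. All names (AppClass, Link, select_path) and thresholds are assumptions for the example, not a vendor API; real SD-WAN controllers work with far richer telemetry and policy engines.

```python
# Minimal sketch of branch-level path selection for SD-WAN local breakout.
# Classes, names and thresholds are illustrative, not a vendor API.

from dataclasses import dataclass
from enum import Enum

class AppClass(Enum):
    TRUSTED_SAAS = "trusted_saas"   # sanctioned cloud/SaaS applications
    INTERNAL = "internal"           # applications hosted in the corporate data center
    UNKNOWN = "unknown"             # unclassified traffic

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float

def select_path(app_class: AppClass, mpls: Link, broadband: Link) -> str:
    """Steer trusted SaaS traffic straight to the Internet via the branch
    firewall; keep internal and unknown traffic on the MPLS path that
    backhauls to the secured data center."""
    if app_class is AppClass.TRUSTED_SAAS and broadband.loss_pct < 1.0:
        return f"local breakout via {broadband.name} (branch firewall)"
    return f"backhaul via {mpls.name} to data center"

print(select_path(AppClass.TRUSTED_SAAS,
                  Link("MPLS-1", 35.0, 0.1),
                  Link("Broadband-1", 18.0, 0.3)))
```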

The intelligent network

At this point, the biggest challenge for every enterprise is to create an intelligent network - one that is able to leverage bandwidth and network resources to deliver traffic most efficiently. Part of efficient traffic management goes back to the trade-off between network costs and end-user experience, and this is where application performance monitoring comes into play. With each application boasting its own architecture - whether a monolithic stack hosted in the enterprise's own data center, a distributed cloud architecture, or simply a web application delivered in a SaaS model - the application's traffic has to be managed in ways that are optimized to its build, as well as to its criticality and performance requirements.

By monitoring application performance metrics - for example, average response times under peak load, transaction execution times and bandwidth consumption - enterprises are able to decide the best-suited traffic management policies for each application.
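A minimal sketch of that decision step follows, assuming invented thresholds and traffic-class names: it aggregates observed response times and bandwidth for an application and derives a traffic-management class from them. The figures are illustrative only, not values taken from the article.

```python
# Sketch: derive a traffic-management class from observed application metrics.
# Thresholds and class names are illustrative assumptions.

from statistics import quantiles

def p95(samples):
    """95th percentile of a list of measurements."""
    return quantiles(samples, n=100)[94]

def classify_app(response_times_ms, bandwidth_mbps, latency_sensitive):
    peak_response = p95(response_times_ms)
    if latency_sensitive and peak_response > 150:
        return "priority"          # e.g. conferencing struggling under peak load
    if bandwidth_mbps > 50 and not latency_sensitive:
        return "best-effort-bulk"  # e.g. backups, large file sync
    return "default"

# Example: a conferencing application measured under peak load
samples = [90, 110, 180, 200, 95, 160, 210, 130, 175, 190] * 10
print(classify_app(samples, bandwidth_mbps=4, latency_sensitive=True))
```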

One of the most touted benefits of SD-WAN is the execution of network policies via dynamic provisioning of network services such as firewalls, load balancers and session controllers. Today, the use of virtualized Customer Premises Equipment (vCPE) and universal Customer Premises Equipment (uCPE) at branch nodes enables network services to be deployed as Virtualized Network Functions (VNFs) on Commercial Off-the-Shelf (COTS) servers. With SD-WAN, centrally controlled orchestrators can manage these network functions remotely, allowing network services to respond instantaneously to the type of application being delivered.

The need for application awareness

However, to respond to the demands of the traffic and dynamically provision network services by application type, networks require application awareness. Identifying an application, its attributes or its application family allows networks to enforce the corresponding policies. Over time, the use of Artificial Intelligence (AI) and Machine Learning (ML) will allow automated responses to traffic types based on past responses to different applications and security threats.
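To make the idea of application-aware policy enforcement concrete, the sketch below maps a classified application family to a network policy. The family names, queue classes and path choices are assumptions for illustration; in practice they would come from a DPI engine and the enterprise's own policy catalogue.

```python
# Sketch: map a classified application family to an enforcement policy.
# Family and policy names are illustrative assumptions.

POLICY_BY_FAMILY = {
    "conferencing":  {"queue": "expedited", "path": "best-latency",   "firewall": "standard"},
    "saas-business": {"queue": "assured",   "path": "local-breakout", "firewall": "standard"},
    "file-transfer": {"queue": "bulk",      "path": "any",            "firewall": "standard"},
    "unknown":       {"queue": "default",   "path": "backhaul",       "firewall": "strict-inspection"},
}

def enforce(app_family: str) -> dict:
    """Return the policy for a classified application family,
    falling back to the restrictive 'unknown' policy."""
    return POLICY_BY_FAMILY.get(app_family, POLICY_BY_FAMILY["unknown"])

print(enforce("conferencing"))
print(enforce("peer-to-peer"))  # unclassified -> strict inspection, backhauled
```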

While this may sound straightforward in theory, the implementation is beset by a myriad of new challenges. On one hand, there is a continuous rise in the number of applications, the intensity of their use and changes in their security vulnerabilities. On the other hand, enterprise networks are expanding to cover 5G network slices and IoT networks, with 10G Ethernet connectivity becoming a viable offload option. Past policies built on outdated network data are losing relevance in the face of surges in usage of specific applications and an emerging breed of cybersecurity threats such as deepfakes, phishing and AI-enhanced cyberattacks.

This is where deep packet inspection (DPI) technology such as R&S PACE 2 comes into play. DPI analyzes IP traffic in real time, extracts content and metadata, and classifies applications. With a constantly updated library of traffic signatures, networks are able to embed intelligence at both the traffic and application layers, identify the type of traffic traversing them and institute the right policies. Matching application types to network conditions, and overlaying this on all available network options and resources, enables enterprises to steer each application in the most efficient way. Just recently, a leading Indian cybersecurity provider, Nubewell, developed a Smart SD-WAN that builds on the network analytics, traffic management and traffic monitoring provided by our DPI software R&S PACE 2 to enforce enterprises' security policies and prevent any network misuse resulting from obfuscation. With accurate high-speed DPI-based classification as well as weekly signature updates from Rohde & Schwarz, Nubewell's entry to the SD-WAN market was swift, secure and successful. To find out more, download our case study with Nubewell.
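R&S PACE 2 itself is a commercial engine whose API is not reproduced here. As a loose illustration of the underlying idea of signature-based classification, the toy sketch below matches byte patterns in a payload against a small signature table; real DPI engines combine such pattern matching with behavioural and statistical analysis and continuously updated signature libraries.

```python
# Toy illustration of signature-based traffic classification.
# The signature table and payloads are invented for the example;
# this is not the R&S PACE 2 API.

SIGNATURES = {
    b"\x16\x03\x01": "tls-handshake",  # TLS handshake record prefix (illustrative)
    b"GET ":         "http",
    b"SSH-2.0":      "ssh",
}

def classify_payload(payload: bytes) -> str:
    """Return the application label whose signature the payload starts with."""
    for pattern, label in SIGNATURES.items():
        if payload.startswith(pattern):
            return label
    return "unknown"

print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # http
print(classify_payload(b"SSH-2.0-OpenSSH_8.2"))           # ssh
print(classify_payload(b"\x00\x01binary"))                # unknown
```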

In the aftermath of the pandemic, traffic and application awareness will become an indispensable feature of SD-WAN as enterprises grapple with thousands of users trying to access thousands of applications from dispersed locations, on various devices and connection types. Prioritization of business-critical and latency-sensitive applications over regular file backups and email, as well as continuous optimization of networks, will become an essential part of IT teams' daily routine. This in turn will lead to increasing demand for SD-WAN solutions with embedded intelligence that are able to provide real-time analytics on both application and network performance. At the end of the day, the collective experience on each application is what determines the overall verdict of both internal and external users on the ability of the enterprise to deliver on its promise.

To learn more:

Download our whitepaper: SD-WAN and DPI - A powerful combination

Download our customer case study: SD-WAN application security through DPI

Visit link:
Enhancing Network Visibility for SD-WAN in the Era of Cloud and SaaS - The Fast Mode

Read More..

The term ‘ethical AI’ is finally starting to mean something – Report Door

Earlier this year, the independent research organisation of which I am the Director, the London-based Ada Lovelace Institute, hosted a panel at the world's largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title referenced both a tongue-in-cheek effort at self-promotion and a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles.

And we were not alone. 2020 has seen the emergence of a new wave of ethical AI - one focused on the tough questions of power, equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.

Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol, prompting serious consideration of the likelihood that general AI was within reach. And algorithmically curated chaos on the world's duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year - Brexit and Trump's election.

In a panic over how to understand and prevent the harm that was so clearly to follow, policymakers and tech developers turned to philosophers and ethicists to develop codes and standards. These often recycled a subset of the same concepts and rarely moved beyond high-level guidance or contained the specificity needed to speak to individual use cases and applications.

This first wave of the movement focused on ethics over law, neglected questions related to systemic injustice and control of infrastructures, and was unwilling to deal with what Michael Veale, Lecturer in Digital Rights and Regulation at University College London, calls the question of problem framing: early ethical AI debates usually took it as a given that AI would be helpful in solving problems. These shortcomings left the movement open to the critique that it had been co-opted by the big tech companies as a means of evading greater regulatory intervention. And those who believed big tech companies were controlling the discourse around ethical AI saw the movement as ethics washing. The flow of money from big tech into codification initiatives, civil society, and academia advocating for an ethics-based approach only underscored the legitimacy of these critiques.

At the same time, a second wave of ethical AI was emerging. It sought to promote the use of technical interventions to address ethical harms, particularly those related to fairness, bias and non-discrimination. The domain of fair-ML was born out of an admirable objective on the part of computer scientists to bake fairness metrics or hard constraints into AI models to moderate their outputs.

This focus on technical mechanisms for addressing questions of fairness, bias, and discrimination addressed the clear concerns about how AI and algorithmic systems were inaccurately and unfairly treating people of color or ethnic minorities. Two specific cases contributed important evidence to this argument. The first was the Gender Shades study, which established that facial recognition software deployed by Microsoft and IBM returned higher rates of false positives and false negatives for the faces of women and people of color. The second was the 2016 ProPublica investigation into the COMPAS sentencing algorithmic tool, which found that Black defendants were far more likely than White defendants to be incorrectly judged to be at a higher risk of recidivism, while White defendants were more likely than Black defendants to be incorrectly flagged as low risk.

Second-wave ethical AI narrowed in on these questions of bias and fairness, and explored technical interventions to solve them. In doing so, however, it may have skewed and narrowed the discourse, moving it away from the root causes of bias and even exacerbating the position of people of color and ethnic minorities. As Julia Powles, Director of the Minderoo Tech and Policy Lab at the University of Western Australia, argued, alleviating the problems with dataset representativeness merely co-opts designers in perfecting vast instruments of surveillance and classification. When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Some also saw the fair-ML discourse as a form of co-option of socially conscious computer scientists by big tech companies. By framing ethical problems as narrow issues of fairness and accuracy, companies could equate expanded data collection with investing in ethical AI.

The efforts of tech companies to champion fairness-related codes illustrate this point: In January 2018, Microsoft published its ethical principles for AI, starting with fairness; in May 2018, Facebook announced a tool to search for bias called Fairness Flow; and in September 2018, IBM announced a tool called AI Fairness 360, designed to check for unwanted bias in datasets and machine learning models.

What was missing from second-wave ethical AI was an acknowledgement that technical systems are, in fact, sociotechnical systems - they cannot be understood outside of the social context in which they are deployed, and they cannot be optimised for societally beneficial and acceptable outcomes through technical tweaks alone. As Ruha Benjamin, Associate Professor of African American Studies at Princeton University, argued in her seminal text, Race After Technology: Abolitionist Tools for the New Jim Code, the road to inequity is paved with technical fixes. The narrow focus on technical fairness is insufficient to help us grapple with all of the complex tradeoffs, opportunities, and risks of an AI-driven future; it confines us to thinking only about whether something works, but doesn't permit us to ask whether it should work. That is, it supports an approach that asks "What can we do?" rather than "What should we do?"

On the eve of the new decade, MIT Technology Review's Karen Hao published an article entitled "In 2020, let's stop AI ethics-washing and actually do something." Weeks later, the AI ethics community ushered in 2020 clustered in conference rooms in Barcelona for the annual ACM Fairness, Accountability and Transparency conference. Among the many papers that had tongues wagging was one written by Elettra Bietti, Kennedy Sinclair Scholar Affiliate at the Berkman Klein Center for Internet and Society, which called for a move beyond the ethics-washing and ethics-bashing that had come to dominate the discipline. Those two pieces heralded a cascade of interventions that saw the community reorienting around a new way of talking about ethical AI, one defined by justice - social justice, racial justice, economic justice, and environmental justice. It has seen some eschew the term ethical AI in favor of just AI.

As the wild and unpredicted events of 2020 have unfurled, third-wave ethical AI has begun to take hold alongside them, strengthened by the immense reckoning that the Black Lives Matter movement has catalysed. Third-wave ethical AI is less conceptual than first-wave ethical AI and is interested in understanding applications and use cases. It is much more concerned with power, alive to vested interests, and preoccupied with structural issues, including the importance of decolonising AI. An article published by Pratyusha Kalluri, founder of the Radical AI Network, in Nature in July 2020 epitomized the approach, arguing that when the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people.

What has this meant in practice? We have seen courts begin to grapple with, and political and private sector players admit to, the real power and potential of algorithmic systems. In the UK alone, the Court of Appeal found the use by police of facial recognition systems unlawful and called for a new legal framework; a government department ceased its use of AI for visa application sorting; the West Midlands police ethics advisory committee argued for the discontinuation of a violence-prediction tool; and high school students across the country protested after tens of thousands of school leavers had their marks downgraded by an algorithmic system used by the education regulator, Ofqual. New Zealand published an Algorithm Charter, and France's Etalab - a government task force for open data, data policy, and open government - has been working to map the algorithmic systems in use across public sector entities and to provide guidance.

The shift in gaze of ethical AI studies away from the technical towards the socio-technical has brought more issues into view, such as the anti-competitive practices of big tech companies, platform labor practices, parity in negotiating power in public sector procurement of predictive analytics, and the climate impact of training AI models. It has seen the Overton window contract in terms of what is reputationally acceptable from tech companies; after years of campaigning by researchers like Joy Buolamwini and Timnit Gebru, companies such as Amazon and IBM have finally adopted voluntary moratoria on their sales of facial recognition technology.

The COVID crisis has been instrumental, surfacing technical advancements that have helped to fix the power imbalances that exacerbate the risks of AI and algorithmic systems. The availability of the Google/Apple decentralised protocol for enabling exposure notification prevented dozens of governments from launching invasive digital contact-tracing apps. At the same time, governments' responses to the pandemic have inevitably catalysed new risks, as public health surveillance has segued into population surveillance, facial recognition systems have been enhanced to work around masks, and the threat of future pandemics is leveraged to justify social media analysis. The UK's attempt to operationalize a weak Ethics Advisory Board to oversee its failed attempt at launching a centralized contact-tracing app was the death knell for toothless ethical figureheads.

Research institutes, activists, and campaigners united by the third-wave approach to ethical AI continue to work to address these risks, with a focus on practical tools for accountability (we at the Ada Lovelace Institute, and others such as AI Now, are working on developing audit and assessment tools for AI, and the Omidyar Network has published its Ethical Explorer toolkit for developers and product managers), litigation, protest, and campaigning for moratoria and bans.

Researchers are interrogating what justice means in data-driven societies, and institutes such as Data & Society, the Data Justice Lab at Cardiff University, the JUST DATA Lab at Princeton, and the Global Data Justice project at the Tilburg Institute for Law, Technology, and Society in the Netherlands are churning out some of the most novel thinking. The Minderoo Foundation has just launched its new Future Says initiative with a $3.5 million grant, which aims to tackle lawlessness, empower workers, and reimagine the tech sector. The initiative will build on the critical contribution of tech workers themselves to the third wave of ethical AI, from AI Now co-founder Meredith Whittaker's organizing work at Google before her departure last year, to the walkouts and strikes by Amazon logistics workers and Uber and Lyft drivers.

But the approach of third-wave ethical AI is by no means accepted across the tech sector yet, as evidenced by the recent acrimonious exchange between AI researchers Yann LeCun and Timnit Gebru about whether the harms of AI should be reduced to a focus on bias. Gebru not only reasserted well-established arguments against a narrow focus on dataset bias but also made the case for a more inclusive community of AI scholarship.

Mobilized by social pressure, the boundaries of acceptability are shifting fast, and not a moment too soon. But even those of us within the ethical AI community have a long way to go. A case in point: although we'd programmed diverse speakers across the event, the Ethics Panel to End All Ethics Panels we hosted earlier this year failed to include a person of color, an omission for which we were rightly criticized and hugely regretful. It was a reminder that as long as the domain of AI ethics continues to platform certain types of research approaches, practitioners, and ethical perspectives to the exclusion of others, real change will elude us. Ethical AI cannot be defined only from the position of European and North American actors; we need to work concertedly to surface other perspectives, other ways of thinking about these issues, if we truly want to find a way to make data and AI work for people and societies across the world.

Carly Kind is a human rights lawyer, a privacy and data protection expert, and Director of the Ada Lovelace Institute.

Visit link:
The term 'ethical AI' is finally starting to mean something - Report Door

Read More..