
Nunez finds her match with affirmation candles – The Kaplan Herald

As a small reminder to be grateful and stay positive, Tammy Nunez and her daughter Jordan decided to expand their business after Tammy began burning candles and dedicating prayers to those she felt needed them. She said that since times had been trying for many in the age of COVID, she found herself creating a personal prayer space with her candles. Eventually, many people requested prayer candles, Tammy said, and that's how the candles began.

What started as making soap for her daughters, who had sensitive skin and eczema, turned into a business that is as pure and natural as it comes. Nunez said she wanted to go the natural route. By controlling what went into her products, she did her research, adjusted her recipes based on how her daughters' skin responded, and now has it down to a science.

"I'm a huge science nerd," Nunez said. "The teacher in me didn't mind doing all the research that needed to go into making soaps, so when I began to make different items, such as the candles, it wasn't really that different."

Having started making her products around the year 2000, Nunez couldn't find candles that met her expectations, which prompted her to spend hours, weeks, and months testing waxes, wicks, and fragrances to perfect her candle product. Wanting to share the love, affirmations, and positivity, Nunez perfected her product by researching various prayer candles for angels and saints. "This led me to healing and affirmation candles. Colors, scents, and crystals are associated with particular angels and saints; that's when I immediately connected with affirmation candles."

As a mother and retired teacher, Nunez knows that words matter, so beginning each day with a positive affirmation or prayer makes things seem a little brighter. Her mission? To bring a little light to others. Affirmations and prayers of hope, gratitude, protection, love, prosperity, courage, and confidence are found on each candle. Whether it is a gift to yourself or someone else, each time you light the candle, the prayer is lit and nature takes its course through all of the natural ingredients and love put into each candle.

The small family-owned business, JoMama's Naturals, is located deep in the heart of Cajun country, outside of Erath. The company stems from a deep connection to Mother Earth and the joy of creating. "Every creation is handmade and sent from our home to yours," said Nunez.

But what's behind the name? Jo, for Jordan, Tammy's daughter, who has her hands in wax as deep as her mother's, and Mama (Tammy). So, JoMama's Naturals was officially born. No mass production happens in their kitchen, though. Instead, each ingredient is chosen with a specific goal in mind. "Everything is made and packaged by our hands, in our home, in small batches and delivered to yours," Nunez said.

There are many different fragrance descriptions across JoMama's products. For example, the herbal candles use the power of your intentions (and hers) to guide your actions toward achieving the results you project in your thoughts, which makes them popular with those who meditate or do yoga.

The now-retired teacher is proud of her business. Having taught for 30 years, Nunez still finds herself fulfilled since she can still feed so many people. Teaching is a food source for the mind, but so are the candles she makes. All of JoMama's Naturals products are vegan and cruelty-free. Many of the products are created with essential oils, some without scent.
These can be found, along with detailed information about the business and exactly what is in their products, on their website jomama311@jomamasnaturals.com.

More:
Nunez finds her match with affirmation candles - The Kaplan Herald

Read More..

How a quantum computer could break 2048-bit RSA encryption …

But quantum computers change this thinking. These machines are far more powerful than classical computers and should be able to break these codes with ease.

That raises an important question: when will quantum computers be powerful enough to do this? After that date, any information protected by this form of encryption becomes insecure.

So computer scientists have attempted to calculate the resources such a quantum computer might need and then work out how long it will be until such a machine can be built. And the answer has always been decades.

Today, that thinking needs to be revised thanks to the work of Craig Gidney at Google in Santa Barbara and Martin Ekerå at the KTH Royal Institute of Technology in Stockholm, Sweden. These guys have found a more efficient way for quantum computers to perform the code-breaking calculations, reducing the resources they require by orders of magnitude.

Consequently, these machines are significantly closer to reality than anyone suspected. The result will make uncomfortable reading for governments, military and security organizations, banks, and anyone else who needs to secure data for 25 years or longer.

First some background. Back in 1994, the American mathematician Peter Shor discovered a quantum algorithm that outperformed its classical equivalent. Shor's algorithm factors large numbers and is the crucial element in the process for cracking trapdoor-based codes.

Trapdoor functions are based on the process of multiplication, which is easy to perform in one direction but much harder to do in reverse. For example, it is trivial to multiply two numbers together: 593 times 829 is 491,597. But it is hard to start with the number 491,597 and work out which two prime numbers must be multiplied to produce it.
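To make the asymmetry concrete, here is a minimal Python sketch using the toy numbers from the example above (real RSA uses 2048-bit moduli, so this is illustration only): multiplying the two primes is a single operation, while recovering them by brute-force trial division is the slow reverse direction.

```python
# Toy illustration of the trapdoor asymmetry behind RSA.
# Real keys use 2048-bit moduli; these are the small numbers from the article.

p, q = 593, 829
n = p * q                 # easy direction: one multiplication
print(n)                  # 491597

def factor_by_trial_division(n):
    """Recover the smallest prime factor by brute force -- feasible only for tiny numbers."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1           # n itself is prime

print(factor_by_trial_division(491597))   # (593, 829)
```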

And it becomes increasingly difficult as the numbers get larger. Indeed, computer scientists consider it practically impossible for a classical computer to factor numbers that are longer than 2048 bits, which is the basis of the most commonly used form of RSA encryption.

Shor showed that a sufficiently powerful quantum computer could do this with ease, a result that sent shock waves through the security industry.

And since then, quantum computers have been increasing in power. In 2012, physicists used a four-qubit quantum computer to factor 143. Then in 2014 they used a similar device to factor 56,153.

It's easy to imagine that at this rate of progress, quantum computers should soon be able to outperform the best classical ones.

Not so. It turns out that quantum factoring is much harder in practice than might otherwise be expected. The reason is that noise becomes a significant problem for large quantum computers. And the best way currently to tackle noise is to use error-correcting codes that require significant extra qubits themselves.

Taking this into account dramatically increases the resources required to factor 2048-bit numbers. In 2015, researchers estimated that a quantum computer would need a billion qubits to do the job reliably. That's significantly more than the 70 qubits in today's state-of-the-art quantum computers.

On that basis, security experts might well have been able to justify the idea that it would be decades before messages with 2048-bit RSA encryption could be broken by a quantum computer.

Now Gidney and Ekerå have shown how a quantum computer could do the calculation with just 20 million qubits. Indeed, they show that such a device would take just eight hours to complete the calculation. "[As a result], the worst case estimate of how many qubits will be needed to factor 2048 bit RSA integers has dropped nearly two orders of magnitude," they say.

Their method focuses on a more efficient way to perform a mathematical process called modular exponentiation. This is the process of finding the remainder when a number is raised to a certain power and then divided by another number.
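As a small, hedged illustration of the operation itself (not of Gidney and Ekerå's qubit-level optimizations), Python exposes modular exponentiation directly through the three-argument pow, which uses square-and-multiply rather than computing the full power:

```python
# Modular exponentiation: the remainder when base**exponent is divided by modulus.
# This is the operation that dominates the cost of Shor's algorithm.

base, exponent, modulus = 7, 560, 561

naive = (base ** exponent) % modulus     # computes the full power first: hopeless at crypto sizes
fast = pow(base, exponent, modulus)      # square-and-multiply, keeping numbers small throughout

print(naive, fast)                       # both print 1
```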

This process is the most computationally expensive operation in Shor's algorithm. But Gidney and Ekerå have found various ways to optimize it, significantly reducing the resources needed to run the algorithm.

Thats interesting work that should have important implications for anyone storing information for the future. A 20-million-qubit quantum computer certainly seems a distant dream today. But the question these experts should be asking themselves is whether such a device could be possible within the 25 years they want to secure the information. If they think it is, then they need a new form of encryption.

Indeed, security experts have developed post-quantum codes that even a quantum computer will not be able to crack. So it is already possible to safeguard data today against future attack by quantum computers. But these codes are not yet used as standard.

For ordinary people, there is little risk. Most people use 2048-bit encryption, or something similar, for tasks like sending credit card details over the internet. If these transactions are recorded today and broken in 25 years, little will be lost.

But for governments, there is more at stake. The messages they send today, between embassies or the military, for example, may well be significant in 20 years and so worth keeping secret. If such messages are still being sent via 2048-bit RSA encryption, or something similar, then these organizations should start worrying, quickly.

Ref: arxiv.org/abs/1905.09749 : How To Factor 2048 Bit RSA Integers In 8 Hours Using 20 Million Noisy Qubits

See original here:
How a quantum computer could break 2048-bit RSA encryption ...

Read More..

Atos confirms role as global leader in quantum hybridization technologies at its 8th Quantum Advisory Board – Yahoo Finance

Paris, France - December 3, 2021 - At the meeting of the 8th Atos Quantum Advisory Board, a group of international experts, mathematicians and physicists, authorities in their fields, Atos reaffirms its position as a global leader in quantum computing technologies. The quantum hybridization axis (convergence of high-performance computing (HPC) and quantum computing) in particular positions the company at the forefront of quantum research, converging its expertise. Atos has invested, along with partner start-ups Pasqal and IQM, in two major quantum hybridization projects in France and Germany.

Held at Atos' R&D centre, dedicated to research in quantum computing and high-performance computing, in Clayes-sous-Bois, in the presence of Atos' next CEO, Rodolphe Belmer, and under the chairmanship of Pierre Barnabé, Chair of the Quantum Advisory Board, Interim co-CEO and Head of Big Data and Cybersecurity, this meeting of the Quantum Advisory Board was an opportunity to review Atos' recent work and to take stock of future prospects.

Artur Ekert, Professor of Quantum Physics at the Mathematical Institute, University of Oxford, Founding Director of the Centre for Quantum Technologies in Singapore and member of the Quantum Advisory Board, said: "We are truly impressed by the work and the progress that Atos has made over the past year. The company takes quantum computing seriously and it gives us great pleasure to see it becoming one of the key players in the field. It is a natural progression for Atos. As a world leader in High Performance Computing (HPC), Atos is in a unique position to combine its existing, extensive expertise in HPC with quantum technology and take both fields to new heights. We are confident that Atos will shape the quantum landscape in years to come, both with research and applications that have long-lasting impact."

In the field of quantum hybridization Atos is the only player and the company is already enabling several applications - in the areas of chemistry, such as catalysis design for nitrogen fixation, and for the optimization of smart grids. Atos is also involved in two additional quantum hybridization projects, which are currently being launched:


The European HPC-QS (Quantum Simulation) project, which starts this December 2021, aims to build the first European hybrid supercomputer with an integrated quantum accelerator by the end of 2023. It is intended to be a first major brick of the French quantum plan. Atos is involved in this project alongside national partners including the CEA, GENCI, Pasqal and the Jülich Supercomputing Centre. Pasqal will provide its analog quantum accelerator and Atos, with its quantum simulator, the Quantum Learning Machine (QLM), will ensure the hybridization with the HPCs at the two datacenters at GENCI and Jülich.

The Q-EXA project, part of the German Government's quantum plan, will see a consortium of partners, including Atos, work together to integrate a German quantum computer into an HPC supercomputer for the first time. The Atos QLM will be instrumental in connecting the quantum computer from start-up IQM (also part of the Atos Scaler program) to the Leibniz Supercomputing Centre (LRZ).

The European Organization for Nuclear Research (CERN), one of the world's largest and most respected research centres, based in Geneva, has recently acquired an Atos Quantum Learning Machine (QLM) appliance and joined the Atos User Club. The Atos QLM, delivered to CERN in October, will be made available to the CERN scientific community to support research activities in the framework of the CERN Quantum Technology Initiative (CERN QTI), thus accelerating the investigation of quantum advantage for high-energy physics (HEP) and beyond.

"Building on CERN's unique expertise and strong collaborative culture, co-development efforts are at the core of CERN QTI. As we explore the fast-evolving field of quantum technologies, access to the Atos Quantum Learning Machine and Atos' expertise can play an important role in our quantum developments roadmap in support of the high-energy physics community and beyond," says Alberto Di Meglio, Coordinator of the CERN Quantum Technology Initiative. A dedicated training workshop is being organized with Atos to investigate the full functionality and potential of the quantum appliance, as well as its future application for some of the CERN QTI activities.

"Atos is the world leader in the convergence of supercomputing and quantum computing, as shown by these two major and strategic projects we are involved in in France and Germany. At a time when the French government is expected to announce its plan for quantum computing, the durability of our Quantum Board, the quality of the work carried out and the concrete applications of this research in major projects reinforce this position," comments Pierre Barnab, interim co-CEO and head of Big Data and Cybersecurity at Atos.

The Quantum Advisory Board is made up of universally recognized quantum physicists and includes:

Alain Aspect, Professor at the Institut d'Optique Graduate School, Université Paris-Saclay and at École Polytechnique, Institut Polytechnique de Paris;

David DiVincenzo, Alexander von Humboldt Professor, Director of the Institute for Quantum Information at RWTH Aachen University, Director of the Institute for Theoretical Nanoelectronics at the Juelich Research Center;

Artur Ekert, Professor of Quantum Physics at the Mathematical Institute, University of Oxford and Founding Director of the Centre for Quantum Technologies in Singapore;

Daniel Estève, Research Director, CEA Saclay, Head of Quantronics;

Serge Haroche, Professor emeritus at the Collège de France, Nobel laureate in Physics.

As a result of Atos' ambitious program to anticipate the future of quantum computing and to be prepared for the opportunities and challenges that come with it - Atos Quantum - Atos was the first organization to offer a quantum noisy simulation module able to simulate real qubits, the Atos QLM, and to propose Q-score, the only universal metric to assess quantum performance and superiority. Atos is also the first European patent holder in quantum computing.

***

About Atos

Atos is a global leader in digital transformation with 107,000 employees and annual revenue of over € 11 billion. European number one in cybersecurity, cloud and high performance computing, the Group provides tailored end-to-end solutions for all industries in 71 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea), listed on Euronext Paris and included in the CAC 40 ESG and Next 20 Paris Stock indexes.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

Contact :

Laura Fau | laura.fau@atos.net | +33 6 73 64 04 18 |

@laurajanefau

Attachment

View original post here:
Atos confirms role as global leader in quantum hybridization technologies at its 8th Quantum Advisory Board - Yahoo Finance

Read More..

Artificial intelligence that understands object relationships – MIT News

When humans look at a scene, they see objects and the relationships between them. On top of your desk, there might be a laptop that is sitting to the left of a phone, which is in front of a computer monitor.

Many deep learning models struggle to see the world this way because they don't understand the entangled relationships between individual objects. Without knowledge of these relationships, a robot designed to help someone in a kitchen would have difficulty following a command like "pick up the spatula that is to the left of the stove and place it on top of the cutting board."

In an effort to solve this problem, MIT researchers have developed a model that understands the underlying relationships between objects in a scene. Their model represents individual relationships one at a time, then combines these representations to describe the overall scene. This enables the model to generate more accurate images from text descriptions, even when the scene includes several objects that are arranged in different relationships with one another.

This work could be applied in situations where industrial robots must perform intricate, multistep manipulation tasks, like stacking items in a warehouse or assembling appliances. It also moves the field one step closer to enabling machines that can learn from and interact with their environments more like humans do.

"When I look at a table, I can't say that there is an object at XYZ location. Our minds don't work like that. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we could use that system to more effectively manipulate and change our environments," says Yilun Du, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Du wrote the paper with co-lead authors Shuang Li, a CSAIL PhD student, and Nan Liu, a graduate student at the University of Illinois at Urbana-Champaign; as well as Joshua B. Tenenbaum, a professor of computational cognitive science in the Department of Brain and Cognitive Sciences and a member of CSAIL; and senior author Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems in December.

One relationship at a time

The framework the researchers developed can generate an image of a scene based on a text description of objects and their relationships, like "A wood table to the left of a blue stool. A red couch to the right of a blue stool."

Their system would break these sentences down into two smaller pieces that describe each individual relationship ("a wood table to the left of a blue stool" and "a red couch to the right of a blue stool"), and then model each part separately. Those pieces are then combined through an optimization process that generates an image of the scene.

The researchers used a machine-learning technique called energy-based models to represent the individual object relationships in a scene description. This technique enables them to use one energy-based model to encode each relational description, and then compose them together in a way that infers all objects and relationships.
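The published model operates on images, so the following is only a hedged toy sketch of the composition idea it describes: one energy term per relation, summed into a total energy that is then minimized. Here a "scene" is reduced to 1-D object positions, and all function names are illustrative.

```python
import numpy as np

# Toy sketch of composing energy-based models: each relation contributes an energy
# term, the terms are summed, and a scene is found by minimizing the total energy.

def left_of(a, b, margin=1.0):
    # Low (zero) energy once object a sits at least `margin` to the left of object b.
    return max(0.0, margin - (b - a)) ** 2

def right_of(a, b, margin=1.0):
    return left_of(b, a, margin)

def total_energy(x):
    table, stool, couch = x
    # "A wood table to the left of a blue stool. A red couch to the right of a blue stool."
    return left_of(table, stool) + right_of(couch, stool)

def grad(f, x, eps=1e-4):
    # Numerical gradient, to keep the sketch dependency-light.
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

x = np.zeros(3)                  # start with all three objects at the same position
for _ in range(500):             # plain gradient descent on the summed energy
    x = x - 0.1 * grad(total_energy, x)

print(x)                         # the table ends up left of the stool, the couch right of it
```

Adding a third relation is just another term in the sum, which is the property the researchers exploit to generalize to descriptions with more relationships than were seen in training.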

"By breaking the sentences down into shorter pieces for each relationship, the system can recombine them in a variety of ways, so it is better able to adapt to scene descriptions it hasn't seen before," Li explains.

"Other systems would take all the relations holistically and generate the image one-shot from the description. However, such approaches fail when we have out-of-distribution descriptions, such as descriptions with more relations, since these models can't really adapt one shot to generate images containing more relationships. However, as we are composing these separate, smaller models together, we can model a larger number of relationships and adapt to novel combinations," Du says.

The system also works in reverse: given an image, it can find text descriptions that match the relationships between objects in the scene. In addition, their model can be used to edit an image by rearranging the objects in the scene so they match a new description.

Understanding complex scenes

The researchers compared their model to other deep learning methods that were given text descriptions and tasked with generating images that displayed the corresponding objects and their relationships. In each instance, their model outperformed the baselines.

They also asked humans to evaluate whether the generated images matched the original scene description. In the most complex examples, where descriptions contained three relationships, 91 percent of participants concluded that the new model performed better.

"One interesting thing we found is that for our model, we can increase our sentence from having one relation description to having two, or three, or even four descriptions, and our approach continues to be able to generate images that are correctly described by those descriptions, while other methods fail," Du says.

The researchers also showed the model images of scenes it hadnt seen before, as well as several different text descriptions of each image, and it was able to successfully identify the description that best matched the object relationships in the image.

And when the researchers gave the system two relational scene descriptions that described the same image but in different ways, the model was able to understand that the descriptions were equivalent.

The researchers were impressed by the robustness of their model, especially when working with descriptions it hadnt encountered before.

"This is very promising because that is closer to how humans work. Humans may only see several examples, but we can extract useful information from just those few examples and combine them together to create infinite combinations. And our model has such a property that allows it to learn from fewer data but generalize to more complex scenes or image generations," Li says.

While these early results are encouraging, the researchers would like to see how their model performs on real-world images that are more complex, with noisy backgrounds and objects that are blocking one another.

They are also interested in eventually incorporating their model into robotics systems, enabling a robot to infer object relationships from videos and then apply this knowledge to manipulate objects in the world.

"Developing visual representations that can deal with the compositional nature of the world around us is one of the key open problems in computer vision. This paper makes significant progress on this problem by proposing an energy-based model that explicitly models multiple relations among the objects depicted in the image. The results are really impressive," says Josef Sivic, a distinguished researcher at the Czech Institute of Informatics, Robotics, and Cybernetics at Czech Technical University, who was not involved with this research.

This research is supported, in part, by Raytheon BBN Technologies Corp., Mitsubishi Electric Research Laboratory, the National Science Foundation, the Office of Naval Research, and the IBM Thomas J. Watson Research Center.

Read more:
Artificial intelligence that understands object relationships - MIT News

Read More..

Connecticut Money: Artificial intelligence and the AI revolution – New Haven Register

You don't need to be a financial adviser to hear the daily drumbeat of financial analysts talking about artificial intelligence and its long-term impact on finance, health and transportation, among other industries. Technology has come a long way from the days of playing video Pong, Asteroids and Pac-Man at the local cinemas before the movie started.

Fast-forward to today. We are on the forefront of self-driving cars and quantum computing speeds. Based on Moore's law, the observation that microprocessing power grows exponentially over time, it's probably just a short matter of time before we see the first flying cars rising above the dreadful summer traffic on Interstate 95 in New Haven.

The endless possibilities and concerns about future technologies are staggering. Some suggest that through AI's enhanced productivity we will get to a point where humans will be freed from working monotonous jobs. In return, we may find ourselves receiving stipends from the work that our robot counterparts are performing. Others fear that our robotic workforce will work their way up the corporate ladder and push us out to pasture long before we're ready to leave.

No one really knows what the future holds, but one country has an interesting perspective on artificial intelligence and how it will be harnessed to serve its citizens.

Japan has set a clear, countrywide technology goal and has named the initiative Society 5.0. With a rapidly aging population combined with declining population growth, Japan aspires to become a super aging, super smart society with the help of expanding technology. As a world leader in robotics, it wishes to augment its workforce in ways that create a better life for its citizens.

If you wish to peer into the future, take a look at a new prototype city that has just broken ground in Japan. Based on Society 5.0 goals, the city is called Woven City and is located at the base of Mount Fuji. A Toyota initiative, Woven City is billing itself as a living laboratory and a city of the future. The goal of Woven City is to have residents live, work and play with an overriding premise of the city being human centered, ever evolving, and a living laboratory of creation. An urban project on steroids.

Residents will get to enjoy being early adopters of the latest gadgets and technology as they are developed. Phase One looks to have autonomous vehicles, robot assistants, clean energy and fully integrated smart home software and hardware for living a better life.

Fingers crossed that these initiatives ultimately hit their mark and create a society that is happier and healthier while still motivated to excel and grow as human beings. Time will tell but I am excited about the possibilities.

Eric Tashlein is a Certified Financial Planner professional, founder and financial advisor at Connecticut Capital Management Group, LLC, 2 Schooner Lane, Suite 1-12, in Milford. He can be reached at 203-877-1520 or through http://www.connecticutcapital.com. This is for informational purposes only and should not be construed as personalized investment advice or legal/tax advice. Please consult your advisor/attorney/tax advisor. Investment Advisor Representative, Connecticut Capital Management Group, LLC, a Registered Investment Advisor. Connecticut Capital Management Group, LLC and Connecticut Benefits Group, LLC are not affiliated.

See more here:
Connecticut Money: Artificial intelligence and the AI revolution - New Haven Register

Read More..

Council of Europe to participate in the Athens Roundtable on Artificial Intelligence and the Rule of Law – Council of Europe

The third edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law will take place online on 6 and 7 December 2021.

The event is organised by the Future Society and ELONTech under the Patronage of the President of the Hellenic Republic, Katerina Sakellaropoulou, and is co-hosted by UNESCO, the Council of Europe, the European Parliament's Panel on the Future of Science and Technology (STOA), the IEEE SA, the Center on Civil Justice at the NYU School of Law and the National Judicial College, among other prominent institutions.

The roundtable is designed to facilitate a participatory dialogue among key stakeholders on international AI policy developments and key AI standardisation and benchmarking initiatives in the US, Europe and beyond.

It will also address important issues at the intersection of AI, industry, government and law, including civil liability regimes, regulatory compliance, privacy and consumer protection, and judicial capacity building.

The Secretary General of the Council of Europe, Marija Pejčinović Burić, will deliver welcome remarks at the opening. The Director of Information Society and Action against Crime, Jan Kleijssen, will speak in the panel "EU AI Act and Beyond: Regulatory Perspectives from Europe and the United States". The Head of the Information Society Department, Patrick Penninckx, will participate in the panel on AI and Human Rights.


See original here:
Council of Europe to participate in the Athens Roundtable on Artificial Intelligence and the Rule of Law - Council of Europe

Read More..

Artificial intelligence innovation among medical industry companies dropped off in the last quarter – Verdict Medical Devices – Medical Device Network


Research and innovation in artificial intelligence in the medical sector has declined in the last quarter but remains higher than it was a year ago.

The most recent figures show that the number of AI patent applications in the industry stood at 185 in the three months ending September, up from 172 over the same period last year.

Figures for patent grants related to AI followed a different pattern to filings, slipping from 65 in the three months ending September last year to 64 this year.

The figures are compiled by GlobalData, who track patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas, and linked to key companies across various industries.

AI is one of the key areas tracked by GlobalData. It has been identified as a key disruptive force facing companies in the coming years, and is one of the areas that companies investing resources in now are expected to reap rewards from.

The figures also provide an insight into the largest innovators in the sector.

Johnson & Johnson was the top artificial intelligence innovator in the medical sector in the last quarter. The company, which has its headquarters in the United States, filed 48 AI related patents in the three months ending September. That was up from 27 over the same period last year.

It was followed by the Ireland-based Medtronic Plc with 23 AI patent applications, Switzerland-based F. Hoffmann-La Roche Ltd (21 applications), and Japan-based Olympus Corp (16 applications).

Olympus Corp has recently ramped up R&D in artificial intelligence. It saw growth of 68.8% in related patent applications in the three months ending September compared to the same period last year - the highest percentage growth out of all companies tracked with more than 10 quarterly patents in the medical sector.


Follow this link:
Artificial intelligence innovation among medical industry companies dropped off in the last quarter - Verdict Medical Devices - Medical Device Network

Read More..

Artificial Intelligence in the Intelligence Community: Oversight Must Not Be an Oversight – Just Security

(Editor's Note: This article is the fifth and final in a series that is diving into the foundational barriers to the broad integration of AI in the IC: culture, budget, acquisition, risk, and oversight. This article considers a modified approach to congressional oversight of the IC. The author's full report examining all of the topics in this series in greater depth is available here.)

Throughout this series, I have explored the most pressing foundational issues impacting the Intelligence Community's (IC) ability to meet the urgency of this moment in the global artificial intelligence (AI) race. The current bipartisan support for taking bold action to drive national security use of AI is key to the IC's success. That support must propel change in the priority areas I have already identified: modernizing the IC's budget and acquisition processes and enabling a risk-tolerant culture with a new IC AI risk assessment framework that helps IC officers navigate the uncertainty that necessarily accompanies technological innovation. There is one other area, however, that cannot be ignored if the IC is to keep pace with our nation's adversaries and provide policymakers with accurate, timely, and impactful insights: congressional oversight.

Congressional oversight of the IC is critical. Congress is the eyes and the ears of the American people. Among other things, it is charged with evaluating IC program performance, and ensuring the IC is efficiently and effectively spending taxpayer dollars and properly executing national security activities consistent with statutory requirements and legislative intent.

But intelligence oversight is complicated and has not sufficiently evolved with the times. When it comes to assessing progress of IC programs, standard oversight processes typically track defined, pre-determined requirements, cost, and timelines. These metrics have worked reasonably well for large programs like the acquisition of satellites and buildings, for which there is a clear beginning, middle, and end, with easily identifiable milestones and a definite budget. However, AI is different; its development moves back and forth across a spectrum of activities often without discrete steps, and failure is a necessary part of the process as the technology evolves and matures. Traditional metrics are, therefore, less effective for AI, as the value (or lack thereof) of certain milestones may only become clear partway through the development process and desired end-states may shift.

The IC has four primary congressional oversight committees. In addition to the House Permanent Select Committee on Intelligence (HPSCI) and the Senate Select Committee on Intelligence (SSCI), which have oversight jurisdiction over the IC, the House Appropriations Committee Defense Subcommittee (HAC-D) and the Senate Appropriations Committee Defense Subcommittee (SAC-D) provide the IC's money. These four committees (hereinafter collectively "Committees") must consider a more adaptive approach to oversight, measuring progress and failure through metrics that are less rigid and better tailored for AI and other emerging technologies. In doing so, the Committees may lose a measure of certainty that impacts their most powerful lever, fiscal control over the IC. For that reason, the Committees and the IC must simultaneously build a greater degree of trust, transparency, and ultimately partnership.

Much like AI itself, congressional oversight of AI activities must evolve and adapt to the world of emerging technology. While there are a variety of rules that govern Congress' oversight responsibilities, Congress has considerable latitude and discretion in the execution of that oversight, including how they measure executive branch progress. To improve IC oversight engagements, Congress and the IC must start with a shared strategic vision for what a successful AI project looks like and create an approach to oversight that is tailored to achieve this goal.

Current measures and metrics often focus on ensuring projects stay on track in terms of cost and schedule; there are well-defined outputs, such as number of tools built, and static timelines for delivery. Such demonstrable deliverables are objective, consistent, and easy to measure, but they are ill-suited to AI, the underlying technology for which is still evolving. To take full advantage of AIs emerging possibilities, the IC must have the ability to test, adjust, and pivot as new algorithms and capabilities are developed and applied to different problem sets.

Recognizing that detailed requirements and program schedules are not well-suited to measuring the success of software, which is the core of AI, the Defense Department is already considering changes to related oversight. Research by Google and others indicates that metrics aligned with DevSecOps, the industry best practice for rapid, secure software development, can better predict the performance of software teams. DevSecOps seeks to unify software development (Dev), security (Sec), and operations (Ops). Its metrics allow teams to focus on outcomes while adjusting for multi-dimensional, dynamic, and continuous improvement in technology along the way. Teams can move quickly, respond rapidly to user needs, and produce reliable software all critical when it comes to scaling AI in the IC.
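As a generic, hedged sketch of what such DevSecOps-style delivery metrics look like in code (the records and field names here are hypothetical, not drawn from any IC or DoD system), the common measures of deployment frequency, lead time for changes, and change-failure rate can be computed directly from deployment logs:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for one software team: when each change was
# committed, when it was deployed, and whether it caused a failure in production.
deployments = [
    {"committed": datetime(2021, 11, 1, 9),  "deployed": datetime(2021, 11, 1, 15), "failed": False},
    {"committed": datetime(2021, 11, 3, 10), "deployed": datetime(2021, 11, 4, 11), "failed": True},
    {"committed": datetime(2021, 11, 8, 13), "deployed": datetime(2021, 11, 8, 16), "failed": False},
]

window_days = 30
deployment_frequency = len(deployments) / window_days                    # deployments per day
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)            # commit-to-deploy time
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Average lead time:    {avg_lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```

Metrics of this kind measure the health of the delivery pipeline itself rather than adherence to a fixed schedule, which is why they fit iterative AI development better than traditional milestone tracking.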

In addition, AI-related metrics must focus on key performance indicators that track the progress of how AI tools evolve rather than on only the final product to both create the opportunity for, and reflect the expectation of, value to the user earlier based on strong communication and feedback loops. Performance evaluation should center on delivery of incremental capabilities, drilling down on speed and functionality together in phases and time-boxing segmented activities, from staffing to new releases to bug-fixes.

The IC and the Committees must learn from industry best practices related to DevSecOps and software, and together develop relevant and adaptive metrics that can be consistently applied but are more aligned with AIs attributes. This joint process would itself serve as an opportunity for learning and trust building. Once developed, the metrics must continue to drive accountability and demonstrate value, and if timelines slip, the IC must quickly inform the Committees and produce new targets. The IC should expect to use the new metrics first on low-risk activities and ensure the Committees understand the standards and benchmarks the IC is using so they can evaluate programs accordingly.

While pivoting to new metrics is a good start, the IC and the Committees also must remain open to iteration, allowing oversight to change if the initial approach is less than optimal.

There is no dearth of oversight today each year, there are hundreds of written reports, in-person briefings, phone calls, hearings, and other engagements between congressional overseers and the IC. However, current oversight engagements suggest a lack of confidence and trust in the IC; they are often excessively tactical and focused on execution details that provide neither a strategic perspective on the health of a program nor an understanding of potential long-term opportunities and risks. These engagements drive a continuous cycle of meetings and briefings, requesting deeper levels of detail, in an effort to achieve the desired understanding. Unfortunately, layering detail on top of detail does not produce strategic insight, and this approach is ultimately ineffective the Committees do not feel sufficiently informed and the IC does not feel sufficiently supported, steering the relationship toward one that is more adversarial than collaborative.

Current oversight processes were not designed to be overly burdensome or act as roadblocks to progress. They were designed to give Congress appropriate insight and confidence that executive branch activities and spending are being carried out efficiently, effectively, and consistently with the law. Unfortunately, the processes have become onerous due to a history of issues that have undermined Congress' trust and confidence in the IC. The IC must rebuild trust with Congress so overseers can step back from day-to-day operational details and engage with the community at a more appropriate strategic level.

The relationship between a Board of Directors (Board) and a Chief Executive Officer (CEO) in the private sector is a helpful model. The Board has ultimate responsibility for ensuring the organization is appropriately stewarding the resources entrusted to it, while the CEO manages the execution of a company's day-to-day activities. According to the KPMG Board Leadership Center, the key to a healthy relationship between a Board and the organization it oversees is trust and transparency, where the Board has constructive conversations with the leadership team about significant decisions and issues as well as the opportunity to provide meaningful input before decisions are made, and the leadership team receives valuable feedback. It is not the Board's role to see every scrap of paper that the management team sees, and it should not wade into tactical details of an issue unless the issue is related to strategy or risk.

Of course, the analogy is not perfect, but it can be instructive. The Committees, which are like the IC's Board, have the responsibility to ensure the IC is appropriately stewarding its resources. In doing so, they also have the responsibility to leverage their knowledge and expertise to provide strategic advice and direction rather than diving into many levels of detail. But, as with the private sector, without the requisite trust and transparency it is difficult for the Committees to operate at the strategic level.

To rebuild trust, the IC and the Committees must fundamentally alter the nature of their interaction, engaging not only in formal ways but also increasingly in informal engagements to better manage expectations and reduce the element of surprise. The IC should seek the Committees views on significant activities before final decisions are made and work to incorporate their feedback when possible to build stronger support and buy-in from the Committees. Formal engagements are important, but informal engagements create relationships that lead to true partnerships.

As the IC seeks additional flexibility from the Committees, it should increase trust and transparency through a more informal and open posture with Congress that includes accommodation from deliberative process privilege as needed. This would require a significant cultural shift but, if done carefully, would pay enormous dividends.

Specifically, the DNI should propose two informal, private engagements with the Committees: 1) semi-annual conversations between Committee staff and high-priority AI project leads for conversation and feedback on progress, issues, concerns, and requirements; and 2) periodic IC leadership coffee catch-ups with Committee members to better drive the strategic relationship, provide the benefit of each other's thinking at that moment, and develop a sense of partnership. These engagements should not track metrics or seek to accomplish specific tasks, but rather should create mutual understanding, open dialogue, and build trust around AI activities. AI project leads should share what is known and unknown about projects, potential outcomes, and any changes in spending the Committees may see in the coming months. The IC leadership coffees would, of course, produce benefits well beyond the IC's AI activities.

It is unlikely that the information shared in these engagements would implicate the executive branch's deliberative process privilege, which reflects the president's constitutional authority to withhold certain information from Congress before a final decision has been made within the executive branch, because these discussions would not be tied to specific decision points. Nevertheless, to help navigate these conversations the DNI should clearly set expectations with the Committees that these conversations are not formal notifications and must not be used to later reprimand the IC. The DNI should also create IC legislative engagement principles to help IC officers appropriately engage. To the extent the IC does seek pre-decisional views from the Committees, the IC should look to the accommodation process, which allows the executive branch to provide information that might otherwise be privileged if necessary to facilitate the legitimate needs of the legislative branch.

Leaning forward in this way does come with risk that the Committees will inappropriately interfere in executive branch matters. Therefore, to truly build trust, the Committees must agree to be judicious in these engagements, focus on insightful strategic and risk-based questions reflective of their extensive experience and expertise, and not misuse the information to obstruct the executive branch's authority to execute the law. Any actions to the contrary will undermine the progress made and likely end this more open dialogue. However, with agreed upon guidelines and parameters, these informal engagements would improve the AI dialogue between the IC and Congress, leading to deeper Committee understanding and, ideally, strengthening Committee support for legislation and funding of AI activities, even in times of loss or failure.

As the Committees introduce more agility into their processes and adjust their oversight to accommodate AI, they should consider the following steps to increase their confidence in the ICs activities.

First, to expand their capacity and institutional expertise, the Committees should re-organize staff along functional lines, as has already been done in some committees. Such a change would allow staff to develop a deeper understanding of various AI tools and technologies, apply that understanding strategically across IC elements, and get a more holistic cross-IC view of AI coordination and activities. While the more common model of organizing staff by IC element makes logical sense, expecting staff to understand everything an IC element does is unrealistic and unreasonable, especially given they are often single-threaded in their roles. Refocusing staff on specific functional areas and allowing them to become experts would greatly benefit not only the Committees' oversight of those activities, but the IC elements they oversee. In addition, as many have recommended, Congress should recreate the Office of Technology Assessment, a congressional agency that provided impartial analyses of technology and science issues, to provide the Committees with access to deep technical experts when needed.

Second, the Committees should hold formal semi-annual closed substantive briefings on high-priority AI projects. In these briefings, the IC should provide enough detail for the Committees to understand progress against the new metrics, and ask questions about the strategic direction of the programs, areas of risk, concerns, unexpected issues, and future legislative and funding requirements. These briefings would provide an official mechanism for the IC to show forward movement and elevate significant issues, and for the Committees to track high-priority AI activities across the IC.

Third, if the IC receives no-year or multi-year funding for AI, the Committees should hold a focused annual review of AI spending during the previous year. This review should include an understanding of what is going well and what did not go as expected so the Committees can provide a timely and critical check on the use of that money. If the funding has been executed in accordance with congressional direction even if some of the activities have failed the money should continue to flow. If the funding has not been executed properly or consistently with congressional direction, the Committees should have the ability to stop the funding immediately.

The executive branch has significant work to do to speed and scale AI into the IC: it must reform budget and acquisition processes; create an IC AI risk assessment framework to encourage reasonable and informed risk-taking; and build an IC culture that supports innovation and accepts a level of failure. But the IC's success will be hard-fought and fleeting if the IC's congressional oversight committees do not simultaneously re-examine their supervision of the IC.

The Committees, similar to a corporate board, provide an important check on the IC's activities. To be successful in this new world of AI and emerging technology, the Committees must embrace a strategic reset, increased flexibility, and an adaptive approach to oversight. In return, the IC must lean forward with open and informal dialogue with the Committees. These adjustments will take practice to get right but, if successful, will dramatically change the IC's partnership with the Committees for the better, providing the Committees with earlier and improved insights and leading to greater support and backing for the IC.

The issues highlighted in this series are not new; countless others have raised them and good people have worked hard to solve them over many years. We cannot wait any longer for implementation to take hold. China and other adversaries are at our doorstep, and the IC must move immediately to embrace the reality of a world awash in data moving at the speed of emerging technology. Now is the time to take advantage of the groundswell of support, remove unnecessary bureaucratic barriers, and take decisive action.

* * *

Additional detail and implementation steps in all of the areas discussed in this series can be found in The Integration of Artificial Intelligence in the Intelligence Community: Necessary Steps to Scale Efforts and Speed Progress, a full-length report produced through the American University Washington College of Law Tech, Law & Security program.

Originally posted here:
Artificial Intelligence in the Intelligence Community: Oversight Must Not Be an Oversight - Just Security

Read More..

Healthcare Artificial Intelligence (AI) Market Size to Reach Revenues of USD 44.5 Billion by 2026 – Arizton – PRNewswire

CHICAGO, Dec. 1, 2021 /PRNewswire/ -- In-depth analysis and data-driven insights on the impact of COVID-19 included in this global healthcare artificial intelligence (AI) market report.

The healthcare artificial intelligence (AI) market is expected to grow at a CAGR of over 46.21% during the period 2020-2026.
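As a back-of-the-envelope illustration of how the headline revenue figure and the growth rate fit together (the CAGR formula only, not additional data from the report), the implied 2020 base can be recovered by discounting the 2026 figure:

```python
# CAGR arithmetic: value_end = value_start * (1 + cagr) ** years, so
# value_start = value_end / (1 + cagr) ** years.

revenue_2026 = 44.5      # USD billion, from the report headline
cagr = 0.4621            # 46.21% compound annual growth rate
years = 6                # 2020 through 2026

implied_2020_base = revenue_2026 / (1 + cagr) ** years
print(f"Implied 2020 market size: about {implied_2020_base:.1f} billion USD")   # roughly 4.6
```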

Key Highlights Offered in the Report:

1. Software & services in healthcare AI are expected to witness a high growth rate due to the evolution of digital health technologies such as EMR/EHR, telehealth, and e-prescribing in healthcare services. The software and services segment accounted for 73.92% of the healthcare AI market share in 2020.

2. Artificial intelligence in drug discovery and clinical decision-making opens a faster workflow for pharmaceutical drug & dosage management. This emerging trend of advancements in drug discovery is being incorporated widely as it is cost- and time-efficient.

3. The growth and acceptance of AI in hospital workflow management has brought more streamlined and efficient patient care to healthcare facilities. Hospital workflow management accounted for 38.41% of the healthcare AI market in 2020 and is expected to retain its market dominance in the coming years.

4. North America captured a dominant position in the market and generated the most revenue compared to the other regions. Additionally, Europe and APAC are likely to see top-line growth in the adoption of AI and healthcare information technology solutions in healthcare facilities.

5. Vendors in the market are increasing their investment in digital solutions for healthcare. In recent years, a number of collaborations between IT companies and healthcare providers have been formed to adopt new technologies and deliver better outcomes.

6. Machine learning is widely applied and adopted in healthcare services because its algorithms deliver more accurate results. In the application segment, machine learning accounted for a major share, 41.16%, of the healthcare AI market in 2020.

Key Offerings:

Get your sample today! https://www.arizton.com/market-reports/healthcare-artificial-intelligence-market

Healthcare Artificial Intelligence (AI) Market Segmentation

Healthcare Artificial Intelligence (AI) Market by Category

Healthcare Artificial Intelligence (AI) Market by Application

Healthcare Artificial Intelligence (AI) Market by Technology

Healthcare Artificial Intelligence (AI) Market by End-users

Healthcare Artificial Intelligence (AI) Market Dynamics

Key global vendors from the pharmaceutical and biotech industry, such as Amgen, BASF, LEO, Eli Lilly, AstraZeneca, Bayer, Novartis, J&J, and Merck, are now using machine learning for pharmaceutical discovery. These vendors are seeing remarkable growth, as AI-driven solutions offer easy user interfaces and create analytical bookmarks on key sales drivers to guide investments. For example, GSK's collaboration with Exscientia used AI platforms to identify a small compound for targeted therapeutics and its characteristics toward the specific target. Gilead Sciences and Insitro partnered to research non-alcoholic steatohepatitis; using machine learning and human genetics, Insitro supports discovery and development in therapeutics.

Key Drivers and Trends fueling Market Growth:

Healthcare Artificial Intelligence (AI) Market Geography

North America captured a dominant position in healthcare artificial intelligence (AI). The presence of a large patient population, coupled with better adoption of digital healthcare with the latest advancements in artificial intelligence (AI), is the primary factor for its high market share. The strong presence of key healthcare IT players is also another reason for the high uptake of healthcare artificial intelligence (AI) in North America. The rise in the adoption of AI technologies in drug discovery & development and patient-centric treatment procedures are expected to drive the market growth during the forecast period. New artificial intelligence (AI) technologies can help companies create more value for their patients and communities by turning time-consuming, labor-intensive, and often ineffective functions into actionable insights for better results. Forward-thinking hospital and healthcare system executives view AI as the most effective path to a more productive, efficient, and better performing healthcare organization.

Get your sample today! https://www.arizton.com/market-reports/healthcare-artificial-intelligence-market

Healthcare Artificial Intelligence (AI) Market by Geography

Major Vendors

Other Prominent Vendors

Explore our healthcare and life sciences profile to learn more about the industry.

Read some of the top-selling reports:

About Arizton:

Arizton Advisory and Intelligence is an innovation and quality-driven firm, which offers cutting-edge research solutions to clients across the world. We excel in providing comprehensive market intelligence reports and advisory and consulting services.

We offer comprehensive market research reports on industries such as consumer goods & retail technology, automotive and mobility, smart tech, healthcare, and life sciences, industrial machinery, chemicals and materials, IT and media, logistics and packaging. These reports contain detailed industry analysis, market size, share, growth drivers, and trend forecasts.

Arizton comprises a team of exuberant and well-experienced analysts who have mastered the craft of generating incisive reports. Our specialist analysts possess exemplary skills in market research. We train our team in advanced research practices, techniques, and ethics to excel at producing rigorous, well-grounded research reports.

Mail: [emailprotected]

Call: +1-312-235-2040 +1 302 469 0707

SOURCE Arizton Advisory & Intelligence

More:
Healthcare Artificial Intelligence (AI) Market Size to Reach Revenues of USD 44.5 Billion by 2026 - Arizton - PRNewswire

Read More..

Artificial intelligence may not actually be the solution for stopping the spread of fake news – The Conversation CA

Disinformation has been used in warfare and military strategy over time. But it is undeniably being intensified by the use of smart technologies and social media. This is because these communication technologies provide a relatively low-cost, low-barrier way to disseminate information basically anywhere.

The million-dollar question then is: Can this technologically produced problem of scale and reach also be solved using technology?

Indeed, the continuous development of new technological solutions, such as artificial intelligence (AI), may provide part of the solution.

Technology companies and social media enterprises are working on the automatic detection of fake news through natural language processing, machine learning and network analysis. The idea is that an algorithm will identify information as fake news, and rank it lower to decrease the probability of users encountering it.

From a psychological perspective, repeated exposure to the same piece of information makes it likelier for someone to believe it. When AI detects disinformation and reduces the frequency of its circulation, this can break the cycle of reinforced information consumption patterns.

However, AI detection still remains unreliable. First, current detection is based on the assessment of text (content) and its social network to determine its credibility. While AI can determine the origin of sources and the dissemination pattern of fake news, the fundamental problem lies in how it verifies the actual nature of the content.

Theoretically speaking, if the amount of training data is sufficient, the AI-backed classification model would be able to interpret whether an article contains fake news or not. Yet the reality is that making such distinctions requires prior political, cultural and social knowledge, or common sense, which natural language processing algorithms still lack.
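As a minimal, hedged sketch of what such an AI-backed classification model looks like in practice, here is a toy TF-IDF plus logistic regression pipeline in Python; the handful of labelled headlines is invented for illustration, and a real system would need vastly more (and better) training data than this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = fake, 0 = credible). A production system would need
# hundreds of thousands of carefully labelled articles across many languages.
texts = [
    "Miracle cure hidden by doctors, share before it is deleted",
    "Central bank announces quarter-point interest rate increase",
    "Secret document proves the election was decided in advance",
    "City council approves budget for new public transit line",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Estimated probability that a new headline is fake, which a platform could use
# to rank the item lower in users' feeds.
new_text = ["Shocking photo the government does not want you to see"]
print(model.predict_proba(new_text)[0][1])
```

The limitation the article points to is visible even here: the classifier only sees word statistics, so it has no way to check whether the claims themselves are true.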

Read more: An AI expert explains why it's hard to give computers something you take for granted: Common sense

In addition, fake news can be highly nuanced when it is deliberately altered to appear as real news but containing false or manipulative information, as a pre-print study shows.

Classification analysis is also heavily influenced by the theme: AI often differentiates topics, rather than the genuine content of the issue, to determine its authenticity. For example, articles related to COVID-19 are more likely to be labelled as fake news than other topics.

One solution would be to employ people to work alongside AI to verify the authenticity of information. For instance, in 2018, the Lithuanian defence ministry developed an AI program that flags disinformation within two minutes of its publication and sends those reports to human specialists for further analysis.

A similar approach could be taken in Canada by establishing a national special unit or department to combat disinformation, or supporting think tanks, universities and other third parties to research AI solutions for fake news.

Controlling the spread of fake news may, in some instances, be considered censorship and a threat to freedom of speech and expression. Even a human may have a hard time judging whether information is fake or not. And so perhaps the bigger question is: Who and what determine the definition of fake news? How do we ensure that AI filters will not drag us into the false positive trap, and incorrectly label information as fake because of its associated data?

An AI system for identifying fake news may have sinister applications. Authoritarian governments, for example, may use AI as an excuse to justify the removal of any articles or to prosecute individuals not in favour of the authorities. And so, any deployment of AI and any relevant laws or measurements that emerge from its application will require a transparent system with a third party to monitor it.

Future challenges remain as disinformation especially when associated with foreign intervention is an ongoing issue. An algorithm invented today may not be able to detect future fake news.

For example, deep fakes, which are highly realistic and difficult-to-detect digital manipulations of audio or video, are likely to play a bigger role in future information warfare. And disinformation spread via messaging apps such as WhatsApp and Signal is becoming more difficult to track and intercept because of end-to-end encryption.

A recent study showed that 50 per cent of the Canadian respondents received fake news through private messaging apps regularly. Regulating this would require striking a balance between privacy, individual security and the clampdown of disinformation.

While it is definitely worth allocating resources to combating disinformation using AI, caution and transparency are necessary given the potential ramifications. New technological solutions, unfortunately, may not be a silver bullet.

Originally posted here:
Artificial intelligence may not actually be the solution for stopping the spread of fake news - The Conversation CA

Read More..