Category Archives: Artificial Intelligence
How Does Artificial Intelligence (AI) Work and Its Applications [Updated] – Simplilearn
Artificial Intelligence (AI), the new buzzword in the world of technology, is set to change the way future generations will function. But what exactly is AI, and how does AI work? You may not be aware of it, but you probably interact with AI on a daily basis. From smartphones to chatbots, AI is already prevalent in many aspects of our lives. Growing investments in this area and AI's increasing use in the enterprise space are indicative of how the job market is warming up for AI experts.
Let us begin this tutorial by first understanding what AI is and how it works. AI is probably one of the most exciting advancements that we're in the middle of experiencing as humans. It is a branch of computer science dedicated to creating intelligent machines that work and react like humans.
Let us cover the types of AI in the next section of this tutorial.
There are four main types of AI. They are:
Reactive machines: This kind of AI is purely reactive and does not have the ability to form memories or use past experiences to make decisions. These machines are designed to perform specific tasks. For example, programmable coffeemakers or washing machines are designed to perform specific functions, but they do not have memory.
Limited memory: This kind of AI uses past experiences and present data to make a decision. Limited memory means that these machines do not come up with new ideas; they have a built-in program that manages the memory, and reprogramming is required to make changes to them. Self-driving cars are examples of limited memory AI.
Theory of mind: These AI machines can socialize and understand human emotions, and will have the ability to cognitively understand somebody based on the environment, their facial features, and so on. Machines with such abilities have not been developed yet, but there is a lot of research happening with this type of AI.
Self-aware AI: This is the future of AI. These machines will be super-intelligent, sentient, and conscious, able to react very much like a human being, although they are likely to have their own features.
The next section of this tutorial will help you get a better understanding of how exactly to implement AI.
Let's explore the following ways in which we can implement AI:
It is machine learning that gives AI the ability to learn. This is done by using algorithms to discover patterns and generate insights from the data they are exposed to.
Deep learning, which is a subcategory of machine learning, provides AI with the ability to mimic a human brain's neural network. It can make sense of patterns, noise, and sources of confusion in the data.
Consider the task of segregating various kinds of images using deep learning. The machine goes through the various features of the photographs and distinguishes them through a process called feature extraction. Based on the features of each photo, the machine segregates the photos into different categories, such as landscape, portrait, or others.
Let us understand how deep learning works.
A neural network consists of three main layers: the input layer, the hidden layers, and the output layer.
The images that we want to segregate go into the input layer. Each dot (node) in the input layer holds one pixel of the picture, so an image's pixel values fill the input layer.
We should have a clear idea of these three layers while going through this artificial intelligence tutorial.
The hidden layers are responsible for all the mathematical computations, or feature extraction, on our inputs. The connections between the layers are called weights. Each weight is usually a float, or decimal, number that is multiplied by the value coming from the previous layer. At each dot in the hidden layer, these weighted values are added up, and the resulting sums are the values passed on to the next hidden layer.
You may be wondering why there are multiple layers. Each hidden layer builds on the features extracted by the previous one, so the more hidden layers there are, the more complex the data the network can take in and the more complex the output it can produce. The accuracy of the predicted output generally depends on the number of hidden layers present and the complexity of the data going in.
The output layer gives us the segregated photos. Once the output layer adds up the weighted values being fed into it, it determines whether the picture is a portrait or a landscape.
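To make the layer arithmetic concrete, here is a minimal Python sketch of a forward pass through such a network. The layer sizes, the random weights, and the sigmoid squashing function are illustrative assumptions for readability, not the actual network from this example:

```python
import numpy as np

def sigmoid(x):
    # Squash each weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Assumed toy sizes: 4 input pixels, 3 hidden dots, 2 output classes
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # weights between input and hidden layer
W2 = rng.normal(size=(3, 2))  # weights between hidden and output layer

pixels = np.array([0.2, 0.9, 0.4, 0.7])  # one image flattened into the input layer

hidden = sigmoid(pixels @ W1)  # each hidden dot holds a weighted sum of the inputs
output = sigmoid(hidden @ W2)  # output scores for, say, [landscape, portrait]

print("Predicted class:", ["landscape", "portrait"][int(np.argmax(output))])
```

Training would adjust the weights based on data; this sketch only shows how values flow forward through the layers.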
Example - Predicting Airfare Costs
This prediction is based on various factors, including:
We begin with some historical data on ticket prices to train the machine. Once our machine is trained, we feed it new data and it predicts the costs. Earlier, when we learned about the four kinds of AI, we discussed machines with memory. Here, the machine uses that memory to recognize a pattern in the data and applies the pattern to predict prices for new tickets, roughly as sketched below:
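As a rough sketch of this train-then-predict loop, the snippet below fits a simple linear model (using scikit-learn, an assumption; the tutorial does not specify tooling) on made-up historical fares and then asks it to price a new ticket. The two features, days before departure and route distance, and every number are invented purely for illustration:

```python
from sklearn.linear_model import LinearRegression

# Invented historical data: [days_before_departure, distance_km] -> ticket price
X_train = [[60, 500], [30, 500], [7, 500], [60, 2000], [30, 2000], [7, 2000]]
y_train = [80, 110, 180, 210, 260, 400]

model = LinearRegression()
model.fit(X_train, y_train)  # "training" on past ticket prices

# Predict the fare for a new itinerary: 14 days out, 1,200 km
predicted = model.predict([[14, 1200]])
print(f"Estimated fare: ${predicted[0]:.0f}")
```

A real airfare model would use far richer features and a more flexible learner, but the pattern is the same: fit on history, then predict on new inputs.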
Next up in this tutorial, let us take a look at how AI works in practice and at some applications of AI.
A common AI application that we see today is the automatic switching of appliances at home.
When you enter a dark room, the sensors in the room detect your presence and turn on the lights. This is an example of a reactive, non-memory machine. Some of the more advanced AI programs are even able to predict your usage pattern and turn on appliances before you explicitly give instructions.
Some AI programs are able to identify your voice and perform an action accordingly. If you say, "Turn on the TV," the sound sensors on the TV detect your voice and turn it on.
With the Google dongle and a Google Home Mini, you can actually do this every day.
The last section of this Artificial Intelligence tutorial discusses the use case of AI in healthcare.
AI has several amazing use cases, and this section of the tutorial will help you understand them better, beginning with the application of AI in the healthcare field. The problem statement is predicting whether a person has diabetes or not. Specific information about the patient is used as input for this case. This information will include:
Check out Simplilearn's video on "Artificial Intelligence Tutorial" to see how a model for this problem statement is created. The model is implemented in Python using TensorFlow.
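For orientation, a minimal TensorFlow binary classifier for this kind of yes/no prediction might look like the sketch below. The eight input features, the layer sizes, and the random stand-in data are assumptions for illustration; the video builds its own version of the model:

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 8 numeric patient features (e.g., glucose level, BMI, age)
# and a 0/1 label for "has diabetes". Random values replace a real dataset here.
rng = np.random.default_rng(42)
X = rng.normal(size=(768, 8)).astype("float32")
y = rng.integers(0, 2, size=768).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of diabetes
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Predict for one patient (illustrative values only)
prob = float(model.predict(X[:1], verbose=0)[0, 0])
print(f"Predicted probability of diabetes: {prob:.2f}")
```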
Learn from the best in the AI/ML industry with our Caltech Artificial Intelligence Course! Enroll now to get started!
AI is redefining the way business processes are carried out in various fields, such as marketing, healthcare, financial services, and more. Companies are continuously exploring the ways they can reap benefits from this technology. As the quest for improvement of current processes continues to grow, it makes sense for professionals to gain expertise in AI.
If you found this tutorial informative, you can also check out our Caltech AI Course. The course will help you learn the basic concepts of AI, data science, machine learning, deep learning with TensorFlow and more. Apart from the theory, you will also get the opportunity to apply your skills to solve real-world problems through industry-oriented projects. So, without further ado, start your career in AI and get ahead!
Maybe We’ve Got The Artificial Intelligence In Law ‘Problem’ All Wrong – Above the Law
When some hapless NY lawyers submitted a brief riddled with case citations hallucinated by consumer-facing artificial intelligence juggernaut ChatGPT and then doubled down on the error, we figured the resulting discipline would serve as a wake-up call to attorneys everywhere. But there would be more. And more. And more.
We've repeatedly balked at declaring this an AI problem, because nothing about these cases really turned on the technology. Lawyers have an obligation to check their citations, and if they're firing off briefs without bothering to read the underlying cases, that's a professional problem whether ChatGPT spit out the case or their summer associate inserted the wrong cite. Regulating AI for an advocate falling down on the job seemed to miss the point at best and at worst poison the well against a potentially powerful legal tool before it's even gotten off the ground.
Another popular defense of AI against the slings and arrows of grandstanding judges is that the legal industry needs to remember that AI isn't human. It's just like every other powerful but ultimately dumb tool, and you can't just trust it like you can a human. Conceived this way, AI fails because it's not human enough. Detractors have their human egos stroked, and AI champions can market their bold future where AI creeps ever closer to humanity.
But maybe we've got this all backward.
"The problem with AI is that it's more like humans than machines," David Rosen, co-founder and CEO of Catylex, told me off-handedly the other day. "With all the foibles, and inaccuracies, and idiosyncratic mistakes." It's a jarring perspective to hear after months of legal tech chit chat about generative AI. Every conversation I've had over the last year frames itself around making AI more like a person, more able to parse through what's important and what's superfluous. Though the more I thought about it, there's something to this idea. It reminded me of my issue with AI research tools trying to find the "right answer" when that might not be in the lawyer's or the client's best interest.
How might the whole discourse around AI change if we flipped the script?
If we started talking about AI as too human, we could worry less about figuring out how it makes a dangerous judgment call between two conclusions and worry more about a tool that tries too hard to please its bosses, makes sloppy errors when it jumps to conclusions, and holds out the false promise that it can deliver insights for the lawyers themselves. Reorient around promising a tool that's going to ruthlessly and mechanically process tons more information than a human ever could and deliver it to the lawyer in a format that the humans can digest and evaluate themselves.
Make AI Artificial Again, if you will.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
FDA approves AI-driven test for sepsis made by Prenosis – The Washington Post
Bobby Reddy Jr. roamed a hospital as he built his start-up, observing how patient care began with a diagnosis and followed a set protocol. The electrical engineer thought he knew a better way: an artificial intelligence tool that would individualize treatment.
Now, the Food and Drug Administration has greenlighted such a test developed by Reddy's company, Chicago-based Prenosis, to predict the risk of sepsis, a complex condition that contributes to at least 350,000 deaths a year in the United States. It is the first algorithmic, AI-driven diagnostic tool for sepsis to receive the FDA's go-ahead, the company said in a statement Wednesday.
"In hospitals and emergency departments, we are still relying on one-size-fits-all, when instead we should be treating each person based on their individual biology," Reddy, the company's CEO, said in an interview.
Sepsis occurs when a patient's immune system tries to fight an infection and ends up attacking the body's own organs. Managing sepsis is a priority among federal health agencies, including the Centers for Disease Control and Prevention and the Centers for Medicare and Medicaid Services.
"Sepsis is a serious and sometimes deadly complication," Jeff Shuren, director of the FDA's Center for Devices and Radiological Health, said in a statement. "Technologies developed to help prevent this condition have the potential to provide a significant benefit to patients."
To build its test, Prenosis acquired more than 100,000 blood samples along with clinical data on hospital patients, and trained its algorithm to recognize the health measures most associated with developing sepsis. The company narrowed its test to 22 parameters, including blood-based measures and other vital signs such as temperature and heart rate. The diagnostic tool now produces a snapshot that classifies a patient's risk of sepsis in four categories, from low to very high.
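Purely as a hypothetical illustration of the four-band output described above (none of this reflects Prenosis's actual, unpublished-in-this-article algorithm), a risk score from a trained model might be binned like this, with every threshold and value invented:

```python
def risk_category(score: float) -> str:
    # score is assumed to be a probability-like value in [0, 1]
    if score < 0.25:
        return "low"
    if score < 0.50:
        return "medium"
    if score < 0.75:
        return "high"
    return "very high"

# Two of the 22 parameters mentioned above, with made-up readings
vitals = {"temperature_c": 38.9, "heart_rate_bpm": 118}
score = 0.62  # stand-in for a trained model's output on these inputs
print(f"Sepsis risk for {vitals}: {risk_category(score)}")  # -> high
```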
Though Prenosis is the first to win FDA authorization for such a test, other companies, including Epic Systems, have already brought to market AI-driven diagnostics for the condition. Epic, known for its software that manages electronic health records, has faced questions about the accuracy of its algorithm for predicting sepsis.
Jacob Wright, an Epic spokesman, said that multiple studies have shown that its diagnostic model for sepsis improved patient outcomes, adding that a second version released in 2022 has fewer false positives when compared to the first version. The company is seeking FDA clearance, he said.
Reddy said Prenosis built its technology without initially knowing what problem it would try to solve. An Illinois hospital gave him office space and a badge, allowing him to roam the hospital and observe its staff interacting with patients. "What I saw over and over again is that they really run based on protocols," he said. He later came across a paper on sepsis, he said, that opened his eyes to how many people die of it. "This is going to be what we do," he said.
At least 1.7 million adults develop sepsis in a given year, including at least 350,000 who die during their hospitalization or are discharged to hospice care, according to the CDC. Roughly 1 in 3 people who die in a hospital had sepsis during their stay, and federal agencies are aiming to reward facilities that are making strides to reduce the condition.
Those at higher risk of sepsis include adults 65 and older, people with weakened immune systems, and those with a recent severe illness or hospitalization.
The new test comes as hospitals are grappling with the future of medicine and how to best incorporate artificial intelligence into the practice. In some instances, artificial intelligence tools have created tension among front-line workers who worry the technology could lead to inaccurate results or replace staff.
The Fate of Hundreds of Thousands of Civilians in Gaza depends on Artificial Intelligence – Sarajevo Times
Israeli sources say Israel risked as many as 20 civilian casualties for each of the roughly 37,000 suspects identified by an artificial intelligence program called Lavender, used to select human targets in attacks on the blockaded Gaza Strip.
Sources from Tel Aviv testified to the media outlets +972 and Local Call that Lavender analyzed data on about 2.3 million people in Gaza according to unclear criteria and assessed whether each person had ties to Hamas.
A total of six sources stated that the Israeli army relied fully on that program, especially in the early stages of the war, and that the names identified by Lavender were labeled as targets without review and without taking into account any special criteria, except that they were men.
37,000 suspected Palestinians
Sources who testified to +972 said that the concept of a "military target," which allows killing on private property even if there are civilians in the facility and its surroundings, previously included only high-level military targets, and that after October 7 the concept was extended to all members of Hamas.
Due to the enormous increase in the number of targets, the need for artificial intelligence arose, because examining and checking targets individually by humans was no longer possible. The sources also state that artificial intelligence marked close to 37,000 Palestinians as suspects.
The sources said that Lavender was very successful in classifying Palestinians, and that the process was fully automated.
"We killed thousands of people. We automated everything and did not control each target separately. We bombed the targets as soon as they moved into their houses," the source said, confirming that human control of the targets had been eliminated.
One source's comment, that he found it very surprising that they were asked to bomb a house to kill an unimportant person, is a sort of acknowledgment of the Israeli massacre of civilians in Gaza.
Green light for high-level targets with up to 100 civilian casualties
The sources stated that up to 20 civilian victims were allowed in actions carried out against the lower ranks, that this number often changed during the process, and they emphasized that the principle of proportionality was not applied.
On the other hand, it was stated that the number of possible collateral civilian casualties increased to 100 for high-level targets.
While the sources said they were ordered to bomb every place they could, one of the sources said that hysteria dominated senior officials and all they knew was to "bomb like crazy" to limit Hamas's capabilities.
A senior soldier identified by the initial B., who used the Lavender program, said that the program's margin of error is about ten percent and that there is no need for people to check targets and waste time on it.
Soldier B. stated that in the beginning there were fewer labeled targets, but that with the expansion of the definition of Hamas members the practice widened and the number of targets grew. He added that members of the police and civil protection who may have helped Hamas, but who were not a threat to the Israeli army, were also targeted.
"There are many shortcomings of the system. If the target person gave their phone to another person, that person is bombed at home with their entire family. This happened very often. This was one of the most common mistakes Lavender made," said Soldier B.
Most of the killed are women and children
On the other hand, the same sources said that software called "Where's Daddy?" tracks thousands of people at a time and notifies Israeli authorities when they enter their homes. Attacks are also carried out based on this program's data.
"Let's say you calculate that there is one member of Hamas and ten civilians in the house; usually those ten people are women and children. So, absurdly, most of the people you kill are women and children," said one of the sources.
Unguided bombs are used to save money
Sources also said that many civilians were killed because less important targets were hit with ordinary and cheaper missiles instead of guided smart missiles.
"We usually carried out the attacks with unguided missiles, which meant literally destroying the entire house with its contents. The system kept adding new targets," one of the sources said.
Artificial intelligence is not used to reduce civilian casualties, but to find more targets
Speaking to Al Jazeera on the subject, Marc Owen Jones, professor of Middle East Studies and Digital Humanities at Hamad Bin Khalifa University in Qatar, said it was increasingly clear that Israel was using unproven artificial intelligence systems, which had not undergone transparent evaluation, to help make decisions about the lives of civilians.
Jones believes that Israeli officials activated an artificial intelligence system to select targets to avoid moral responsibility.
He emphasized that the purpose of using the program is not to reduce civilian casualties, but to find more targets.
"Even the officials who run the system see AI as a killing machine. It is unlikely that Israel will stop using artificial intelligence in attacks if its allies do not put pressure on it. The situation in Gaza is genocide supported by artificial intelligence. A call for a moratorium on the use of artificial intelligence in warfare is needed," Jones concluded.
Habsora
Another study, published on December 1, 2023, revealed that an artificial intelligence application called Habsora (Gospel), which the Israeli military also used to identify targets in its attacks on the Gaza Strip, was used to precisely target civilian infrastructure and to strike automatically generated targets; in those cases, the number of civilian victims who would die along with the target was known in advance.
Habsora is an artificial intelligence technology used by Israel to attack buildings and infrastructure, while Lavender is used when targeting people, AA writes.
Humane, Rabbit, Brilliant, Meta: the AI gadgets are here – The Verge
I'm just going to call it: we'll look back on April 2024 as the beginning of a new technological era. That sounds grandiose, I know, but in the next few weeks, a whole new generation of gadgets is poised to hit the market. Humane will launch its voice-controlled AI Pin. Rabbit's AI-powered R1 will start to ship. Brilliant Labs' AI-enabled smart glasses are coming out. And Meta is rolling out a new feature to its smart glasses that allows Meta's AI to see and help you navigate the real world.
There are many more AI gadgets to come, but the AI hardware revolution is officially beginning. What all these gadgets have in common is that they put artificial intelligence at the front of the experience. When you tap your AI Pin to ask a question, play music, or take a photo, Humane runs your query through a series of language models to figure out what you're asking for and how best to accomplish it. When you ask your Rabbit R1 or your Meta smart glasses who makes that cool mug you're looking at, it pings through a series of image recognition and data processing models in order to tell you that's a Yeti Rambler. AI is not an app or a feature; it's the whole thing.
It's possible that one or many of these devices will so thoroughly nail the user experience and feature list that this month will feel both like the day you got your first flip phone and the day the iPhone made that flip phone look like an antique. But probably not. More likely, what we're about to get are a lot of new ideas about how you interact with technology. And together, they'll show us at least a glimpse of the future.
The primary argument against all these AI gadgets so far has been that the smartphone exists. Why, you might ask, do I need special hardware to access all this stuff? Why can't I just do it on the phone in my pocket? To that, I say, well, you mostly can! The ChatGPT app is great, Google's Gemini is rapidly taking over the Android experience, and if I were a betting man, I'd say there's a whole lot of AI coming to iOS this year.
Smartphones are great! None of these devices will kill or replace your phone, and anyone who says otherwise is lying to you. But after so many years of using our phones, we've forgotten how much friction they actually contain. To do almost anything on your phone, you have to take the device out of your pocket, look at it, unlock it, open an app, wait for the app to load, tap between one and 40,000 times, switch to another app, and repeat over and over again. Smartphones are great because they're able to contain and access practically everything, but they're not actually particularly efficient tools. And they're not going to get better, not as long as the app store business model stays the way it is.
The promise of AI (and I want to emphasize the word promise, because nothing we've seen so far comes remotely close to accomplishing this) is to abstract all those steps and all that friction out of existence. All you need to do is declare your intentions (play music, navigate home, text Anna, tell me what poison ivy looks like) and let the system figure out how to get it done. Your phone contains multitudes, but it's not really optimized for anything. An AI-optimized gadget can be easier to reach, quicker to launch, and alert to your input at all times.
If that pans out, we might get not only a new set of gadgets but also a new set of huge companies. Google and Apple won the smartphone wars, and no company over the last decade has even come close to upsetting that app store duopoly. So much of the race to augmented reality, the metaverse, wearables, and everything else has been about trying to open up a new market. (On the flip side, it's no accident that while so many other companies are building AI gadgets, Google and Apple are working hastily to shove AI into your phone.) AI might turn out to be just another flailing attempt from the folks that lost the smartphone wars. But it might also be the first general-purpose, all-things-to-all-people technology that actually feels like an upgrade.
Obviously, the AI-first approach brings its own set of challenges, starting with the whole "AI is not yet very good or reliable" thing. But even once we're past that, all the simplicity by abstraction can actually turn into confusion. What if I text Anna in multiple places? What if I listen to podcasts in Pocket Casts and music in Spotify and audiobooks in Audible, and I have accounts with a bunch of other music services I never even use? What if the closest four-star coffee shop is a Starbucks, and I hate Starbucks? If I tell my AI device to buy something, what card does it use? What retailer does it pick? How fast will it ship? Automation requires trust, and we don't yet have many reasons to trust AI.
So far, the most compelling approach seems to be a hybrid one. Both Humane and Rabbit have built complex web apps through which you can manage all your accounts, payment systems, conversation history, and other preferences. Rabbit allows you to actually teach your device how to do things the way you like. Both also have some kind of display (Humane, a laser projector; Rabbit, a small screen on the R1) on which you can check the AI's work or change the way it's planning to do something. The AI glasses from Meta and Brilliant try to address these problems either by directing you to look at something on your phone or just by not trying to do everything for everyone. AI can't do everything yet.
In many ways, it feels like it's 2004 again. I'd bet that none of these new devices will feel like a perfectly executed, entirely feature-complete product; even the people who make these gadgets don't think they've finished the job, no matter how self-serious their product videos might be. But before the iPhone turned the whole cellphone market into panes of glass, phones swiveled; they flipped; they were candy bars and clamshells and sliders and everything in between. Right now, everyone's searching for the iPhone of AI, but we're not getting that anytime soon. We might not get it ever, for that matter, because the promise of AI is that it doesn't require a certain kind of perfected interface; it doesn't require any interface at all. What we're going to get instead are the Razr, the Chocolate, the Treo, the Pearl, the N-Gage, and the Sidekick of AI. It's going to be chaos, and it's going to be great.
US and Great Britain Forge AI Safety Pact – PYMNTS.com
The U.S. and U.K. have pledged to work together on safe AI development.
The agreement, inked on Monday (April 1) by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, will see the AI Safety Institutes of both countries collaborate on tests for the most advanced artificial intelligence (AI) models.
The partnership will take effect immediately and is intended to allow both organizations to work seamlessly with one another, the Department of Commerce said in a news release.
AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology's emerging risks.
In addition, the two countries agreed to forge similar partnerships with other countries to foster AI safety around the world. The institutes also plan to conduct at least one joint test on a publicly accessible model and to tap into a collective pool of expertise by exploring personnel exchanges between both organizations.
The agreement comes days after the White House unveiled a policy requiring federal agencies to identify and mitigate the potential risks of AI and to designate a chief AI officer.
Agencies must also create detailed and publicly accessible inventories of their AI systems. These inventories will highlight use cases that could potentially impact safety or civil rights, such as AI-powered healthcare or law enforcement decision-making.
Speaking to PYMNTS following this announcement, Jennifer Gill, vice president of product marketing at Skyhawk Security, stressed the need for the policy to require uniform standards across all agencies.
"If each chief AI officer manages and monitors the use of AI at their discretion for each agency, there will be inconsistencies, which leads to gaps, which leads to vulnerabilities," said Gill, whose company specializes in AI integrations for cloud security.
"These vulnerabilities in AI can be exploited for a number of nefarious uses. Any inconsistency in the management and monitoring of AI use puts the federal government as a whole at risk."
This year also saw the National Institute of Standards and Technology (NIST) launch the Artificial Intelligence Safety Institute Consortium (AISIC), which is designed to promote collaboration between industry and government to foster safe AI use.
"To unlock AI's full potential, we need to ensure there is trust in the technology," Mastercard CEO Michael Miebach said at the time of the launch. "That starts with a common set of meaningful standards that protects users and sparks inclusive innovation."
Mastercard is among the more than 200 members of the group, composed of tech giants such as Amazon, Meta, Google, and Microsoft, schools like Princeton and Georgia Tech, and a variety of research groups.
Artificial Intelligence Rockets to the Top of the Manufacturing Priority List – Bain & Company
This article is part of Bain's Global Machinery & Equipment Report 2024
As machinery and equipment companies build new tech muscle, they are investing heavily in artificial intelligence (AI). In fact, the AI market in industrial machinery, which includes intelligent hardware, software, and services, is expected to reach $5.46 billion in 2028, according to the Business Research Company.
Why? From supply chain volatility to cost pressures to the shortage of skilled workers, AI can help address top challenges facing machinery and equipment executives.
Many machinery executives increasingly see AI adoption as an urgent task. In the broader advanced manufacturing industry, 75% of executives say that adopting emerging technologies such as AI is their top priority in engineering and R&D, according to Bain research. Yet, while many companies have collected a mountain of data, a basic enabler of AI, most are not using it.
Leading advanced machinery companies offer a clue to success. Before investing in AI, they identify their core business challenges and how AI can help them improve processes and overall performance. That includes evaluating how specific types of AI, such as machine learning (ML) or generative AI, use data to create value. Early movers are using AI to solve key problems in procurement, assembly, maintenance, quality control, and warehouse logistics.
Some forward thinkers are beginning to deploy generative AI to synthesize huge volumes of unstructured data in order to revolutionize knowledge work, such as retrieving and summarizing relevant information from across the enterprise to answer questions from employees. Others are experimenting with generative AI service bots that partner with field technicians, for instance, to recognize more quickly when maintenance is required and to improve the quality of that work.
Those who are pulling ahead are also integrating AI solutions into processes and back-end systems.
Explore the use cases with the highest potential.
Artificial intelligence is a broad term that encompasses technologies such as basic data analytics, ML, deep learning, and generative AI. Winning companies start by identifying their top business challenges and then selecting the specific AI solutions best suited to solve their unique key issues.
Ongoing disruptions such as Covid-19 and geopolitical instability have forced organizations to improve supply chain resilience and sustainability. The challenge is moving beyond reacting to problems after they happen. AI, however, can report supply chain bottlenecks in real time, predict potential disruptions in advance, and enable proactive planning to mitigate impacts to supply chains from an end-to-end business perspective.
AI can also track employee productivity and measure costs across all levels. AI helps companies shift their business models from simply selling machinery to offering machinery as a service, in which after-sales support and maintenance become part of the core offering. This includes applying ML to predict when equipment or parts need replacement, thereby reducing unplanned production downtime.
Finding qualified workers remains a challenge across the industry, especially for more complex engineering tasks. AI provides workers with information and insights to free them to focus on activities that add more value. It can also help train and upskill new workers to quickly come up to speed.
Generative AI in manufacturing is in its infancy, but many believe it will transform the sector. Specifically, the large language models that underpin generative AI fundamentally change how people interact with systems and documents. Generative AI can surface hidden insights from unstructured data that can lead to dramatic improvements in productivity, customer service, and financial performance.
More than 90% of machinery companies already collect and store production data, according to a recent Bain survey. But most do not know how to derive value from that data. One reason is a lack of understanding about where AI can deliver the greatest returns.
Front-runners are already using AI to solve a variety of supply chain challenges, from cutting costs in procurement to using predictive monitoring to identify failures before they occur in industrial assets, equipment, and infrastructure. In short, AI enables many digital applications that are top of mind for the industry (see Figure 1).
Three specific areas (of many) in which companies are cashing in on AI include minimizing assembly defects/improving quality control; boosting productivity; and streamlining warehouse management.
Minimizing assembly defects/improving quality control: AI can help identify mistakes in real time to improve assembly efficiency and product quality. For example, one machinery original equipment manufacturer (OEM) adopted AI-based video processing to track manual assembly activities, automate quality checks of manual assembly activities, and help optimize the use of resources and employees. Those solutions helped the machinery OEM reduce failures in the assembly process by as much as 70% while also cutting down efforts for quality checks by 50% for some lines.
In another case, a material supplier for machinery OEMs used computer vision to detect foreign objects in chemical bulk material instead of relying only on human inspections. The accuracy of the automated inspection increased by 80%, to greater than 99%, compared with today's mainly manual visual inspection.
Boosting productivity: AI can also supercharge employee productivity, providing a boost to companies short on staff. One machinery manufacturer adopted an AI-powered industrial copilot that converts natural language into code and translates old programming languages into natural language, completing both tasks more expeditiously and at a higher quality than human developers. Among other benefits, engineers using this AI solution were approximately 5% more productive, according to preliminary results. Downtime costs also went down as there were fewer data deployment errors and issues were mitigated more quickly.
Streamlining warehouse management: AI can also help ensure that warehouses operate as efficiently as possible, meaning that they carry the appropriate items to meet demand and minimize extra inventory. One equipment machinery company, for instance, adopted an AI-based inventory management system that helped it minimize overstock while still fulfilling all orders.
AI also provides more flexible job production planning so that companies can allocate specific assembly activities to the most relevant assembly expert at a given time to maximize productivity. As a result, the manufacturer can simultaneously enhance the quality of its products and adjust processes to meet specific customer needs. In short, AI allows companies to customize and personalize without negatively affecting planning, productivity, and costs on the shop floor.
Scaling AI and taking successful AI pilots from one manufacturing line to other lines or other plants is not easy, but it is important. A 2022 survey by MIT Technology Review Insights showed that scaling AI use cases to generate value is the top priority for 78% of executives across industries (see Figure 2).
Top-performing companies monitor their return on investment throughout the AI implementation and ensure that they factor in all costs. While this may seem obvious, many companies forget to log computation costs on the cloud, for instance. Leaders also conduct regular governance checks (e.g., every quarter) to reassess their AI investment decisions.
Legacy software systems and fragmented data can also often pose problems as they create a chaotic data environment with low-quality data. The best teams standardize analytics systems and platforms to enable multiple AI use cases. They also use unified data models that allow them to merge many fragmented data sources into one.
To keep pace with rapid changes in AI, leaders use modular and loosely coupled components, connected via microservices, to make it easy to replace software. When integrating generative AI, they ensure that these new components enhance the existing data architecture. Successful companies also verify that efficient processes and tools (MLOps/DevOps) are factored into the technical architecture so that they can deploy AI at scale.
Leaders in AI also embrace a test-and-learn approach. Machinery engineers typically favor rigorous thinking and perfect product design. Software and AI work, however, require a test-and-learn, fail-fast approach using Agile methodology. In successful AI implementations, plant engineers and AI experts collaborate closely to create, test, and refine AI models until they meet the company's goals.
Finally, machinery companies often struggle to find and retain employees with strong AI skills. To build in-house AI capability, many are bringing in external AI experts to train existing employees and increase data literacy throughout the entire workforce.
To retain skilled workers who may feel that some aspects of the work are uninteresting, successful companies have several approaches. Some are automating simple AI tasks so that experts can focus on more data- and analytics-intensive work. Others are developing expert squads to handle more complex AI use cases and crack data insight problems.
While each company faces different AI challenges, the leaders are addressing three core dimensions. First, they determine where AI unlocks the greatest value for the business. Second, they tailor the technology to address core problems and integrate it with their IT and operational technology setup. That means making sure that the technology is flexible so that it can be applied to immediate use cases but is also scalable in the future. Finally, they are developing a data culture that integrates AI skills and AI-enabled ways of working into the operating model.
AI has captured the imagination of machinery executives. As a growing number of companies experiment with and deploy new solutions, they are raising the industry bar for productivity and performance. Companies that defer investing will need to run twice as fast to keep pace.
The authors would like to express thanks to Josef Waltl, Kevin Denker, Robert Recknagel, Dennis Kuesters, Leonides De Ocampo, Marian Zoll, and Mary Stroncek for their contributions to this article.
OneTrust Joins Responsible Artificial Intelligence Institute – PR Newswire
OneTrust partners with RAI Institute to contribute to its development of tangible governance tools for trustworthy, safe, and fair Artificial Intelligence
ATLANTA, April 3, 2024 /PRNewswire/ -- OneTrust, the market-defining leader for trust intelligence, today announced that it has joined the Responsible Artificial Intelligence Institute (RAI Institute), the prominent non-profit enabling global organizations to harness the power of responsible AI.
For over five years, OneTrust has led the market in privacy management software, with offerings designed to operationalize integrated risk management. As AI adoption accelerates, OneTrust recognizes responsible AI practices are critical for building trust and unlocking AI's full potential across industries. Last May, the Company introduced OneTrust AI Governance, a comprehensive solution designed to help organizations inventory, assess, and monitor the wide range of risks associated with AI. As organizations use AI and machine learning (ML) to process large amounts of data and drive innovation, AI Governance provides visibility and control over data used and risks generated by AI models. The end-to-end solution helps organizations to operationalize regulatory requirements for laws such as the EU AI Act and align with key industry frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), Organization for Economic Co-operation and Development (OECD) Framework for the Classification of AI Systems, and more.
"We're delighted to welcome OneTrust as a member of the Responsible AI Institute," said Alyssa Lefaivre kopac, Head of Global Partnerships & Growth at Responsible AI Institute. "OneTrust's governance solutions and deep expertise in privacy, security, and ethics will be invaluable in our collective work to shape the practices, policies, and standards that enable AI for good across all sectors."
"Responsible AI is not an option, but a necessity in today's business landscape," said Jisha Dymond, Chief Ethics & Compliance Officer at OneTrust. "With OneTrust, organizations can not only observe the AI revolution, but also actively enable innovation. By implementing responsible AI practices, companies build trust with customers, regulators, and society at large, and facilitate a future where technology and human ingenuity converge to create unprecedented value. We look forward to partnering with RAI Institute as we continue to build a responsible AI future together."
This partnership with RAI Institute builds upon OneTrust's commitment to ethical and safe AI deployment. The Company is also a foundational supporter of the International Association of Privacy Professionals (IAPP) AI Governance Center, created to address the industry's need for AI governance professionals. Through its own expert-led OneTrust for AI Governance Masterclass webinar series, OneTrust enables compliance and technology professionals alike to mature their technology-driven AI compliance programs and foster responsible AI practices across their businesses.
About the Responsible AI Institute
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, Shell, Chevron, Roche, and many others dedicated to bringing responsible AI to all industry sectors.
About OneTrust
OneTrust enables every organization to transform siloed compliance initiatives into world-class, coordinated trust programs with the category-defining Trust Intelligence Platform. Customers use OneTrust to build and demonstrate trust, measure and manage risk, and go beyond compliance. As trust has emerged as the ultimate enabler for innovation, OneTrust delivers the intelligence and automation organizations need to meet critical program goals across data privacy, responsible AI, security, ethics, and ESG. http://www.onetrust.com
© 2024 OneTrust LLC. All rights reserved. OneTrust and the OneTrust logo are trademarks or registered trademarks of OneTrust LLC in the United States and other jurisdictions. All other brand and product names are trademarks or registered trademarks of their respective holders.
Media Contacts
Ainslee Shea, OneTrust, +1 (404) 855-0803
Nicole McCaffrey, Responsible AI Institute, +1 (440) 785-3588
SOURCE OneTrust
1 Magnificent Artificial Intelligence (AI) Stock to Buy and Hold Forever – sharewise
Artificial intelligence (AI) has been garnering plenty of headlines over the past 18 months. Though the technology has been around for a while, recent breakthroughs could lead to massive innovations. The companies that lead the pack in this space will be rewarded.
There are plenty of businesses investors could consider if they want to profit from the AI boom. Let's examine one of them: Microsoft (NASDAQ: MSFT). The tech giant could be a winner in AI over the long run and deliver market-beating returns along the way.
AI's recent momentum arguably began with the November 2022 launch of ChatGPT, a generative AI platform created by the privately held, Microsoft-backed company OpenAI. ChatGPT quickly became one of the fastest-growing apps ever, gaining a staggering 1 million users in just five days. OpenAI's success was clear proof that Microsoft was right to invest in the company. That's why the tech giant decided to double down. In January 2023, Microsoft announced a new multiyear, multibillion-dollar deal with OpenAI.
Source: Fool.com
FACT SHEET: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk … – The White House
Administration announces completion of 150-day actions tasked by President Biden's landmark Executive Order on AI
Today, Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) is issuing OMB's first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits, delivering on a core component of President Biden's landmark AI Executive Order. The Order directed sweeping action to strengthen AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more. Federal agencies have reported that they have completed all of the 150-day actions tasked by the E.O., building on their previous success of completing all 90-day actions.
This multi-faceted direction to Federal departments and agencies builds upon the Biden-Harris Administration's record of ensuring that America leads the way in responsible AI innovation. In recent weeks, OMB announced that the President's Budget invests in agencies' ability to responsibly develop, test, procure, and integrate transformative AI applications across the Federal Government.
In line with the President's Executive Order, OMB's new policy directs the following actions:
Address Risks from the Use of AI
This guidance places people and communities at the center of the government's innovation goals. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety.
By December 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI's impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. These safeguards apply to a wide range of AI applications, from health and education to employment and housing.
For example, by adopting these safeguards, agencies can ensure that:
If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.
To protect the federal workforce as the government adopts AI, OMB's policy encourages agencies to consult federal employee unions and adopt the Department of Labor's forthcoming principles on mitigating AI's potential harms to employees. The Department is also leading by example, consulting with federal employees and labor unions both in the development of those principles and in its own governance and use of AI.
The guidance also advises Federal agencies on managing risks specific to their procurement of AI. Federal procurement of AI presents unique challenges, and a strong AI marketplace requires safeguards for fair competition, data protection, and transparency. Later this year, OMB will take action to ensure that agencies' AI contracts align with OMB policy and protect the rights and safety of the public from AI-related risks. The RFI issued today will collect input from the public on ways to ensure that private sector companies supporting the Federal Government follow the best available practices and requirements.
Expand Transparency of AI Use
The policy released today requires Federal agencies to improve public transparency in their use of AI by requiring agencies to publicly:
Today, OMB is also releasing detailed draft instructions to agencies detailing the contents of this public reporting.
Advance Responsible AI Innovation
OMB's policy will also remove unnecessary barriers to Federal agencies' responsible AI innovation. AI technology presents tremendous opportunities to help agencies address society's most pressing challenges. Examples include:
Advances in generative AI are expanding these opportunities, and OMB's guidance encourages agencies to responsibly experiment with generative AI, with adequate safeguards in place. Many agencies have already started this work, including through using AI chatbots to improve customer experiences and other AI pilots.
Grow the AI Workforce
Building and deploying AI responsibly to serve the public starts with people. OMB's guidance directs agencies to expand and upskill their AI talent. Agencies are aggressively strengthening their workforces to advance AI risk management, innovation, and governance, including:
Strengthen AI Governance
To ensure accountability, leadership, and oversight for the use of AI in the Federal Government, the OMB policy requires federal agencies to:
In addition to this guidance, the Administration is announcing several other measures to promote the responsible use of AI in Government:
With these actions, the Administration is demonstrating that Government is leading by example as a global model for the safe, secure, and trustworthy use of AI. The policy announced today builds on the Administration's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and will drive Federal accountability and oversight of AI, increase transparency for the public, advance responsible AI innovation for the public good, and create a clear baseline for managing risks.
It also delivers on a major milestone, 150 days since the release of Executive Order 14110, and the table below presents an updated summary of many of the activities federal agencies have completed in response to the Executive Order.
###