
3 Machine Learning Stocks That Could Be Multibaggers in the Making: March Edition – InvestorPlace

Machine learning could be a $528.1 billion market by the time 2030 rolls around, according to Statista. From there, Precedence Research says it could be worth more than $771.32 billion by 2032. All of which creates big opportunities for machine learning stocks.

All as companies flock to a technology that involves showing data to a machine so it can learn and even make predictions, much as a human would, for things such as facial recognition, product recommendations, financial accuracy, predictive analytics, medical diagnoses, and speech recognition, to name a few.

Look at healthcare, for example.

According to BuiltIn.com, healthcare professionals use wearable technology to compile real-time data, which machine learning can quickly process and learn from. That's why the United States Food and Drug Administration has been working to integrate ML and AI into medical device software. Machine learning is also helping to speed up the drug discovery process, organize patient data, and even help personalize treatments.

From there, the sky's the limit. "As these technologies continue to advance and mature, they are expected to have a transformative impact on various industries, shaping the way businesses operate, make decisions, and deliver value to customers," added Grand View Research.

That being said, investors may want to consider investing in some of the top machine learning stocks, including:

Source: Ascannio / Shutterstock.com

The last time I mentioned Nvidia (NASDAQ:NVDA), it traded at $700 a share on Feb. 22.

I noted, "I strongly believe it's headed to at least $1,000, even $1,500 this year." All thanks to its dominance in artificial intelligence and machine learning, driven by its graphics processing units.

While it's not up to $1,000 just yet, it did hit a high of $967.66. That's not a bad return in about a month. From here, though, it could easily see $1,000.

Helping, the company recently launched its most powerful chips, the Grace Blackwell 200 Superchips, which will continue to strengthen NVDA's dominance in machine learning. We also have to consider that the company's H100 GPUs have been the very backbone of cloud AI programs. Even its DRIVE platform uses machine learning to deliver autonomous vehicle navigation.

Even better, analysts at UBS just raised their price target on NVDA to $1,100. The firm noted NVDA sits "on the cusp of an entirely new wave of demand from global enterprises and Sovereigns," as noted by Business Insider.

Source: Ascannio / Shutterstock.com

We can also look at machine learning stocks like Palantir Technologies (NYSE:PLTR), which designs programs that rely on machine learning to make decisions.

Most recently, the company won a $178 million TITAN contract with the U.S. Army. TITAN, or the Tactical Intelligence Targeting Access Node, is the Army's "next generation deep-sensing capability enabled with artificial intelligence and machine learning," as noted in a PLTR press release.

Helping, analysts at Wedbush raised their price target to $35 from $30, with an outperform rating. "With the AI Revolution now quickly heading towards the key use case and deployment stage, Palantir with its flagship AIP platform and myriad of customer boot camps is in the sweet spot to monetize a tidal wave of enterprise spend now quickly hitting the shores of the tech sector in our opinion," said the firm, as quoted by Seeking Alpha.

Earnings haven't been too shabby either. In its most recent quarter, the company beat expectations with EPS of eight cents on revenue of $608.35 million. That compares to estimates of eight cents on revenue of $602.88 million. U.S. commercial revenue jumped 70% to $131 million, while its customer count grew by 55% to 221.

Source: Sergio Photone / Shutterstock.com

Or, if you'd rather diversify across 43 companies involved with artificial intelligence and machine learning, there's the Global X Robotics & Artificial Intelligence ETF (NASDAQ:BOTZ).

With an expense ratio of 0.69%, the BOTZ ETF invests in companies that "potentially stand to benefit from increased adoption and utilization of robotics and artificial intelligence (AI), including those involved with industrial robotics and automation, non-industrial robots, and autonomous vehicles," as noted by GlobalXETFs.com.

Some of its top holdings include Nvidia, Intuitive Surgical (NASDAQ:ISRG), ABB Ltd. (OTCMKTS:ABBNY), SMC Corp. (OTCMKTS:SMCAY), and UiPath Inc. (NYSE:PATH), to name just a few.

While the BOTZ ETF already ran from a recent low of $22.63 to a high of $31.94, there's still further upside remaining. In fact, with the AI and machine learning boom showing no clear signs of slowing, the BOTZ ETF could easily see $40 near term. Also, what's nice about the BOTZ ETF is that we can gain exposure to massive companies, like NVDA, for less than $32 a share.

On the date of publication, Ian Cooper did not hold (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.

Original post:
3 Machine Learning Stocks That Could Be Multibaggers in the Making: March Edition - InvestorPlace


Achieve DevOps maturity with BMC AMI zAdviser Enterprise and Amazon Bedrock | Amazon Web Services – AWS Blog

In software engineering, there is a direct correlation between team performance and building robust, stable applications. The data community aims to adopt the rigorous engineering principles commonly used in software development into their own practices, which includes systematic approaches to design, development, testing, and maintenance. This requires carefully combining applications and metrics to provide complete awareness, accuracy, and control. It means evaluating all aspects of a team's performance, with a focus on continuous improvement, and it applies just as much to mainframe as it does to distributed and cloud environments, maybe even more.

This is achieved through practices like infrastructure as code (IaC) for deployments, automated testing, application observability, and complete application lifecycle ownership. Through years of research, the DevOps Research and Assessment (DORA) team has identified four key metrics that indicate the performance of a software development team:

- Deployment frequency
- Lead time for changes
- Change failure rate
- Mean time to restore (MTTR) service

These metrics provide a quantitative way to measure the effectiveness and efficiency of DevOps practices. Although much of the focus around analysis of DevOps is on distributed and cloud technologies, the mainframe still maintains a unique and powerful position, and it can use the DORA 4 metrics to further its reputation as the engine of commerce.

This blog post discusses how BMC Software added AWS generative AI capabilities to its product BMC AMI zAdviser Enterprise. zAdviser uses Amazon Bedrock to provide summarization, analysis, and recommendations for improvement based on the DORA metrics data.

Tracking DORA 4 metrics means putting the numbers together and placing them on a dashboard. However, measuring productivity is essentially measuring the performance of individuals, which can make them feel scrutinized. This situation might necessitate a shift in organizational culture to focus on collective achievements and emphasize that automation tools enhance the developer experience.

It's also vital to avoid focusing on irrelevant metrics or excessively tracking data. The essence of DORA metrics is to distill information into a core set of key performance indicators (KPIs) for evaluation. Mean time to restore (MTTR) is often the simplest KPI to track; most organizations use tools like BMC Helix ITSM or others that record events and issue tracking.

Capturing lead time for changes and change failure rate can be more challenging, especially on mainframes. Lead time for changes and change failure rate KPIs aggregate data from code commits, log files, and automated test results. Using a Git-based SCM pulls these insights together seamlessly. Mainframe teams using BMC's Git-based DevOps platform, AMI DevX, can collect this data as easily as distributed teams can.
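To make the aggregation concrete, here is a minimal, hypothetical sketch of how two of these KPIs could be computed once commit and deployment records have been pulled from an SCM and a CI/CD pipeline. The record fields and values are illustrative assumptions and are not taken from zAdviser or AMI DevX.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records aggregated from an SCM and a CI/CD pipeline.
# Field names and values are illustrative only.
deployments = [
    {"commit_time": datetime(2024, 3, 1, 9, 0), "deploy_time": datetime(2024, 3, 2, 14, 0), "caused_incident": False},
    {"commit_time": datetime(2024, 3, 3, 11, 0), "deploy_time": datetime(2024, 3, 3, 16, 30), "caused_incident": True},
    {"commit_time": datetime(2024, 3, 5, 8, 0), "deploy_time": datetime(2024, 3, 7, 10, 0), "caused_incident": False},
]

def lead_time_for_changes(records):
    """Median hours from code commit to successful deployment."""
    hours = [(r["deploy_time"] - r["commit_time"]).total_seconds() / 3600 for r in records]
    return median(hours)

def change_failure_rate(records):
    """Share of deployments that led to an incident or rollback."""
    failures = sum(1 for r in records if r["caused_incident"])
    return failures / len(records)

print(f"Lead time for changes: {lead_time_for_changes(deployments):.1f} h")
print(f"Change failure rate:   {change_failure_rate(deployments):.0%}")
```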

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.

BMC AMI zAdviser Enterprise provides a wide range of DevOps KPIs to optimize mainframe development and enable teams to proactively identify and resolve issues. Using machine learning, AMI zAdviser monitors mainframe build, test, and deploy functions across DevOps toolchains and then offers AI-led recommendations for continuous improvement. In addition to capturing and reporting on development KPIs, zAdviser captures data on how the BMC DevX products are adopted and used. This includes the number of programs that were debugged, the outcome of testing efforts using the DevX testing tools, and many other data points. These additional data points can provide deeper insight into the development KPIs, including the DORA metrics, and may be used in future generative AI efforts with Amazon Bedrock.

The following architecture diagram shows the final implementation of zAdviser Enterprise utilizing generative AI to provide summarization, analysis, and recommendations for improvement based on the DORA metrics KPI data.

The solution workflow includes the following steps:

The following screenshot shows the LLM summarization of DORA metrics generated using Amazon Bedrock and sent as an email to the customer, with a PDF attachment that contains the DORA metrics KPI dashboard report by zAdviser.

In this solution, you don't need to worry about your data being exposed on the internet when sent to an AI client. The API call to Amazon Bedrock doesn't contain any personally identifiable information (PII) or any data that could identify a customer. The only data transmitted consists of numerical values in the form of the DORA metric KPIs and instructions for the generative AI's operations. Importantly, the generative AI client does not retain, learn from, or cache this data.
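As an illustration of how small that payload can be, here is a hypothetical sketch of such a call using the AWS SDK for Python (boto3) and an example Claude model on Amazon Bedrock. The KPI values, prompt wording, and model ID are assumptions for demonstration and do not reflect the actual zAdviser implementation.

```python
import json
import boto3

# Hypothetical DORA KPI values; only these numbers and the prompt text are sent.
dora_kpis = {
    "deployment_frequency_per_week": 4.2,
    "lead_time_for_changes_hours": 31.5,
    "change_failure_rate_pct": 8.0,
    "mean_time_to_restore_hours": 2.7,
}

prompt = (
    "Summarize the following DORA metrics for a mainframe DevOps team and "
    f"suggest improvements: {json.dumps(dora_kpis)}"
)

bedrock = boto3.client("bedrock-runtime")  # region and credentials come from your AWS config
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model; any Bedrock FM could be used
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }),
)
summary = json.loads(response["body"].read())["content"][0]["text"]
print(summary)
```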

The zAdviser engineering team implemented this feature within a short time span. The rapid progress was facilitated by zAdviser's substantial investment in AWS services and, importantly, the ease of using Amazon Bedrock via API calls. This underscores the transformative power of generative AI technology embodied in the Amazon Bedrock API. This API, equipped with the industry-specific knowledge repository zAdviser Enterprise and customized with continuously collected organization-specific DevOps metrics, demonstrates the potential of AI in this field.

Generative AI has the potential to lower the barrier to entry to build AI-driven organizations. Large language models (LLMs) in particular can bring tremendous value to enterprises seeking to explore and use unstructured data. Beyond chatbots, LLMs can be used in a variety of tasks, such as classification, editing, and summarization.

This post discussed the transformational impact of generative AI technology in the form of Amazon Bedrock APIs equipped with the industry-specific knowledge that BMC zAdviser possesses, tailored with organization-specific DevOps metrics collected on an ongoing basis.

Check out the BMC website to learn more and set up a demo.

Sunil Bemarkar is a Sr. Partner Solutions Architect at Amazon Web Services. He works with various Independent Software Vendors (ISVs) and strategic customers across industries to accelerate their digital transformation journey and cloud adoption.

Vij Balakrishna is a Senior Partner Development manager at Amazon Web Services. She helps independent software vendors (ISVs) across industries to accelerate their digital transformation journey.

Spencer Hallman is the Lead Product Manager for the BMC AMI zAdviser Enterprise. Previously, he was the Product Manager for BMC AMI Strobe and BMC AMI Ops Automation for Batch Thruput. Prior to Product Management, Spencer was the Subject Matter Expert for Mainframe Performance. His diverse experience over the years has also included programming on multiple platforms and languages as well as working in the Operations Research field. He has a Master of Business Administration with a concentration in Operations Research from Temple University and a Bachelor of Science in Computer Science from the University of Vermont. He lives in Devon, PA and when he's not attending virtual meetings, enjoys walking his dogs, riding his bike and spending time with his family.

Read the original post:
Achieve DevOps maturity with BMC AMI zAdviser Enterprise and Amazon Bedrock | Amazon Web Services - AWS Blog


Reinforcement learning is the path forward for AI integration into cybersecurity – Help Net Security

AI's algorithms and machine learning can cull through immense volumes of data efficiently and in a relatively short amount of time. This is instrumental to helping network defenders sift through a never-ending supply of alerts and identify those that pose a possible threat (instead of false positives). Reinforcement learning underpins the benefit of AI to the cybersecurity ecosystem and is closest to how humans learn through experience and trial and error.

Unlike supervised learning, reinforcement learning focuses on how agents can learn from their own actions and feedback in an environment. The idea is that reinforcement learning will maximize its capabilities over time by using rewards and punishments to reinforce positive behavior and discourage negative behavior. Enough information is collected to make the best decision in the future.
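As a rough illustration of that reward-and-punishment loop, here is a minimal tabular Q-learning sketch in Python. The hyperparameter values are arbitrary assumptions and the states and actions are left abstract.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning: the agent's estimate of each (state, action)
# value is nudged toward the observed reward plus the best future value.
q_table = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state, actions):
    """A reward (positive) or punishment (negative) shifts future decisions."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
```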

Alert fatigue for security operations center (SOC) analysts has become a legitimate business concern for chief information security officers, who worry about analyst burnout and employee turnover as a result. Any solution able to handle most of the alert noise so that analysts can prioritize actual threats will save the organization both time and money.

AI capabilities help mitigate the threat posed by large social engineering, phishing, and spam campaigns by understanding and recognizing the kill chain of such attacks before they succeed. This is important given the security resource constraints most organizations experience, regardless of their size and budget.

More sophisticated dynamic attacks are a bigger challenge and, depending on the threat actor, may only be used a limited number of times before the attackers adjust or alter a part of the attack sequence. Here is where reinforcement learning can study the attack cycles and identify applicable patterns from previous attacks that have both failed and succeeded. The more it is exposed to sophisticated attacks and their varied iterations, the better positioned reinforcement learning is to identify them in real time.

Granted, there will be a learning curve at the onset, especially if attackers frequently change how they pull off their attacks. But some part of the attack chain will remain, becoming a pertinent data point to drive the process.

Detection is only one part of monitoring threats. AI reinforcement learning may have applicability in prediction to prevent attacks as well, learning from past experiences and low signals and using patterns to predict what might happen next time.

Preventing cyber threats is a natural advancement from passive detection and is a necessary progression to making cybersecurity proactive rather than reactive. Reinforcement learning can enhance a cybersecurity product's capability by making the best decisions based on the threat. This will not only streamline responses, but also maximize available resources via optimal allocation, coordination with other cybersecurity systems in the environment, and countermeasure deployment. The continuous feedback and reward-punishment cycle will increasingly make prevention more robust and effective the longer it is utilized.

One use case of reinforcement learning is network monitoring, where an agent can detect network intrusions by observing traffic patterns and applying lessons learned to raise an alert. Reinforcement learning can take it one step further by executing countermeasures: blocking or redirecting the traffic. This can be especially effective against botnets where reinforcement learning can study communication patterns and devices in the network and disrupt them based on the best course of response action.
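The sketch below illustrates that idea with a toy, fully simulated environment: traffic signatures stand in for observed patterns, the agent chooses among allow, block, and redirect, and an assumed reward signal teaches it which response to prefer. Everything here, from the signatures to the reward rules, is hypothetical and not drawn from any real product.

```python
import random
from collections import defaultdict

ACTIONS = ["allow", "block", "redirect"]

# Hypothetical traffic signatures; in practice these would be features
# extracted from flows (ports, rates, payload statistics), not neat labels.
def observe_traffic():
    return random.choice(["normal_web", "port_scan", "botnet_beacon"])

def reward(signature, action):
    """Assumed reward: +1 for stopping malicious flows or allowing benign ones,
    -1 for blocking legitimate traffic or letting an attack through."""
    malicious = signature in {"port_scan", "botnet_beacon"}
    if malicious:
        return 1.0 if action in {"block", "redirect"} else -1.0
    return 1.0 if action == "allow" else -1.0

value = defaultdict(float)   # running value estimate per (signature, action)
counts = defaultdict(int)
epsilon = 0.1                # small chance of exploring a non-preferred action

for step in range(5000):
    sig = observe_traffic()
    if random.random() < epsilon:
        act = random.choice(ACTIONS)
    else:
        act = max(ACTIONS, key=lambda a: value[(sig, a)])
    r = reward(sig, act)
    counts[(sig, act)] += 1
    value[(sig, act)] += (r - value[(sig, act)]) / counts[(sig, act)]  # incremental mean

for sig in ["normal_web", "port_scan", "botnet_beacon"]:
    best = max(ACTIONS, key=lambda a: value[(sig, a)])
    print(f"{sig:15s} -> learned response: {best}")
```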

AI reinforcement learning can also be applied in a virtual sandbox environment, where it can analyze how malware operates, which can aid vulnerability management and patch management cycles.

One immediate concern is the number of devices continually being added to networks, creating more endpoints to protect. This situation is exacerbated by remote work arrangements, as well as personal devices being allowed in professional environments. The constant addition of devices will make it increasingly difficult for machine learning to account for all potential entry points for attacks. While the zero-trust approach alone could bring intractable challenges, synergizing it with AI reinforcement learning can achieve strong and flexible IT security.

Another challenge will be access to enough data to detect patterns and enact countermeasures. In the beginning, there may be an insufficient amount of available data to consume and process, which may skew learning cycles or even provide flawed courses of defensive action.

This could have ramifications when addressing adversaries that are purposefully manipulating data to trick learning cycles and impact the ground truth of the information at the onset. This must be considered as more AI reinforcement learning algorithms are integrated into cybersecurity technologies. Threat actors are nothing if not innovative and willing to think outside the box.

Contributing author: Emilio Iasiello, Global Cyber Threat Intelligence Manager, Dentons

Read more from the original source:
Reinforcement learning is the path forward for AI integration into cybersecurity - Help Net Security


Ask AT&T: Revolutionizing Efficiency and Creativity with AI – AT&T Newsroom

Participants showcased their talent in machine learning, code generation, and problem-solving, guided by Ask AT&T. The team used Ask AT&T to research industry trends, draft business plans, conduct SWOT analysis, and design PowerPoint templates.

By the end of the competition, Ask AT&T emerged as an indispensable tool for everyday work. Although AI tools like Ask AT&T have room for improvement, their immense potential was recognized. As AI continues to develop, it will revolutionize our work processes, increasing efficiency and allowing more time for complex tasks. This aligns with our focus on improving internal processes at AT&T.

The TDP's AI Learning & Problem-Solving Challenge was an inclusive event, involving around 700 employees from the corporate systems organization. The competition comprised 16 teams and over 70 participants, from new hires to veterans.

The most innovative teams proposed diverse learning and training tools. Several leaders evaluated the final four contenders, with PLEdge of Progress emerging as the winners. Some of the winning solutions are in the backlog for development.

Participants expressed that AI tools like Ask AT&T, when used effectively, can significantly enhance efficiency and productivity.

Follow this link:
Ask AT&T: Revolutionizing Efficiency and Creativity with AI - AT&T Newsroom


Ethical AI: Tackling Bias And Ensuring Fairness In Machine Learning Algorithms – Dataconomy

One of the most recognizable trends of the early years of the 21st century has been the spread and application of AI (artificial intelligence) across many professional areas. The data analysis, pattern recognition, and decision-making functionalities of AI have produced remarkable efficiencies and ideas. However, ethical concerns have come to dominate as these artificial intelligence systems, including machine learning algorithms, penetrate our daily lives. 2024 marks a significant year in the journey toward addressing these issues, ensuring that AI systems promote equity and do not perpetuate or worsen societal disparities.

The term bias in AI refers to systematic discrimination or advantage afforded to some individuals or groups and not others. This can be expressed in different ways, such as racial, gender, socio-economic status, and age biases, among others. Such prejudices are usually derived from the data used to train machine learning models. If the training data is not representative of the world's varied population, or if it contains historical biases, then AI systems are likely to capture those partialities, resulting in unfair and disproportionate outputs. How these biases play out in AI algorithms and machine learning in practice can be explored through the many AI tutorials and data science courses available online.

The reason to create artificial intelligence systems that are fair is, at its core, justice. These technologies play an ever bigger role in critical fields such as health care, law enforcement, employment, and financial services, where the effects of biased decisions can be life-changing for individuals. Guaranteeing fairness in AI has more than one aim: it's about making systems that mirror our shared values and promote a more equitable way of life.

One of the leading tactics for fighting bias in artificial intelligence is to ensure that the datasets used to train machine learning models are diverse and representative of the global population. This means demographic diversity, but also different experiences, perspectives, and environments. Efforts to audit and cleanse datasets of historical biases are important too.

Transparency is about an AI system that can be understood and investigated by humans in the way it was created. This is closely related to the idea of explainable AI, where models are built to provide reasons for their decisions in a language understandable to human beings. Hence, stakeholders can grasp how and why particular choices were made thereby identifying and mitigating biases.

It is important to continuously check the bias of AI systems. Such checks include both pre-deployment and after-deployment processes that ensure continued fairness even as they encounter new data or scenarios.

Ensuring AI fairness requires developing and implementing ethical AI frameworks, as well as governance arrangements, at the societal and organizational levels. These frameworks can be complex, and dedicated artificial intelligence courses can help practitioners understand how fairness is structured within them. Establishing guidelines, principles, or standards for developing and using ethical artificial intelligence, alongside mechanisms that hold people accountable when others suffer from bad AI decisions, is fundamental in this regard.

Tackling bias in AI is a complex challenge that requires collaboration across disciplines, including computer science, social sciences, ethics, and law. Such collaboration can bring diverse perspectives and expertise to the forefront, facilitating more holistic and effective solutions.

Ethical AI is a dynamic and constantly changing field, and it will remain important as we go forward. Advancements in technology and methodology, combined with a growing public understanding of ethical considerations, are driving the movement toward more equitable AI systems. The concern is not only to stop harm from happening but also to harness AI's potential for societal benefit and human well-being.

In conclusion, bias in AI and fairness issues rank among the most pressing ethical challenges facing the AI community today. Diversity and ethics, continuous vigilance, transparency, accountability, and oversight of the research involved in AI's development will foster outcomes that are not only innovative but also just for people of all backgrounds.

Featured image credit: Steve Johnson/Unsplash

Original post:
Ethical AI: Tackling Bias And Ensuring Fairness In Machine Learning Algorithms - Dataconomy


What is a model card in machine learning and what is its purpose? – TechTarget

What is a model card in machine learning?

A model card is a type of documentation that is created for, and provided with, machine learning models. A model card functions as a type of data sheet, similar in principle to consumer safety labels, food nutrition labels, material safety data sheets, or product spec sheets.

There has been a dramatic rise in the development and adoption of machine learning (ML) and artificial intelligence (AI) during recent years. Further advances in generative AI employ large language models (LLMs) as a core component. However, the many models used in those platforms are increasingly complex and difficult to understand. Even model developers sometimes struggle to fully understand and describe the ways a given model behaves. This complexity has created serious questions about core business values such as transparency, ethics and accountability. Common questions include the following:

First proposed by Google in 2018, the model card is a means of documenting vital elements of an ML model so users -- including AI designers, business leaders and ML end users -- can readily understand the intended use cases, characteristics, behaviors, ethical considerations, and the biases and limitations of a particular ML model.

As late as 2024, there are no current legislative or regulatory requirements to produce or provide model card documentation with ML models. Similarly, there are no currently established standards in model card format or content. However, major ML developers have spearheaded the adoption of model card documentation as a way of demonstrating responsible AI development, and adopters can find model cards for major platforms such as Meta Llama, Google face detection and OpenAI GPT-3.

The rise of ML and AI is driving the need for transparency and responsible governance. Businesses must understand what ML models are for, how they work, how they compare to other competitive models, how they're trained and their suitability for intended tasks.

Model cards are a tool that can address such concerns, which readily impact governance and regulatory issues for the business. Model cards can provide a range of important benefits to ML and AI projects, including the following:

Labels and other informational summaries are generally most effective when they allow comparing similar products side by side using comparable content and formats. However, the information presented on an ML model card can vary. Unlike highly regulated informational displays -- such as food nutritional labeling -- there are no current standards to govern the information or formatting included on ML model cards.

ML models can vary dramatically in their scope, purpose, and capabilities, which makes them hard to regulate. For example, an ML model developed to aid in medical diagnosis can be distinctly different from an ML model created to run analytics on retail sales operations, or from a complex LLM used in an AI construct. Consequently, ML model developers largely use their own discretion to determine what information to include and how that information should be presented. Yet, as leading technology firms develop ML/AI platforms and document those offerings through model cards, some de facto documentation standards are taking shape. Model cards should include the following:

This first section of a model card is typically the introduction to the model which can outline the model's essential details including the model's name, version, revision list, a brief general description of the model, business or developer details and contact information, and licensing details or limits.

This section describes the intended uses, use cases and users for the model. For example, a section on use cases may describe uses in object detection, facial detection or medical diagnoses. This section may also include caveats, use limitations or uses deemed out of scope. For example, a model intended for object detection may detail input from photos or video; output including detection of a specified number of object classes; and other output data such as object bounding box coordinates, knowledge graph ID, object description and confidence score.

This section describes the overall design of the model and any underlying hardware back end that runs the model and hosts related data. Readers can refer to the model card to understand the design elements or underlying technologies that make the model work. For the object detection model example, the model card may describe an architecture including a single image detector model with a Resnet 101 backbone and a feature pyramid network feature map.

This section outlines, describes or summarizes the data used in model training; where and when the data was obtained; and any statistical distribution of key factors in the data which may allow for inadvertent bias. Since training data may be proprietary to the model's developers, training details may be deliberately limited or protected by a separate confidentiality agreement. Training details may also describe training methodologies employed with the model.

This section outlines details related to the model's performance measured against a test data set, not a training data set, as well as details about the test data set itself. For the object detection model example, performance metrics included on the model card may note the use of both Google's internal image data set as well as an open source image set as test data and the number of object classes the model can detect in each data set. Additionally, performance details may outline reported metrics including the precision and accuracy of the object detection. More sophisticated models may utilize other detailed metrics to measure performance.

A key segment of any model card is the section describing limitations, possible biases or variable factors that might affect the model's performance or output. For the object detection model example, known limitations may include factors such as object size, clutter, lighting, blur, resolution and object type since the model can't recognize everything.

This final segment of a model card is often dedicated to business-related details including information about the model's developers, detailed contact, support and licensing information, fairness/privacy and usage information, suggestions for model monitoring, any relevant assessment of impacts to individuals or society, and other ethical or potential legal concerns related to the model's usage.
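Pulling the sections above together, a skeleton model card for an imaginary object-detection model might look something like the following Python dictionary. Every field value is invented for illustration and does not describe a real model.

```python
# Hypothetical model card for an imaginary object-detection model, mirroring the
# sections described above. Every value is illustrative, not from a real model.
model_card = {
    "model_details": {
        "name": "acme-object-detector",
        "version": "1.2.0",
        "description": "Single-image object detector for retail shelf photos.",
        "developers": "Acme ML Team <ml@example.com>",
        "license": "Apache-2.0",
    },
    "intended_use": {
        "primary_uses": ["object detection in still photos"],
        "out_of_scope": ["facial recognition", "medical imaging"],
    },
    "architecture": "Single-shot detector with a ResNet-101 backbone and FPN feature maps",
    "training_data": {
        "source": "Proprietary retail image set collected 2022-2023",
        "notes": "Class distribution audited for geographic and lighting bias",
    },
    "evaluation": {
        "test_set": "Held-out internal set plus an open-source benchmark",
        "metrics": {"precision": 0.91, "recall": 0.87},
    },
    "limitations": ["small or heavily occluded objects", "low-light images"],
    "ethical_considerations": "Not suitable for surveillance of individuals.",
}

# A card like this can be rendered to Markdown and shipped alongside the model.
for section, content in model_card.items():
    print(f"## {section}\n{content}\n")
```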

As leading technology organizations build ML and AI platforms, their work on model cards and other documentation has provided a standard for other ML firms to follow. Today there are many examples of ML model cards to review including the following major examples:

There are also more standardized tools for model card creation, as well as model card repositories, such as these examples:

Both GitHub and Hugging Face provide a repository of model cards which are available for review and study, offering model card examples across many different model types, purposes and industry segments.

Original post:
What is a model card in machine learning and what is its purpose? - TechTarget


Artificial Intelligence in Nutrition: Definition, Benefits, and Algorithms – ThomasNet News

Artificial intelligence (AI) is transforming the way we perceive and manage nutrition. There are applications for diet tracking, which offer personalized guidance and meal plans, solutions that pinpoint ingredients with specific health benefits, and tools for analyzing medical data to inform customized nutrition interventions.

These technologies serve to optimize medical outcomes, improve public health nutrition advice, encourage healthy eating, support chronic disease management, prevent health decline, aid disease prevention, and improve overall well-being.

The use of AI and machine learning (ML) in nutrition has benefits in several areas, including:

A one-size-fits-all approach to public health nutrition guidance fails to account for different dietary preferences, health goals, lifestyles, nutritional requirements, intolerances, allergies, and other health conditions.

A young and active vegan with a nut allergy, for example, has hugely different dietary needs to an elderly carnivore living with diabetes.

AI-powered technology can quickly analyze vast amounts of nutrition data and cross-reference it with an individual's measurements and requirements to produce personalized and optimal nutrition plans for all.

Clinical nutrition can be defined as a discipline that deals with the prevention, diagnosis, and management of nutritional and metabolic changes related to acute and chronic disease and conditions caused by a lack or excess of energy and nutrients.

AI has several applications in this field, from analyzing complex medical data and medical images to informing the decisions of medical practitioners and producing personalized nutrition plans for patients. Because AI solutions can identify previously overlooked associations between diet and medical outcomes, they can improve chronic disease management, optimize patient recovery, and improve patient wellbeing.

A tailored nutrition plan for a diabetic person, for example, will evaluate their gut microbiome and blood glucose levels, while a person with cardiovascular problems may require a diet that takes into consideration their cholesterol levels and blood pressure.

There has been a rise in AI-powered apps that assist users in tracking their nutritional intake while offering personalized guidance on making healthier choices.

The challenge with self-reported food diaries is that they depend on the memory and honesty of individuals, which often leads to under- and over-reporting and other inaccuracies. When certain snacks and meals are forgotten, portion sizes are miscalculated, or food choices that are perceived to be less healthy are deliberately omitted, it is more difficult for nutrition-focused apps and healthcare professionals to provide informed and effective nutritional advice.

With AI-powered computer vision technology, food tracking apps can identify food items, estimate portion sizes, and calculate nutritional values with increasing accuracy. Coupled with wearable devices, which track a users activity, this technology is empowering people to make optimal nutritional choices. Some nutrition apps offer additional personalization.

For example, they might partner with health organizations to obtain their users' electronic health records or feature a nutrition chatbot to quickly respond to queries or perform a dietary assessment.
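The final step of the computer-vision pipeline described above is usually straightforward arithmetic: once a model has predicted a food item and estimated its portion size, nutrients are scaled from a reference table. The sketch below stubs out the model outputs and uses rough, illustrative nutrient values.

```python
# Hypothetical last step of a food-tracking pipeline: a vision model has already
# returned a food label and an estimated portion size; nutrients are then looked
# up from a per-100 g table. The table values below are rough illustrations.
NUTRIENTS_PER_100G = {
    "banana":          {"kcal": 89,  "protein_g": 1.1,  "carbs_g": 22.8, "fat_g": 0.3},
    "grilled_chicken": {"kcal": 165, "protein_g": 31.0, "carbs_g": 0.0,  "fat_g": 3.6},
}

def log_meal(predicted_label: str, estimated_grams: float) -> dict:
    """Scale per-100 g values to the estimated portion size."""
    base = NUTRIENTS_PER_100G[predicted_label]
    scale = estimated_grams / 100.0
    return {k: round(v * scale, 1) for k, v in base.items()}

# Stubbed model output: in a real app these values come from image
# classification and portion-size estimation models.
print(log_meal("grilled_chicken", estimated_grams=180))
# {'kcal': 297.0, 'protein_g': 55.8, 'carbs_g': 0.0, 'fat_g': 6.5}
```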

Nutraceuticals are products derived from food sources that promise additional health benefits to their basic nutritional value. Some examples include glucosamine, which is used in the treatment of arthritis; omega-3 fatty acids, which are used to treat inflammatory conditions; and many nutrient-rich foods, including soybeans, ginger, garlic, and citrus fruits.

Various nutraceutical companies have come under fire for marketing products as health solutions without meaningful scientific evidence to back their claims. But AI looks set to transform the industry's image by finding genuine health solutions fast.

The speed and accuracy with which an AI solution can identify bioactive compounds in foods and then predict the actions they will have in the body is of particular interest to nutraceutical companies. At present, it often takes several years to identify, develop, test, and launch a new ingredient.

In the future, ML solutions are likely to support the development of targeted nutraceutical solutions.

Across 48 countries, 238 million people are facing high levels of acute food insecurity. Meanwhile, one-third of the food produced for human consumption is lost or wasted, which equates to 1.3 billion tons every year.

AI is aiding the global effort to address food insecurity and reduce waste generation.

It can predict demand for certain crops to enable farmers to optimize their planting plans, detect crop and livestock disease at an early stage to contain damage and limit loss, and identify trends in consumer behavior to help retailers forecast demand and better manage their inventories. In addition, AI systems can track food from farm to plate, helping to ensure it is harvested, shipped, and consumed on time.

In the aftermath of a natural disaster or conflict, AI can quickly analyze data to inform humanitarian responses.

The challenges associated with AI in nutrition include:

To improve accuracy and efficiency, ML solutions are fed vast amounts of training data. In nutrition, such data is especially sensitive, including personal information and medical records.

Once a product, such as a food tracking app, is live, additional data is collected, as users are required to disclose personal information, including measurements, medications, food intake, and existing health conditions.

Rigorous safeguarding must be implemented to ensure that all personal data is safely collected and stored and that users understand how it is being used.

AI solutions are known to perpetuate societal stereotypes and biases. Amazon deployed a recruitment system that discriminated against women; the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, built to predict the likelihood of criminals reoffending, misclassified almost twice as many Black defendants; and a Twitter algorithm was proven to favor white faces over Black faces when cropping images.

If nutrition-centered AI solutions are not carefully developed, these tools could reinforce outdated and oversimplified concepts of nutritional health and wellness or reflect biases in the healthcare system.

The use of diverse training data can prevent unfair or inaccurate outcomes, and these tools must be continuously monitored and updated to echo the latest healthcare guidance.

Meal scanning technology enables food-tracking app users to log their intake by simply snapping a photograph of their meals via their cell phone cameras. These tools are exceptionally fast and can be highly accurate, but there are some major limitations to consider.

For example, the technology will struggle to detect a basic ingredient swap in familiar recipes. When scanning a slice of cake, it would record items such as butter and eggs, even if those ingredients had been replaced with avocado and yogurt. Similarly, the app won't register when a creamy pasta sauce is replaced with a milk-based alternative.

Fortunately, these shortcomings can be addressed with some manual effort on the user's part.

Nutrition is a complex and nuanced field, which will continue to benefit from the inputs and expertise of qualified healthcare professionals.

Take the management of chronic illnesses as an example. While an AI-powered app can produce highly customized dietary plans for individuals living with diabetes, celiac disease, or Crohn's disease, additional medical support and monitoring is likely to be required.

Complexities also arise when poor or unusual eating habits are linked to mental health conditions, such as eating disorders. In these scenarios, food-tracking apps are likely to cause more harm than good.

The market for personalized nutrition is fast-expanding, driven largely by rapid developments in AI.

Some exciting industry players include:

Nutrition labels are designed to prevent false advertising and promote food safety. But perhaps one of the most arduous tasks involved in launching new food products, medicines, and supplements is ensuring adherence to labeling regulations and standards, which are not only complex but can also vary enormously from country to country. Manual reviews in the food industry are repetitive, slow, and prone to human error, which, at best, results in delayed product launches and, at worst, poses a threat to human health.

Verifying the accuracy and compliance of labels is made easy with AI algorithms. Manufacturers simply upload their recipes and packaging design to an AI-powered tool, which analyzes the ingredients and identifies any issues. This drives operational efficiencies, reduces product waste, ensures customer safety, and enables more cost-effective international trade.
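At its simplest, part of such a check can be expressed as a rule that compares a recipe's ingredients against the allergens declared on the label. The sketch below is a deliberately simplified, hypothetical example; real compliance systems encode far more detailed, jurisdiction-specific regulations.

```python
# Minimal sketch of an ingredient-compliance check of the kind described above.
# The allergen list and matching rule are illustrative; real systems encode
# jurisdiction-specific labeling regulations.
DECLARABLE_ALLERGENS = {"milk", "egg", "peanut", "soy", "wheat", "tree nut"}

def check_label(ingredients: list[str], declared_allergens: set[str]) -> list[str]:
    """Flag allergens present in the recipe but missing from the label."""
    issues = []
    for ingredient in ingredients:
        for allergen in DECLARABLE_ALLERGENS:
            if allergen in ingredient.lower() and allergen not in declared_allergens:
                issues.append(f"'{ingredient}' requires a declared '{allergen}' allergen")
    return issues

recipe = ["wheat flour", "whole egg", "cane sugar", "peanut butter"]
print(check_label(recipe, declared_allergens={"wheat", "egg"}))
# ["'peanut butter' requires a declared 'peanut' allergen"]
```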

An increasing number of companies are using smart labels to provide consumers with additional nutritional information. This enables people to make more informed decisions about the food and supplements they consume while ensuring food safety.

The growing demand for personalized nutrition has led to the rapid adoption of fitness-tracking apps like MyFitnessPal and MyPlate. Indeed, almost two-thirds of American adults are mobile health app users, according to a 2023 survey.

Amid widespread criticism that these apps are promoting unhealthy diets, extreme exercise regimes, and rapid weight loss, users must understand the technologys limitations.

Here are some important things to consider:

AI-powered apps are more likely to be beneficial when healthcare professionals, including doctors and dietitians, work closely with their patients and clients to recommend appropriate products and monitor usage.

The applications of AI in nutrition are far-reaching, enabling personalized diet plans, enhanced clinical nutrition, the development of targeted nutraceuticals, and more effective methods for addressing food insecurity.

As adoption increases, these solutions will require increasingly robust regulation, particularly in relation to data handling and security, algorithm bias, and consumer education.

With the revenue from health apps forecast to grow to $35.7 billion by 2030, healthcare professionals must be aware of the information that is being communicated to consumers so they can guide their patients toward truly health-promoting options.

As for the developers of AI-powered nutrition technology, inputs from experts in diverse fields, including healthcare, nutrition, technology, and ethics, will ensure solutions are safe and effective.

Read the original:
Artificial Intelligence in Nutrition: Definition, Benefits, and Algorithms - ThomasNet News


The impact of AI and machine learning technology in revolutionizing manufacturing practices – Asia Business Outlook

In an exclusive interview with Asia Business Outlook, BN Shukla, Operations Director, Jabil, India, shares his views on the challenges of interpreting and understanding the decision-making processes, strategies to optimize the implementation of AI & ML, robust measures to protect sensitive production data, how manufacturers ensure compliance with industry standards and regulations, and more. BN Shukla, Operations Director for Jabil in India, has a career spanning more than 28 years with a specialization in operational excellence and business management.

Considering the complexity of AI and machine learning algorithms, especially within critical manufacturing processes, how can manufacturers navigate the challenge of interpreting and understanding the decision-making processes of these systems?

Some of the common challenges in increasing the adoption of digital technology in manufacturing include:

Capital investment: Varying degrees of costs, from IoT sensors used on existing machines, to purchasing large machinery with integrated machine learning solutions, to enterprise-wide infrastructure adaptations, particularly in large-scale projects.

Effective change management: AI/ML is changing the way we do things as we merge the physical and digital together. Strategies must be accompanied by a support structure for employees, empowering them with the right tools and skills, thus creating a culture ripe for a successful transition.

Technical skill gaps: Fuelled by digitalization, the roles and expectations of the workforce on and off the shop floor are evolving. Talents that have digital dexterity and are ready to adapt and innovate in manufacturing processes and adopt digital tools that support those processes will successfully implement new technology and maintain operations.

Data growth, sensitivity, and security: The physical and digital systems in smart factories make real-time interoperability possible. While large volumes of data are generated, challenges remain in data quality and management for decision-making, looming concerns over data and IP privacy, ownership, governance, and an increased risk of an expanded attack surface as numerous machines and devices are connected to networks.

To ensure the quality and reliability of the data used to train and optimize these systems, we have put in place several guard rails:

Datafication: We progress from digitization to digitalization to datafication, where we investigate business processes and transform the process into quantifiable data to track, monitor, and analyze. To do this effectively, we have set up an enterprise-wide Data & AI council, which involves senior members from all the functions to help identify key processes critical to the business and have the process owners work on critical data definitions, data lineage, and data sources. Although this is not technology-specific, it helps to set a good foundation for the organization moving forward. Teams across Jabil are learning how to use data effectively to enable Data to speak, data to act.

AI/ML: Collected data that is not used effectively is a waste of resources. We leverage AI/ML/deep learning to extract value out of the swarm of data we collect each day from our factories and work processes to help deliver business insights, automate tasks, and advance system capabilities. Our AI/ML strategy spans from using AI algorithms to improving our inspection process in the factories. By using advanced data analytics to derive algorithms or new business models, we can gain new insights and intelligence for our business. We're also developing our knowledge database and combining it with Generative AI technology to merge the insights from our self-healing manufacturing line (ready in April 2024) with the know-how of our workforce to continuously train our AI models and guide our technicians to take action.

SAP S4: When Jabil migrated from SAP ECC to SAP S4 Hana in January 2022, we were able to offload the technical debt that came from 20-plus years of over-customization and subsystems that were peripheral to the legacy SAP system. The Hana database also brought about greater speed in data processing and a simplified data structure. Nevertheless, the benefits of SAP S4 migration should not be just about solving technical issues but about bringing new value for the users through new ways of report creation, enhanced user experience, increased productivity, and the ability to leverage new functionality to transform the processes. We are still in the continuous improvement process to better leverage these functionalities and are bringing the users along in the transformation journey.

Automation: Process automation through the use of robotic process automation (RPA) tools has helped us automate many back-end office and repetitive tasks in various functions. Many functional teams who are using the RPA bots also try to humanize the bots and treat them as part of the (digital) workforce, measuring the performance of these bots to ensure we obtain the maximum ROI.

To navigate the challenge of interpreting the complexity of the decision-making process, it is crucial that organizations first have a clear strategy for how they plan to leverage AI/ML in the company. At Jabil, we took a customer-first approach and deliberated from the onset: how we leverage AI/ML is about solving a business problem or providing deep insights to realize a step-change improvement in safety, quality, delivery, and cost.

When interpreting and understanding such systems, communication and engagement with stakeholders are critical to ensure we are working on what matters most to the business.

In light of the challenge of ensuring scalability and adaptability across diverse manufacturing operations, what strategies can manufacturers employ to optimize the implementation of AI and machine learning solutions?

Once we knew what our North Star looked like, we were able to clearly break down obstacles within the People, Process, and Systems categories and develop solutions to take us to the next level. Some of these were:

Focus on people at the heart of transformations by taking an employee-first digitalization approach. Many people are familiar with the saying, "AI will not replace people, but the people who can use it will." With that in mind, the human factor is a major lever for transitioning and tapping into opportunities that come with AI/ML.

Our industry-certified internal courses, in partnership with industry experts and local universities, have allowed us to grow our pool of subject matter experts by ensuring that technological know-how is retained and expanded through customized application-based upskilling. Additionally, as engineers and technicians take business-related modules, they promote diversity in the workplace in the form of business differentiation and innovative decision-making.

Enhance industry ecosystem through public-private initiatives: In many of our locations, we partner with leading equipment providers and government agencies to build a strong manufacturing ecosystem. We must continue to actively partner with academia to create the next generation of talented professionals.

Amidst concerns regarding workforce reskilling and upskilling, how can manufacturers effectively foster collaboration between human workers and intelligent machines in manufacturing processes?

There are seismic shifts with the convergence of technologies across operations, information technology (IT), and supply chains, creating a data-driven environment that enables us to deliver the future of Jabil's manufacturing.

We need to embrace a work environment that is expected to blend advanced technology and digital skills with uniquely human skills to yield the highest level of productivity. The rise of advanced technology can replace the manual or repetitive tasks many jobs entail. This frees up space for skills that are uniquely and essentially human, or so-called soft skills, including critical thinking, people management, creativity, and effective communication. Companies need workers who can exhibit these skills, as well as digital skills, to work alongside robots and technologies.

The broader aim of digital transformation is not just to eliminate tasks and cut costs but to create value, safer workplaces, and meaningful work for people. Industry leaders need to put humans in the loop when preparing their workforce through rethinking work structure, retraining and reskilling talents, and structuring the organization to leverage technology and transform its business.

There's no one-size-fits-all answer to this other than saying that digitalization, digital transformations, and digital readiness of one's operations and workforce are imperative to remain relevant and competitive in the long term.

We believe that the best way to make our organization more data-centric and digital is to invest in those who are adaptable, curious, and flexible. We look to our existing and future talents with the logic that digital transformation is changing everyones role, from the factory floor to our executives.

This marriage of multi-generational talents not only propels the industry forward but also heralds change within the industry itself, making manufacturing a destination for innovative jobs and being the continued change maker in India's socio-economic landscape.

Given the cybersecurity risks associated with the adoption of AI and machine learning technologies in manufacturing, how can manufacturers implement robust measures to protect sensitive production data and mitigate potential cyber threats and data breaches?

At Jabil, we take cybersecurity seriously. Through our industry expertise and enterprise education and awareness efforts, we are building a data protection culture at Jabil, where our employees are empowered, and our customers and partners are confident in our ability to conduct business safely in today's evolving digital world.

Jabil delivers a three-pronged risk management methodology as part of our Defense-in-Depth approach:

This layered system provides several levels of protection for data, does not rely on any single tool or policy, and enables redundancy in our systems and processes.

We manage digital security guided by the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF), coupled with best-of-class solutions for data protection, threat detection, and continuous monitoring within Jabil's Security Operations Center (SOC).

Backed by policies and procedures to ensure enterprise resiliency, we provide robust and holistic risk-based guidance and high-quality shared cybersecurity services and solutions. To ensure that our capabilities are always relevant and updated, we use a leading third-party assessor to rate our security program as a whole and conduct penetration tests (PenTests) to monitor compliance with policy and identify gaps for remediation.

We have also built an effective security program that centers around technical controls and empowering our people to be our best line of defense by equipping them with top-tier cybersecurity education and awareness programs that provide them with the information they need to stay safe online in their personal and professional lives.

A great example is educating our employees before they click and providing best practices and tools to analyze site URLs after an individual click. We ensure our readiness to act if cyberattacks breach our defenses through continuous improvement programs such as tabletop exercises, a ransomware playbook, and incident response.

See the rest here:
The impact of AI and machine learning technology in revolutionizing manufacturing practices - Asia Business Outlook


Study Shows UK Leading the AI and Machine Learning Charge for Enhanced Productivity – Automation.com

Global research report from Rockwell Automation highlights how artificial intelligence is being deployed to give the workforce superpowers and build resiliency in operations.

Rockwell Automation, Inc., the world's largest company dedicated to industrial automation and digital transformation, today announced the results of the 9th annual State of Smart Manufacturing Report, offering valuable insights into trends, challenges, and plans for global manufacturers. The study surveyed more than 1,500 respondents from 17 countries, including the United Kingdom (UK), France, Germany, Italy, and Spain.

Among many findings, three major trends emerge from this report: the AI revolution is here for manufacturing, technology is being deployed to give the workforce superpowers, and building resiliency in operations is growing in importance.

"The world has changed, and manufacturing has changed with it, but there is more to do," said Asa Arvidsson, regional vice president sales, north region, Rockwell Automation. "Talent continues to be elusive globally. As manufacturers continue to seek opportunities for profitable growth, they are finding that uncertainty in workforce availability is impacting quality and their ability to meet their customers needs and transform at pace.

"The clear message from this report is that manufacturers view technology as an advantage for improving quality, agility, and innovation, and for attracting the next generation of talent. Manufacturers expect to mitigate risk through technology tied to process and people, build resiliency, and drive future success."

Key UK findings include:

According to this year's survey, automation and optimization through artificial intelligence (AI) and machine learning (ML) are the main reasons for smart manufacturing investments within the UK. In the 2024 survey, 88% of responding companies in the UK said they have invested, or plan to invest in the next 12 months, in AI and ML. This is above the European figure of 84% and highest among all the European countries surveyed, matching the United States. When companies deploy the technology, there are four standout applications: quality control (38%), cybersecurity (37%), logistics (34%), and customer service (32%).

In any digital transformation, employees risk being the forgotten factor. According to the survey, manufacturers are investing in people and technology to advance. Rather than having AI replace roles, organizations are looking to use AI to tackle roles they are struggling to fill today. To address the manufacturing industry's labor shortage and skills gap, 38% of companies in the UK are spreading the net in their search for talent by leveraging remote work to access a wider talent pool for remote-capable jobs.

Quality remains a top priority for companies in the UK, which ranked it first at 42%. While Europe also listed cost and efficiency, along with quality, in their top three, the UK went in a different direction. Its top priorities behind quality were improving the company's financial position and improving decision-making with data, both at 35%.

"The findings of this year's State of Smart Manufacturing Report underscore a pivotal moment for the industry, as UK manufacturers lead the charge in integrating AI and machine learning technologies, Arvidsson concluded. This strategic embrace is not merely about technological adoption but signifies a broader transformation towards smarter, more resilient manufacturing ecosystems. By leveraging AI to enhance data-driven decision-making and operational efficiency, UK manufacturers are setting new benchmarks for innovation and competitiveness on the global stage.

The full findings of the report can be found here.

This report analyzed feedback from 1,567 respondents from 17 of the top manufacturing countries with roles from management up to the C-suite and was conducted in association with Sapio Research and Plex Systems. The survey covered discrete, process, and hybrid industries across a balanced distribution of company sizes, with revenues spanning $10 million to over $10 billion, providing a wide breadth of manufacturing business perspectives.

Rockwell Automation, Inc.is a global leader in industrial automation and digital transformation. We connect the imaginations of people with the potential of technology to expand what is humanly possible, making the world more productive and more sustainable. Headquartered in Milwaukee, Wisconsin, Rockwell Automation employs approximately 29,000 problem solvers dedicated to our customers in more than 100 countries.


Continue reading here:
Study Shows UK Leading the AI and Machine Learning Charge for Enhanced Productivity - Automation.com

Read More..

Why the CEO of Quantcast is betting on "personalized AI" – Big Think

There's a new startup in San Francisco, and they're into marketing. Alone, that wouldn't be so interesting, but this company has made its business about algorithmic marketing. This is 2006, in the days when Facebook was still new and generative artificial intelligence was a conversation among science fiction fans. But this startup is sure that machine learning is the future.

The startup secured a major client, a local tourist board, and was eager to turn this into an early win. So the team put their heads together and burned the midnight oil. Why would anyone come to San Francisco? Who should they target, and how? They came up with the old, familiar answers: Alcatraz, the Golden Gate Bridge, and wine country. But then they turned to their algorithm to measure the decision-making behaviors of potential San Francisco visitors.

Their systems buzzed and whirred, and the answer came back: dentistry. At first, the CEO thought it was a mistake. Dentistry? San Franciscans have nice teeth, no doubt, but they aren't famous for them. Then the team realized something: the annual Dentistry Association conference was about to be held in San Francisco. No one had known. No one had even thought of that. But the machine did. The team whipped up a presentation for their client, and everyone thought it was great.


That's the true story of one of Quantcast's earliest successes. Today, Quantcast is one of the biggest AI-driven advertising technology companies in the world, with offices in ten countries and hundreds of employees. They were one of the few to recognize the power of AI before the rest of the world did. Last month, Big Think talked with Quantcast founder and CEO Konrad Feldman to make sense of how AI has changed marketing and will continue to revolutionize business.

Almost everyone in the world uses some kind of bank. Billions of transactions occur every day, and each of those transactions represents a behavior. A boy is buying a chocolate bar from a vending machine. A pensioner is moving money between pension accounts. A young couple is putting down a deposit on a house. A businessman is giving a generous tip. Money moves, and it tells us a story. We can often tell more about people's behavior from their spending habits than from any self-reported account.

The problem, then, is how to make sense of those stories. How can anyone hope to read the habits of billions of microtransactions? "I started my career in research in neural networks at UCL," Feldman tells Big Think. "I started a business using machine learning techniques to find patterns, predominantly in the financial services industry. I ended up working in lots of different business domains where it was really hard for people to interpret all of the data and find the patterns of activity that they needed to address particular business problems. We ended up building a lot of systems in the banking sector to help the big banks detect things like money laundering and terrorist financing."

Having fine-tuned various algorithms in the financial sector, Feldman looked to move into the online space; Quantcast was focused on dot-com entrepreneurs just starting out. But now there was a problem. In the days before cloud computing and the internet of everything, the internet was a web of tiny pockets and gated communities. "Quantcast Measure is the product we launched in 2006 to help websites of any size," explains Feldman. "It was a classic sort of disruptive innovation initially. These algorithms have the ability to provide much broader visibility to better understand audiences. Over time, larger and larger sites realized the richness of audience information they could derive, and participate in machine learning. That was the first use of Quantcast Measure."

Marketers need data. But everyone has their own data sets, often very small. Quantcast offered the opportunity to combine them and analyze them. "We had to solve some really thorny technical issues," says Feldman, "both in terms of the sort of data processing architectures we needed and because the data volumes involved are just absolutely immense."

It's a common misconception that machine learning and AI are some kind of data-processing panacea. A computer can do billions of calculations per second, so surely it can make light work of all that data? But an algorithm is only as good as the humans who engineer it. As Feldman puts it, "A lot of people, when they think about AI or machine learning, think of it as some sort of holistic algorithm that does everything. In fact, even though we have algorithms now in large language models (LLMs) that can actually do a number of different things, fundamentally, most specific problems require specific algorithms and specific tuning and application of those algorithms."

The explosion of GPTs is a good example of this. OpenAI has produced one of the most powerful AIs in the history of civilization, but after a few months its usage slipped, and people started to see it as a waste of time. Then, at the end of 2023, OpenAI introduced custom GPTs. Essentially, anyone could fine-tune and create an algorithm of their own on top of OpenAI's LLMs. Users started to craft prompts in an algorithmic style: "Imagine you're a movie critic," "In the style of a romantic poet," "You are Albert Einstein," "Explain like I'm five years old," and so on. This is the kind of task-specific tuning Feldman describes: the work that turns general-purpose machine learning into something practical for a particular job.
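
A minimal sketch of that role-prompt pattern is shown below. It simply wraps a user request in a fixed persona carried by a system message, the same structure custom GPTs build on; the function name and persona text are illustrative assumptions, and the resulting list is what would be handed to a chat-completion endpoint.

```python
# Sketch of the persona / system-prompt pattern described above.
# The persona lives in a system message prepended to every user request.

def build_persona_messages(persona: str, user_request: str) -> list[dict]:
    """Wrap a user request in a fixed persona, custom-GPT style."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Stay in character and answer every "
                    f"request from that perspective."},
        {"role": "user", "content": user_request},
    ]

messages = build_persona_messages(
    persona="a film critic who explains plots like I'm five years old",
    user_request="Summarize the movie Inception.",
)
for m in messages:
    print(m["role"], ":", m["content"])
```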

"The machine learning we use to measure audiences is different from the machine learning we're using to build predictive audience models for advertising delivery," explains Feldman. "It's different from the machine learning that's been used to ensure brand safety or to price individual impressions. We've developed different types of machine learning approaches to solve some of these fundamental problems." Quantcast's marketing algorithms will "capture information in real time [and] then make adjustments as they go along. They are interactive tools that will optimize for some specified goal as they're working."
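
Quantcast's actual systems are not public, but the "adjust as you go, optimize toward a specified goal" idea in the quote above is the classic online-optimization loop. The sketch below is a generic, assumed example: an epsilon-greedy choice among ad creatives whose click-through estimates are updated after every impression; all names and numbers are invented for illustration.

```python
import random

# Hypothetical creatives with unknown true click-through rates (simulation only).
TRUE_CTR = {"creative_a": 0.02, "creative_b": 0.05, "creative_c": 0.03}

estimates = {name: 0.0 for name in TRUE_CTR}   # running CTR estimate per creative
impressions = {name: 0 for name in TRUE_CTR}
EPSILON = 0.1                                   # fraction of traffic spent exploring

def choose_creative() -> str:
    """Mostly exploit the best current estimate, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

def record_outcome(name: str, clicked: bool) -> None:
    """Update the running estimate after each impression (adjust as you go)."""
    impressions[name] += 1
    estimates[name] += (clicked - estimates[name]) / impressions[name]

for _ in range(10_000):                         # simulated impression stream
    name = choose_creative()
    clicked = random.random() < TRUE_CTR[name]
    record_outcome(name, clicked)

print({k: round(v, 3) for k, v in estimates.items()}, impressions)
```

Over enough impressions the loop shifts most traffic to the creative with the best observed performance, which is the behavior the quote attributes to goal-directed, real-time optimization.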

The clear implication is that the next stage of AI will be more flexible, fine-grained, and fine-tuned. For AI to become really useful, and for the conversation to become really exciting, AI has to become personalized AI.

"We believe it will be possible to build machine learning models that could actually infer audience characteristics," says Feldman. "Humans are really good [at] pattern matching; that's how we've evolved. But machines also have flexibility because they can deal with so many dimensions of data. If you're going to adjust and tailor the content, if you're going to modify content for each individual recipient, well then, people can't do that. You have to use a computer to do it."

Read the original post:
Why the CEO of Quantcast is betting on "personalized AI" - Big Think

Read More..