
Japan’s AI draft guidelines ask for measures to address overreliance – Japan Today

Companies and organizations that utilize artificial intelligence will be required to take measures to reduce the risk of overreliance on the technology, according to draft guidelines by a Japanese government panel.

The draft guidelines obtained by Kyodo News also call on AI developers to be careful not to use biased data for machine learning, while urging them to maintain records of their interactions with the technology, to be provided in the event of any issues.

The panel, which is tasked with discussing the country's AI strategy, is expected to finalize the guidelines by the end of the year. Japan, this year's chair of the Group of Seven industrialized nations, is also working with other members on drawing up international guidelines for AI developers.

The draft outlines 10 basic rules for AI-related businesses, such as ensuring fairness and transparency with regard to protecting human rights and preventing personal information from being given to third parties without an individual's permission.

The rules also ask that information be provided about how data is acquired from an individual or entity and how it is then used by related parties.

Companies that develop AI platforms, providers of services that utilize the technology and users will all be required to share some degree of responsibility.

The guidelines provide principles according to business categories. Developers are requested to ensure that data employed for AI purposes is both accurate and up to date, and that they preferably adopt measures to ensure information that has not been approved for use cannot be accessed.

Meanwhile, providers that utilize AI will be asked to warn users to avoid inputting personal information that they do not want accessed by third parties, and guarantee that their services are limited to their intended use to prevent bad actors from employing the technology for malign purposes.

Link:
Japan's AI draft guidelines ask for measures to address overreliance - Japan Today


Psychologists use machine learning to unveil unexpected links between positive communication and romantic outcomes – PsyPost

New research sheds light on how positive communication can shape satisfaction and desire within romantic relationships. The study, which utilized machine learning techniques, indicates that the impact of positive communication is more nuanced than previously thought.

While actions like showing affection, offering compliments, and expressing fondness are generally linked to higher levels of sexual and relationship satisfaction in couples, certain combinations of positive communication appear to lead to varying degrees of satisfaction, influenced by factors like age and the balance of compliments and affection.

The research has been published in Sexual and Relationship Therapy.

"I study the ways in which individuals and couples can maintain satisfying sexual lives over time," said study author Christine E. Leistner, an associate professor in the Department of Public Health and Health Services Administration at California State University, Chico. "Parenthood is a transition that has been associated with lower levels of sexual satisfaction and desire, especially for women. So, I was interested in this topic because it provides tangible information about communication skills that couples can begin engaging in anytime."

The researchers collected data from 246 couples, amassing a total of 6,416 observations over a 30-day period. All the participants were either married or cohabiting. Each partner in these couples shared their experiences, providing researchers with a wealth of information about their daily lives. Participants reported on various aspects of their relationships, including the level of positive communication they experienced.

The researchers used two different methods to analyze their findings: traditional statistical analysis and advanced machine learning techniques. These two approaches offered complementary insights into the complex interplay of communication and relationship dynamics.

One of the most significant discoveries was that positive communication consistently predicted higher levels of sexual and relationship satisfaction for both individuals in the couple and their partners. When individuals experienced more positive communication from their partners on a given day, they reported increased satisfaction and desire in their relationship, as did their partners. This emphasized the powerful influence of daily acts of kindness and affection.

While all aspects of positive communication had a positive impact on outcomes, the study also highlighted nuanced differences between the communication subscales. Fondness, affection, compliments, and sharing each played unique roles in predicting satisfaction and desire. For instance, fondness and compliments were strong predictors of sexual satisfaction, while partners' compliments and affection were particularly relevant for sexual desire.

"The main takeaways are that when individuals with children engage in positive communication that includes 1) expressing fondness toward their partner, 2) providing their partner with physical and emotional affection, 3) complimenting their partner, and 4) sharing or disclosing information about their inner world, they are more sexually and romantically satisfied in the relationship and have more sexual desire for their partner," Leistner told PsyPost.

"On a daily level, when individuals who are parents perceive their partner engaging in positive communication on a given day, they have higher levels of relationship and sexual satisfaction the next day and their partners have higher levels of sexual desire for them the next day."

But the researchers were surprised to find several unique nonlinear interactions when using machine learning techniques. These interactions added depth and nuance to the understanding of how positive communication affects satisfaction and desire in romantic relationships.

A noteworthy interaction was discovered between an individual's perception of their partner's compliments and affection concerning sexual satisfaction. Surprisingly, at high levels of both compliments and affection, the interaction predicted a decrease in sexual satisfaction for most participants. However, for a subset of participants, having high levels of compliments and affection actually increased sexual satisfaction.

This finding suggests that, for some couples, an abundance of compliments and affection positively predicts sexual satisfaction, but for others, it may lead to lower sexual satisfaction.

In addition, age was found to influence the connection between perceived affection and sexual desire differently for older and younger partners. Younger individuals reported higher sexual desire when their partners were perceived as less affectionate, whereas older individuals experienced higher sexual desire when their partners were more affectionate. This finding suggests that age may impact how sexual desire is expressed or communicated within couples.
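
As a minimal sketch of how a tree-based model can surface a nonlinear interaction like the compliments-by-affection pattern described above, consider the following. The data, column names, and model here are synthetic illustrations, not the study's actual pipeline or dataset.

```python
# Illustrative only: detecting a nonlinear two-way interaction with a
# tree-based model and partial dependence. All data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n = 6416  # matches the number of daily observations in the study
df = pd.DataFrame({
    "fondness": rng.uniform(1, 5, n),
    "affection": rng.uniform(1, 5, n),
    "compliments": rng.uniform(1, 5, n),
    "sharing": rng.uniform(1, 5, n),
})
# Synthetic outcome with a built-in nonlinear compliments-x-affection term
df["satisfaction"] = (
    0.4 * df["fondness"]
    + 0.3 * df["compliments"]
    - 0.05 * (df["compliments"] * df["affection"] - 9) ** 2
    + rng.normal(0, 0.5, n)
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(df.drop(columns="satisfaction"), df["satisfaction"])

# Two-way partial dependence shows how predicted satisfaction varies
# across joint levels of compliments and affection
result = partial_dependence(
    model,
    df.drop(columns="satisfaction"),
    features=[("compliments", "affection")],
    grid_resolution=10,
)
print(result["average"].shape)  # (1, 10, 10): a grid of predictions
```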

"Our results demonstrate that positive communication between romantic partners is linked to higher levels of sexual and relationship satisfaction and sexual desire for both partners," Leistner explained. "However, the machine learning analysis reveals that this link may not be positive or linear for everyone for every outcome."

While this study offers valuable insights into the importance of positive communication in romantic relationships, it's important to acknowledge some limitations. Firstly, the sample primarily consisted of white, educated, and heterosexual couples. Future research should aim to include more diverse groups to ensure the generalizability of these findings across different populations and relationship dynamics.

Additionally, this study focused solely on positive communication and did not explore negative communication patterns. Understanding the interplay between positive and negative communication could provide a more comprehensive understanding of relationship dynamics.

The study, "Associations between daily positive communication and sexual desire and satisfaction: an approach utilizing traditional analyses and machine learning," was authored by Christine E. Leistner, Laura M. Vowels, Matthew J. Vowels, and Kristen P. Mark.

Link:
Psychologists use machine learning to unveil unexpected links between positive communication and romantic outcomes - PsyPost


5 ways AI is leveling the battlefield – Fox News

The AI revolution started by ChatGPT continues to accelerate, with machine learning showing up in everything from e-commerce to tractors. And while the applications continue to explode, it's becoming clear that AI can help smaller players compete by harnessing their data in the same way industrial behemoths have for decades.

In warfare, AI is giving a similar edge to smaller, tech-savvy militaries for good and ill. Here are five ways AI is already finding its way onto the battlefield, and how it is likely to evolve over the next few years:

Decision-making: Generative AI tools like ChatGPT, Bard or Midjourney use internet data to train a model so it can predict how to complete tasks like writing a line of computer code or creating a new painting in Picasso's style. These same AI techniques can also help military commanders formulate plans.

An MQ-9 Reaper remotely piloted aircraft (RPA) flies during a training mission at Creech Air Force Base on Nov. 17, 2015, in Indian Springs, Nevada. (Isaac Brekken/Getty Images)

Normally, legions of planners think through each aspect of an operation, from food and fuel to missile attacks, and build courses of action for a commander to consider. Trained with data from past operations, the characteristics of the force, and estimates about the enemy, generative AI models can create plans that, although not perfect, give planners a head start. And because an AI tool can think through more options than a staff of humans, it can reveal alternatives human planners may not have considered.


Ukrainian troops are already using AI tools like these to stay a step ahead of Russian forces, while China and the U.S. are incorporating AI-enabled decision aids into their command and control systems. Fighter cockpits will soon include AI-enabled assistants that help interpret data or fly a plane while the pilot assesses the situation.

A Russian soldier was seen surrendering to a Ukrainian drone May 9 in edited video released by Ukraine's 92nd Mechanized Brigade. (Ukraine's 92nd Mechanized Brigade)

Intelligence analysis: AI-enabled image recognition has been around for about a decade. Now militaries, like some businesses, are pursuing AI-enabled algorithms to predict what intelligence data suggests about an adversary's plans and intentions. And going one step further, AI-enabled tools will soon predict ways a nation can operate, equip and position its military to deter an opponent or push it into a misstep.

Smart weapons: Militaries already use killer robots. Automated torpedoes and missiles have been around for decades. But AI algorithms can make automated weapons smarter and more discriminating. The same way Google Translate uses AI to recognize text, algorithms help weapons not just discern a tank from a trolley, but also predict whether the tank is the best one to hit based on its location, direction of movement, and armament.

AI is also helping weapons and drones navigate. In Ukraine, satellite navigation systems like GPS are routinely jammed or spoofed. In the same way humans turn to landmarks, terrain, ocean waves, stars or radio towers to orient themselves, AI algorithms can help weapons and drones predict their location based on what their sensors see.


Predictive maintenance: Soldiers could die if weapons or vehicles break during a fight, so militaries check and replace equipment more often than needed. The result is higher costs and more time in the shop. To break that cycle, militaries, like many airlines, shipping fleets and trucking companies, are using AI-enabled tools to predict when a system or part is nearing the failure point and should be repaired or replaced. With the U.S. military becoming smaller each year, it needs every tank, ship or plane to stay online as much as possible.
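
To make the idea concrete, here is a hedged sketch of the kind of model such programs rely on: a classifier trained on equipment telemetry that flags units likely to fail soon, so maintenance happens when the data warrants it rather than on a fixed schedule. The sensor features and data below are invented for illustration.

```python
# Minimal predictive-maintenance sketch: flag parts likely to fail soon
# from sensor telemetry. Features and data are synthetic illustrations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
hours_since_overhaul = rng.uniform(0, 2000, n)
vibration_rms = rng.normal(1.0, 0.2, n) + hours_since_overhaul / 4000
oil_temp_c = rng.normal(80, 5, n) + hours_since_overhaul / 400
# Failure risk rises with wear; a random draw yields a binary label
risk = 1 / (1 + np.exp(-(0.002 * hours_since_overhaul
                         + 2 * (vibration_rms - 1.2))))
fails_within_100h = rng.random(n) < risk

X = np.column_stack([hours_since_overhaul, vibration_rms, oil_temp_c])
X_train, X_test, y_train, y_test = train_test_split(
    X, fails_within_100h, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
# Maintenance is scheduled only where predicted failure probability is high
p_fail = clf.predict_proba(X_test)[:, 1]
print(f"units flagged for early service: {(p_fail > 0.7).sum()}")
```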

Drone warfare: Uncrewed systems are where the above trends come together. The wars in Ukraine and Nagorno-Karabakh show the side with the best and most drones has an advantage. Despite being outnumbered 3 to 1 on the ground and 10 to 1 in the air, Ukrainian troops stopped Russia's advance and have slowly pushed Moscow's forces back in no small part because of Ukrainian drone boats and aircraft.

Equipped with AI-enabled algorithms to help them navigate and avoid threats, Ukrainian drones are striking targets themselves or finding targets for artillery and rocket attacks. By stretching Ukraine's munitions stocks, drones are allowing its troops to punch above their weight. Facing missile shortages of its own, Russia also turned to drones, which attack Ukrainian infrastructure and help high-end Russian missiles reach their targets.


Although a Terminator-like hellscape is not on the horizon, military applications of AI are likely to make warfare more lethal, more intense and more competitive. Most concerning for the Pentagon, smaller and less advanced U.S. adversaries like Iran or terrorists can use AI-enabled software and uncrewed systems to level the playing field.

To stay ahead, the U.S. military should lean into AI to become more creative and precise and to get the most out of every defense dollar.


Originally posted here:
5 ways AI is leveling the battlefield - Fox News


Scientists begin building AI for scientific discovery using tech behind ChatGPT – Tech Xplore


An international team of scientists, including from the University of Cambridge, has launched a new research collaboration that will leverage the same technology behind ChatGPT to build an AI-powered tool for scientific discovery.

While ChatGPT deals in words and sentences, the team's AI will learn from numerical data and physics simulations from across scientific fields to aid scientists in modeling everything from supergiant stars to the Earth's climate.

The team launched the initiative, called Polymathic AI, earlier this week, alongside the publication of a series of related papers on the arXiv open access repository.

"This will completely change how people use AI and machine learning in science," said Polymathic AI principal investigator Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics in New York City.

The idea behind Polymathic AI "is similar to how it's easier to learn a new language when you already know five languages," said Ho.

Starting with a large, pre-trained model, known as a foundation model, can be both faster and more accurate than building a scientific model from scratch. That can be true even if the training data isn't obviously relevant to the problem at hand.

"It's been difficult to carry out academic research on full-scale foundation models due to the scale of computing power required," said co-investigator Miles Cranmer, from Cambridge's Department of Applied Mathematics and Theoretical Physics and Institute of Astronomy. "Our collaboration with Simons Foundation has provided us with unique resources to start prototyping these models for use in basic science, which researchers around the world will be able to build fromit's exciting."

"Polymathic AI can show us commonalities and connections between different fields that might have been missed," said co-investigator Siavash Golkar, a guest researcher at the Flatiron Institute's Center for Computational Astrophysics.

"In previous centuries, some of the most influential scientists were polymaths with a wide-ranging grasp of different fields. This allowed them to see connections that helped them get inspiration for their work. With each scientific domain becoming more and more specialized, it is increasingly challenging to stay at the forefront of multiple fields. I think this is a place where AI can help us by aggregating information from many disciplines."

The Polymathic AI team includes researchers from the Simons Foundation and its Flatiron Institute, New York University, the University of Cambridge, Princeton University and the Lawrence Berkeley National Laboratory. The team includes experts in physics, astrophysics, mathematics, artificial intelligence and neuroscience.

Scientists have used AI tools before, but they've primarily been purpose-built and trained using relevant data.

"Despite rapid progress of machine learning in recent years in various scientific fields, in almost all cases, machine learning solutions are developed for specific use cases and trained on some very specific data," said co-investigator Francois Lanusse, a cosmologist at the Center national de la recherche scientifique (CNRS) in France.

"This creates boundaries both within and between disciplines, meaning that scientists using AI for their research do not benefit from information that may exist, but in a different format, or in a different field entirely."

Polymathic AI's project will learn using data from diverse sources across physics and astrophysics (and eventually fields such as chemistry and genomics, its creators say) and apply that multidisciplinary savvy to a wide range of scientific problems. The project will "connect many seemingly disparate subfields into something greater than the sum of their parts," said project member Mariel Pettee, a postdoctoral researcher at Lawrence Berkeley National Laboratory.

"How far we can make these jumps between disciplines is unclear," said Ho. "That's what we want to doto try and make it happen."

ChatGPT has well-known limitations when it comes to accuracy (for instance, the chatbot says 2,023 times 1,234 is 2,497,582 rather than the correct answer of 2,496,382). Polymathic AI's project will avoid many of those pitfalls, Ho said, by treating numbers as actual numbers, not just characters on the same level as letters and punctuation. The training data will also use real scientific datasets that capture the physics underlying the cosmos.
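
The xVal paper cited below describes one way to treat numbers as numbers. The toy sketch that follows illustrates the general idea, with invented dimensions and vocabulary; it is not the team's actual implementation. Every number shares a single [NUM] embedding that is scaled by its value, so nearby quantities get nearby representations instead of unrelated character strings.

```python
# Toy sketch of number-aware encoding in the spirit of the xVal paper
# cited below. Vocabulary, dimensions, and scaling are illustrative.
import re
import numpy as np

rng = np.random.default_rng(0)
d_model = 8
vocab = {"[NUM]": 0, "mass": 1, "=": 2, "kg": 3}
embeddings = rng.normal(size=(len(vocab), d_model))

def encode(text: str) -> np.ndarray:
    vectors = []
    for tok in text.split():
        if re.fullmatch(r"-?\d+(\.\d+)?", tok):
            # Numbers: one shared [NUM] embedding, scaled by the value,
            # so 2.5 and 250 differ continuously rather than as strings
            vectors.append(float(tok) * embeddings[vocab["[NUM]"]])
        else:
            vectors.append(embeddings[vocab[tok]])
    return np.stack(vectors)

enc = encode("mass = 2.5 kg")
print(enc.shape)  # (4, 8): one vector per token
```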

Transparency and openness are a big part of the project, Ho said. "We want to make everything public. We want to democratize AI for science in such a way that, in a few years, we'll be able to serve a pre-trained model to the community that can help improve scientific analyses across a wide variety of problems and domains."

More information: Michael McCabe et al, Multiple Physics Pretraining for Physical Surrogate Models, arXiv (2023). DOI: 10.48550/arxiv.2310.02994

Siavash Golkar et al, xVal: A Continuous Number Encoding for Large Language Models, arXiv (2023). DOI: 10.48550/arxiv.2310.02989

Francois Lanusse et al, AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models, arXiv (2023). DOI: 10.48550/arxiv.2310.03024

The rest is here:
Scientists begin building AI for scientific discovery using tech behind ChatGPT - Tech Xplore


FOXO Technologies Announces Issue Notification from USPTO for a … – BioSpace

MINNEAPOLIS--(BUSINESS WIRE)-- FOXO Technologies Inc. (NYSE American: FOXO) ("FOXO" or the "Company"), a leader in the field of commercializing epigenetic biomarker technology, today announced that the United States Patent & Trademark Office (USPTO) has provided an Issue Notification for a key patent utilizing a machine learning model trained to determine a biochemical state and/or medical condition using DNA epigenetic data to enable the commercialization of epigenetic biomarkers. Previously, the USPTO had issued Notices of Allowance to the Company for two related patents, and the Company awaits the Issue Notification for the second allowed patent.

The first patent, for which the Company has received an Issue Notification, aids in practical applications of the technology that include generating epigenetic biomarkers. On occasion, epigenetic data may be missing or unreliable because a specific DNA site was not assayed or was unreliably measured. The patented method uses machine learning estimators to fill in the missing or unreliable epigenetic values at specific loci.
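
As a rough illustration of the general technique (not FOXO's proprietary method), a generic machine learning imputer can estimate a missing methylation value from individuals with similar profiles:

```python
# Hedged sketch: imputing missing methylation values with a generic
# k-nearest-neighbors estimator. Values below are invented examples.
import numpy as np
from sklearn.impute import KNNImputer

# Rows: individuals; columns: methylation beta values (0..1) at CpG loci.
# np.nan marks sites that were not assayed or were unreliably measured.
beta = np.array([
    [0.91, 0.12, 0.45, 0.78],
    [0.88, np.nan, 0.41, 0.80],
    [0.15, 0.90, np.nan, 0.22],
    [0.15, 0.88, 0.61, 0.25],
])

imputer = KNNImputer(n_neighbors=2)
filled = imputer.fit_transform(beta)
print(np.round(filled, 2))  # missing loci estimated from similar profiles
```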

The second patent, for which the Company received a Notice of Allowance, leverages machine learning to estimate aspects about an individual's health, such as disease states, biomarker levels, drug use, health histories, and factors used to underwrite mortality risk. Commercial applications for this patent may include a potential AI platform for the delivery of health and well-being data-driven insights to individuals, healthcare professionals and third-party service providers, life insurance underwriting, clinical testing, and consumer health.

To support these patents, the Company has generated epigenetic data for over 13,000 individuals through internally sponsored research and external research collaborations. Pairing these data with broad phenotypic information is expected to help drive product development as demonstrated in the Company's patent claims.

Mark White, Interim CEO of FOXO Technologies, stated, "As a pioneer in epigenetic biomarker discovery and commercialization, FOXO Technologies is dedicated to harnessing the power of epigenetics and artificial intelligence to provide data-driven insights that foster optimal health and longevity for individuals and organizations alike. With a strong commitment to improving the quality of life and promoting well-being, FOXO Technologies stands at the forefront of innovation in the biotechnology industry, with plans to leverage AI technology in order to expand into additional commercial markets."

The newly granted patent underscores FOXO Technologies' position as a leader in the convergence of biotechnology and artificial intelligence. It represents a significant milestone in the Company's mission to extend and enhance human life through advanced diagnostics, therapeutic solutions, and lifestyle modifications. Moreover, by combining the fields of epigenetics and artificial intelligence, FOXO Technologies' pioneering approach sets a new standard for personalized healthcare. This patent represents a significant step forward in developing innovative tools that empower individuals and healthcare professionals to make informed decisions about health and well-being.

Nichole Rigby, Director of Bioinformatics & Data Science at FOXO Technologies, further noted, "The granting of these patents reaffirms our commitment to pushing the boundaries to bring together biotechnology and AI. We eagerly anticipate the transformative impact of this technology on health solutions, paving the way for healthier and longer lives for everyone."

About FOXO Technologies Inc. (FOXO)

FOXO, a technology platform company, is a leader in epigenetic biomarker discovery and commercialization focused on commercializing longevity science through products and services that serve multiple industries. FOXO's epigenetic technology applies AI to DNA methylation to identify molecular biomarkers of human health and aging. For more information about FOXO, visit http://www.foxotechnologies.com. For investor information and updates, visit https://foxotechnologies.com/investors/.

Forward-Looking Statements

This press release contains certain forward-looking statements for purposes of the safe harbor provisions under the United States Private Securities Litigation Reform Act of 1995. Any statements other than statements of historical fact contained herein, including statements as to future results of operations and financial position, planned products and services, business strategy and plans, objectives of management for future operations of FOXO, market size and growth opportunities, competitive position and technological and market trends, are forward-looking statements. Such forward-looking statements include, but are not limited to, expectations, hopes, beliefs, intentions, plans, prospects, financial results or strategies regarding FOXO; the future financial condition and performance of FOXO and the products and markets and expected future performance and market opportunities of FOXO. These forward-looking statements generally are identified by the words "anticipate," "believe," "could," "expect," "estimate," "future," "intend," "strategy," "may," "might," "opportunity," "plan," "project," "possible," "potential," "predict," "scales," "representative of," "valuation," "should," "will," "would," "will be," "will continue," "will likely result," and similar expressions, but the absence of these words does not mean that a statement is not forward-looking. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties. Many factors could cause actual future events to differ materially from the forward-looking statements in this press release, including but not limited to: (i) the risk of changes in the competitive and highly regulated industries in which FOXO operates, variations in operating performance across competitors or changes in laws and regulations affecting FOXO's business; (ii) the ability to implement FOXO's business plans, forecasts, and other expectations; (iii) the ability to obtain financing if needed; (iv) the ability to maintain its NYSE American listing; (v) the risk that FOXO has a history of losses and may not achieve or maintain profitability in the future; (vi) potential inability of FOXO to establish or maintain relationships required to advance its goals or to achieve its commercialization and development plans; (vii) the enforceability of FOXO's intellectual property, including its patents and the potential infringement on the intellectual property rights of others; and (viii) the risk of downturns and a changing regulatory landscape in the highly competitive biotechnology industry or in the markets or industries in which FOXO's prospective customers operate. The foregoing list of factors is not exhaustive. Readers should carefully consider the foregoing factors and the other risks and uncertainties discussed in FOXO's most recent reports on Forms 10-K and 10-Q, particularly the "Risk Factors" sections of those reports, and in other documents FOXO has filed, or will file, with the SEC. These filings identify and address other important risks and uncertainties that could cause actual events and results to differ materially from those contained in the forward-looking statements. Forward-looking statements speak only as of the date they are made.
Readers are cautioned not to put undue reliance on forward-looking statements, and FOXO assumes no obligation and does not intend to update or revise these forward-looking statements, whether as a result of new information, future events, or otherwise.

View source version on businesswire.com: https://www.businesswire.com/news/home/20231013459322/en/

Source: FOXO Technologies Inc.

More here:
FOXO Technologies Announces Issue Notification from USPTO for a ... - BioSpace


Shortcomings of Visualizations for Human-in-the-Loop Machine … – Stanford HAI

Because machine learning models are built on data, it makes sense to use data visualization tools to help us interpret how those systems work.

For the last few years, some data visualization researchers have been doing just that, launching a field known as Visualization for Machine Learning, or VIS4ML. The goal: to provide human-in-the-loop domain experts with visualizations that will help them accomplish diverse tasks including designing, training, engineering, interpreting, assessing, and debugging ML models.

But when Hariharan Subramonyam, assistant professor at Stanford Graduate School of Education and a faculty fellow with Stanford's Institute for Human-Centered AI, and his colleague Jessica Hullman of Northwestern University examined 52 recent VIS4ML research publications, they became concerned that researchers are overstating their accomplishments.

For example, Subramonyam says, researchers in this space are not testing VIS4ML tools in ecologically valid ways and are making inappropriately broad claims about their tools' applicability. The team's analysis, which has been accepted for publication at IEEE VIS, is available now on the preprint server arXiv.org.

"The VIS4ML community is trying to solve the problem of making ML models more interpretable," Subramonyam says, "but the way they're doing it has shortcomings."

VIS4ML researchers aspire to keep humans in the ML design loop because that will improve ML model performance, Subramonyam says. It's an admirable goal, but also a difficult challenge. Many ML models are complex black box models that evade insight into their inner workings. It will take brand new data visualization tools to help humans understand what's going on inside those black boxes, he says.

Some VIS4ML researchers have taken a laudable stab at inventing novel data visualization tools that offer a window into some aspects of ML models, Subramonyam says. For example, there are VIS4ML tools for creating a scatter plot that depicts clusters in high-dimensional data, with different colors for each of the categories an ML algorithm finds in a dataset (types of clothing in images, for example), as shown below. This allows an expert to spot items that are mislabeled and re-label them. Other tools might visualize the various layers of a convolutional network in a manner that users can understand, or visualize the nature of various possible features of an ML model so that an expert can make appropriate decisions about which features to include.

When an ML model categorizes thousands of items of clothing from several online shopping sites into 14 types of clothing (T-shirt, shirt, jacket, suit, dress, vest, etc.), it is correct only 61% of the time. In this visualization, the categories are color coded, allowing an expert to easily identify and re-label miscategorized items (red dots in a group of purple dots, for example). "This type of scatter plot relies on a visualization algorithm that is good at showing clusters when they exist, but can also imply structure that doesn't actually exist in the data," Hullman says.
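
For readers who want to see the mechanics, here is a hedged sketch of this kind of scatter plot built with generic tools, using a stand-in dataset rather than clothing images. As Hullman notes, the projection itself can suggest structure that is not really in the data.

```python
# Sketch of a VIS4ML-style labeled scatter plot: project high-dimensional
# items into 2-D and color by label so an expert can spot likely
# mislabels. Uses a generic dataset as a stand-in for clothing images.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
coords = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.figure(figsize=(6, 5))
scatter = plt.scatter(coords[:, 0], coords[:, 1], c=y, cmap="tab10", s=8)
plt.legend(*scatter.legend_elements(), title="label", loc="best", fontsize=7)
plt.title("2-D projection; isolated off-color points are re-label candidates")
plt.show()
```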

While the development of novel VIS4ML tools for aiding human-in-the-loop ML is important work, Subramonyam and Hullman's analysis shows some troubling findings: these tools are too often tested by a small set of experts, often those who were involved in designing the tools in the first place, and they are typically tested on only the most standard popular datasets. "The measure of each tool's usefulness is quite narrow," Subramonyam says.

In addition, only a third of the 52 VIS4ML papers reviewed went beyond asking an expert if a tool seemed useful and actually reported whether using the tool changed the performance of an ML model. Evidence in the other papers depended on hypothetical claims about a visualization tools potential benefits, essentially positing that the tool will improve model performance for any kind of model and dataset.

"These papers make these claims without providing supporting evidence and without acknowledging their limitations and constraints," Subramonyam says.

VIS4ML researchers should curtail the unsupported claims about their tools' generalizability and be more transparent about their limitations, Subramonyam says.

If these researchers want to truly support human-in-the-loop ML, they need to more thoroughly evaluate VIS4ML tools and build a stronger evidence base for any claims of broad applicability. "Researchers need to connect the dots between the new tools and their usefulness in the real world," Subramonyam says. To further that aim, he and Hullman set out some concrete guidelines for transparency in their paper.

In addition, Subramonyam says, there's a need for closer collaboration between the people who are building these visualization solutions and the communities they hope to serve. "Human-centered AI is a multidisciplinary endeavor," he says. "You can't have tunnel vision where you build a visualization solution expecting it's going to work in multiple domains and workflows without actually testing it in those domains and workflows."


See the rest here:
Shortcomings of Visualizations for Human-in-the-Loop Machine ... - Stanford HAI


Multus Bio harnesses AI to optimize cell culture media formulation – AgFunderNews

What's the key to cutting the cost of cell culture media for cultivated meat?

Finding more efficient ways to produce pricey inputs such as growth factors (proteins that encourage cells to grow and differentiate) at scale will be critical, whether that's via genetically engineered microbes, plants, or fruit flies. But that's only part of the challenge, says London-based startup Multus Biotechnology.

"What holds back media development is the inefficiency of optimization, so that's across cost; ingredient quality, potency, stability and sustainability; scalability; and bioprocess productivity [proliferation rate, cell density, differentiation efficiency etc.]," Multus Bio head of business development Dr. Charlie Taylor tells AgFunderNews.

"Optimizing objectives across all of these areas at the same time is the real challenge facing companies in this space. It's not just a case of 'how do we make growth factors cheaper?'"

And when it comes to optimizing media formulations, he says, "Smarter decision-making, more data, and doing more in parallel equals better results, faster. Coupled to cheaper inputs and scale economies, that's the roadmap to low-cost media across the metabolic panoply of cultivated meat cell lines."

"By combining AI and automation, you can address two key choke points in optimal media formulation," claims Taylor: underpowered decision-making tools [for example, what amino acid to change in a protein or what protein to change in a formulation], and underpowered experimentation throughput.

Founded by four students at Imperial College London (Cai Linton, Kevin Pan, Réka Trón, and Brandon Ma), Multus Bio is working with multiple cultivated meat companies on serum-free media development, and using its AI-powered approach to develop more potent and effective growth factors.

According to Taylor: "Multus has three offerings. One is off-the-shelf products, both mixes of ingredients and individual proteins. Two, we do media development for customers. And three, we do manufacturing of serum-free alternatives to FBS (fetal bovine serum), so the capacity we're building [at a new pilot plant in Acton, London] is the world's first food-grade production facility for a serum alternative."

"We're building a pilot scale facility with the capacity to produce enough serum-free FBS alternative for about 200,000 liters of complete media per month. So that's enough for a fairly sizable cultivated meat facility. It's currently being equipped, with operations scheduled to begin during Q4 2023."

He added: "Our USP is about combining data science with high-throughput experimentation to figure out the right formulations to use."

In practical terms, he argued, running all the experiments you would need to figure out an optimal formulation for your cell lines is practically impossible. "Let's say you've got 30 ingredients and you want to know the right concentration of each to make your cells grow optimally? If you looked at every possible combination, that's millions of experiments using conventional approaches."

At Multus Bio, he said, "we're not using what's called Design of Experiments. We're using a type of machine learning called Bayesian optimization, the statistical tools used for the automatic optimization of decisions about ad spend on social media platforms by constantly learning from data about where to spend the next dollar."

"We've built a version of that for the optimization of experiments to explore different media formulations for cells. But we've also bought lab equipment and automation tools and written software to knit together those pieces of equipment with the decision-making engine run with the machine learning application we've created."

"So we set up the experiments, put the cells into the trays, but beyond that, it's a fairly autonomous experimentation platform. It's not relying on a scientist to come in and decide what to do."
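
As a rough illustration of the approach Taylor describes (not Multus Bio's actual system), a Bayesian optimization loop over a handful of ingredient concentrations might look like the following; the growth measurement is a made-up stand-in for a real plate-reader experiment, and the ingredient names are hypothetical.

```python
# Hedged sketch of Bayesian optimization over media concentrations.
# The objective function is a synthetic stand-in for a real experiment.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def measured_growth(conc):
    glucose, glutamine, fgf2 = conc
    # Stand-in for running a real experiment and imaging the cells;
    # gp_minimize minimizes, so growth is returned negated
    return -(np.exp(-(glucose - 2.0) ** 2)
             + np.exp(-(glutamine - 0.5) ** 2)
             + 0.5 * np.exp(-(fgf2 - 0.1) ** 2))

space = [
    Real(0.5, 5.0, name="glucose_g_per_l"),
    Real(0.1, 2.0, name="glutamine_mM"),
    Real(0.01, 0.5, name="fgf2_ug_per_ml"),
]

# Each iteration, a Gaussian-process surrogate picks the next formulation
# to test, balancing exploration against exploitation
result = gp_minimize(measured_growth, space, n_calls=25, random_state=0)
print("best formulation found:", np.round(result.x, 3))
```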

Traditionally, he said, scientists would do some experiments with cells, then stain them in order to look at them under a microscope. "We're taking photographs under a microscope, and then using image processing software we've developed."

"So rather than letting cells grow for two or three days and then doing some staining and taking a photograph, we take photographs every two or three hours because a robotic arm is taking trays of cells out of the incubator, putting them under the microscope and taking a picture."

Image processing allows Multus Bio to look at growth rates, but also morphology, he explained. "Cell biologists often say, 'do my cells look happy, based on their shapes?' And so when we train models for whatever cells we're working with, we'll train them on the right morphology so we can see how happy the cells are as well as their growth."

And because this process happens every two or three hours, "the decision-making engine running the experiments can make better decisions about what to do next," he said.
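
A hedged sketch of this kind of stain-free image analysis, with a synthetic frame standing in for the microscope capture: the fraction of pixels classified as cells gives a crude confluence estimate that can be tracked every few hours.

```python
# Illustrative confluence estimate from a single frame via thresholding.
# A synthetic image stands in for a real phase-contrast capture.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
frame = rng.normal(0.2, 0.05, (512, 512))   # background noise
frame[100:300, 150:400] += 0.5              # brighter "cell" region

mask = frame > threshold_otsu(frame)        # pixels classified as cells
confluence = mask.mean()
print(f"estimated confluence: {confluence:.1%}")
```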

"Ultimately, it means we're able to make experimental progress more quickly and we can solve for quite complex objectives. So for example: make it cheaper but improve my growth rate at the same time, while moving from pharma-grade ingredients to food-grade ingredients. And by the way, try using these complex plant-based ingredients rather than just the chemically defined ones."

Multus Bio, which raised $9.5m in a Series A led by Mandi Ventures in January, including a $2.5m grant from Innovate UK through the EU's EIC Accelerator, has a partnership with fellow British startup Extracellular, with whom it has launched a license-free cultivated meat cell bank, noted Taylor.

"In-house we're working with a wide variety of cells. But beyond that, we're working with customers who have provided us with their bovine cells because they want media formulations for those, and we're also in conversations with companies who want to send us porcine cells and quite a few companies working with fish cells."

"We've just bought a new microscope coupled to its own incubator so we can work at different temperatures [seafood cells typically grow at lower temperatures than mammalian cells]."

So what has Multus Bio learned so far about optimizing media formulations?

"If you think about media, one part of it is nutrients like amino acids, sugar, and salts [so-called basal media]," said Taylor. "The other part is the elixir, whether it's an FBS equivalent or a platelet lysate or whatever. But we don't just have to work at bringing down the cost of growth factors [such as FGF, IGF etc.], we also have to bring down the cost of amino acids and things like albumin [a protein found in blood plasma commonly used as a supplement to cell culture media]."

"We need to use side streams from plant processing, so for example, the husk from processing rice, and get the amino acids from that, rather than using precision fermentation to make these ingredients one by one."

"Replicating the functionality of albumin from plant-based sources is something we currently have in development, although we're not there yet."

Multus Bio, which now has a team of 19, is also working on developing its own, more effective growth factors with greater potency and a longer half-life, he said. "Protein AI is a rapidly expanding field, and we use a generative AI model to help us produce alternative amino acid sequences. We use models to do protein structure prediction and stability assessments."

"We use a bunch of different models ultimately to generate a shortlist of candidate sequences that we think will have the right structure and will have more advantaged thermostability and will be soluble. And then we can move into in vitro validation. Do they express well? Are they soluble? Do they have the right functionality and activity?"

"Based on the results of that, we can go back and loop through the process again, but it shortens the cycle time."
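
As a simplified illustration of that shortlist-then-validate loop (with invented scores and thresholds, not Multus Bio's actual models), candidates might be filtered on predicted stability and solubility before any wet-lab work:

```python
# Hedged sketch: cull generated candidate sequences on predicted
# properties before expensive in vitro validation. The scores here are
# hypothetical placeholders, not outputs of real predictors.
from dataclasses import dataclass

@dataclass
class Candidate:
    sequence: str
    predicted_tm_c: float        # e.g., from a stability predictor
    predicted_solubility: float  # e.g., 0..1 from a solubility model

candidates = [
    Candidate("MKTAYIAKQR...", predicted_tm_c=72.0, predicted_solubility=0.81),
    Candidate("MKTAYIGKQR...", predicted_tm_c=55.0, predicted_solubility=0.90),
    Candidate("MRTAYIAKQW...", predicted_tm_c=78.0, predicted_solubility=0.42),
]

# Keep only candidates that clear both thresholds; failures feed back
# into the next generation-and-scoring cycle
shortlist = [c for c in candidates
             if c.predicted_tm_c >= 65.0 and c.predicted_solubility >= 0.6]
print([c.sequence for c in shortlist])
```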

If it finds a candidate with potential, however, Multus Bio would not aim to manufacture it at scale in-house, said Taylor: "We do some small-scale work to validate candidate products, but we have a manufacturing partner on precision fermentation to benefit from the reduced production costs of its highly efficient, commercial scale reactors."

So how does Multus Bio make money right now?

According to Taylor: "We sell media and ingredients. We have our first formulation that we're selling to cultivated meat companies. We have three protein ingredients that we have customers for, and we're also working at an early stage in medical life sciences."

Further reading:

AgFunder leads $5m seed round for ML-powered Atinary Technologies to turbocharge R&D process

Bytes to Bites part one: Digitizing consumption insights. Leveraging AI in food product development

Bytes to Bites part two: From farm to fork: Leveraging AI in the food supply chain

Here is the original post:
Multus Bio harnesses AI to optimize cell culture media formulation - AgFunderNews


Does AI and machine learning further entrench gender inequity? – Women Love Tech

Tomorrow, I will be emceeing an incredible panel of women at SXSW Sydney. They are Dr Catriona Wallace from the Responsible Metaverse Alliance and author of Checkmate Humanity, Tracey Spicer, author of Man-Made, and Shivani Gopal, CEO and Founder of Elladex.

Our topic is "Does Machine Learning and AI further entrench gender inequity for future generations of women?"

Here's a link to our panel: https://sxswsydney.com/session/does-machine-learning-and-ai-further-entrench-gender-inequity-for-future-generations-of-women/

If you have any questions you want me to ask this panel of experts, you can email us on editor@womenlovetech.com.

Our panel promises to be a lively debate. Please join us at the ICC in Sydney at 12.30pm on Tuesday, October 17.

We will also be including a video from Stela Solar, Director of the National Artificial Intelligence Centre at the CSIRO, and introducing the idea of being a "trust architect" for AI from Zachary Zeus, CEO of Pyx Global. You can find out more about that role here.

Originally posted here:
Does AI and machine learning further entrench gender inequity? - Women Love Tech


The Future of AI in Hybrid: Challenges & Opportunities – TechFunnel

Over the past several weeks, the flurry of new generative AI products and capabilities, from ChatGPT to Bard and numerous variations from others built around large language models (LLMs), has created an excessive hype cycle. However, many argue that these generalized models are unsuitable for enterprise use. Most AI engines show signs of struggling when assigned niche or domain-specific tasks. Could hybrid AI be the answer?

Hybrid AI is the expansion or enhancement of AI models using machine learning, deep learning, and neural networks alongside human subject matter expertise to develop use-case-specific AI models with the greatest accuracy or potential for prediction.

The rise of hybrid AI tackles many significant and legitimate concerns. In numerous scenarios or domains, AI models built on large datasets alone are not enough to deliver maximum benefit or real value. For example, consider ChatGPT being asked to write a long and detailed economic report.

Adopting or enhancing the model with domain-specific knowledge can be the most effective way to reach a high forecasting probability. Hybrid AI combines the best aspects of neural networks (patterns and connection formers) and symbolic AI (fact and data derivers) to achieve this.

Today's LLMs have several flaws, including inadequate performance on mathematical tasks, a propensity to invent data, and a failure to articulate how the model yields results. All of these issues are typical of connectionist neural networks, which depend on notions of how the human brain operates.


Classical AI is also referred to as symbolic AI. It attempts to plainly express human knowledge in a declarative form, such as rules and facts interpreted from symbol inputs. It is a branch of AI that attempts to connect facts and events using logical rules.

From the mid-1950s to the end of the 1980s, the study of symbolic AI saw considerable activity.

In the 1960s and 1970s, technological advances inspired researchers to investigate the relationship between machines and nature. They believed that symbolic techniques would eventually result in an intelligent machine, which was viewed as their discipline's long-term objective.

In this context, John Haugeland coined "good old-fashioned artificial intelligence," or GOFAI, in his 1985 book Artificial Intelligence: The Very Idea.

The GOFAI method is best suited for inert issues and is far from a natural match for real-time dynamic problems. It favors a restricted definition of intellect as abstract reasoning, whereas artificial neural networks prioritize pattern recognition. Consequently, the latter connectionist or non-symbolic method has gained prominence recently.

The genesis of non-symbolic artificial intelligence is the attempt to simulate the human brain and its elaborate web of neural connections.

To discover solutions to issues, non-symbolic AI systems refrain from manipulating a symbolic representation. Instead, they conduct calculations based on principles that have been empirically proven to solve problems without first understanding precisely how to arrive at a solution.

Neural networks and deep learning are two examples of non-symbolic AI. Non-symbolic AI is also known as connectionist AI; several present-day artificial intelligence applications are based on this methodology, including Google's automated translation engine (which searches for patterns) and Facebook's face recognition program.

In the context of hybrid artificial intelligence, symbolic AI serves as a supplier to non-symbolic AI, which handles the actual task. From this vantage point, symbolic AI offers pertinent training data to the non-symbolic AI. In turn, the information conveyed by the symbolic AI is powered by human beings, i.e., industry veterans, subject matter experts, skilled workers, and those with unencoded tribal knowledge.

Web searches are a popular use of hybrid AI. If a user inputs "1 GBP to USD," the search engine detects a currency conversion challenge (symbolic AI). It uses a widget to perform the conversion before employing machine learning to retrieve, position, and exhibit web results (non-symbolic AI). This is a fundamental example, but it does illustrate how hybrid AI would work if applied to more complex problems.
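
A toy sketch of that routing logic, with a placeholder exchange rate and a stubbed-out ranker standing in for the machine learning side:

```python
# Hybrid routing sketch: a symbolic rule recognizes the conversion
# intent and answers exactly; everything else falls through to a
# learned ranker (stubbed here). Rates and ranker are placeholders.
import re

RATES = {("GBP", "USD"): 1.22}  # placeholder exchange rate

def rank_web_results(query: str) -> str:
    # Stand-in for an ML retrieval-and-ranking pipeline
    return f"[ranked web results for: {query!r}]"

def handle_query(query: str) -> str:
    m = re.fullmatch(r"([\d.]+)\s*([A-Z]{3}) to ([A-Z]{3})", query.strip())
    if m and (m.group(2), m.group(3)) in RATES:
        amount = float(m.group(1))
        rate = RATES[(m.group(2), m.group(3))]
        return f"{amount * rate:.2f} {m.group(3)}"   # symbolic path
    return rank_web_results(query)                   # non-symbolic path

print(handle_query("1 GBP to USD"))    # -> 1.22 USD
print(handle_query("history of GBP"))  # -> ranked results
```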

According to David Cox, director of the MIT-IBM Watson AI Lab, deep learning and neural networks thrive amid the messiness of the world, while symbolic AI does not. As previously mentioned, however, both neural networks and deep learning have limitations. In addition, they are susceptible to hostile instances, dubbed adversarial data, which may influence the behavior of an AI model in unpredictable and possibly damaging ways.

However, when combined, symbolic AI and neural networks can establish a solid foundation for enterprise AI development.

Business problems with insufficient data for training an extensive neural network, or where standard machine learning can't deal with all the extreme cases, are the perfect candidates for implementing hybrid AI. Hybrid AI may also be helpful when a neural network solution could cause discrimination, lack full disclosure, or raise overfitting-related concerns (i.e., training on so much data that the AI struggles in real-world scenarios).

A prime instance is an AI initiative by Fast Data Science, an AI consulting firm. The objective is to assess the potential hazards of a clinical trial.

The user sends a PDF document detailing the plan for conducting a clinical trial to the platform. A machine learning model can identify vital trial characteristics like location, duration, subject number, and statistical variables. The machine learning model's output will be incorporated into a manually crafted risk model. This symbolic model converts these parameters into a risk value, which then appears as a traffic light signaling high, medium, or low risk to the user.

Human intelligence is essential to specify a reasonable and logical rule for converting protocol data into a risk value.
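
A minimal sketch of what such a hand-written symbolic layer could look like, with invented thresholds; in practice the inputs would come from the machine learning extraction step described above.

```python
# Sketch of the symbolic half of the clinical-trial example: extracted
# parameters map to a traffic-light rating via hand-written rules.
# Thresholds are invented for illustration.
def trial_risk(num_subjects: int, duration_months: int, num_sites: int) -> str:
    score = 0
    score += 2 if num_subjects > 1000 else (1 if num_subjects > 200 else 0)
    score += 2 if duration_months > 36 else (1 if duration_months > 12 else 0)
    score += 1 if num_sites > 10 else 0
    if score >= 4:
        return "red (high risk)"
    if score >= 2:
        return "amber (medium risk)"
    return "green (low risk)"

# Parameters below would come from the ML extraction step in practice
print(trial_risk(num_subjects=1500, duration_months=48, num_sites=25))
```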

A second illustration is Googles search engine. It is a sophisticated, all-encompassing AI system composed of revolutionary deep learning tools like transformers and symbol manipulation mechanisms like the knowledge graph.

No technique or combination of techniques resolves every problem equally well; therefore, it is necessary to understand their capabilities and limitations. Hybrid AI is not a magic bullet, and both symbolic and non-symbolic AI will continue to be powerful technologies in their own right. The fact that expert understanding and context from everyday life are seldom machine-readable is another impediment. Coding human expertise into AI training datasets presents another issue.

Most organizations fail to fully recognize the cognitive, computational, carbon output, and financial barriers that arise from placing the complex jumble of our lived worlds into a context that AI can comprehend. Therefore, the timeline for AI implementation in any meaningful way may take much longer than expected.

AI initiatives are notoriously problematic; only 1 in 10 pilots and prototypes lead to significant results in production.

Progressive businesses are already aware of the limits of single-mode AI models. They are acutely aware of the need for technology to be versatile, capable of delving deeper into stored data, less expensive, and far easier to use.

Hybrid AI provides solutions to some of these problems, though not all. Since it integrates symbolic AI and ML, it can efficiently use the advantages of each approach while staying explainable, which is vital for industries like finance and healthcare.

ML may focus on specific elements of a problem where explainability doesn't matter, whereas symbolic AI will arrive at decisions using a transparent and readily understandable pathway. The hybrid approach to AI will only become increasingly prevalent as the years go by.

Read the original post:
The Future of AI in Hybrid: Challenges & Opportunities - TechFunnel


Kingfisher introduces Athena to boost testing and learning with AI … – Retail Technology Innovation Hub

TCS OmniStore

Kingfisher is using TCS OmniStore, an AI powered unified commerce platform from Tata Consultancy Services.

The company operates a chain of over 1,900 stores in eight countries across Europe under its retail banners including B&Q, Castorama, Brico Dépôt, Screwfix, TradePoint and Koçtaş.

It was looking to upgrade to a multilingual commerce platform that delivers a unified brand experience. In addition, it wanted to address legal, fiscal, and operational differences across all its European banners.

TCS OmniStore has enabled Kingfisher to deliver a range of capabilities such as Click and Collect services, scan and go options, mobile apps, save the cart, and self-checkout facilities along with dynamic promotion capabilities and clienteling.

In addition, the platform supports payment options such as contactless, Apple Pay, Apple wallet, and pay as you go.

Its API architecture is built around a centralised core base that allows localisation across different regions.
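
The pattern is easy to picture as a shared core configuration with per-banner overrides; the sketch below is a hypothetical illustration and does not reflect TCS OmniStore's actual schema.

```python
# Hedged illustration of a centralised-core-plus-local-overrides
# pattern. Keys and values are invented for illustration only.
CORE_CONFIG = {
    "currency": "GBP",
    "language": "en-GB",
    "payments": ["card", "contactless", "apple_pay"],
    "receipt_footer": "Thank you for shopping with us",
}

REGION_OVERRIDES = {
    "castorama_fr": {
        "currency": "EUR",
        "language": "fr-FR",
        "receipt_footer": "Merci de votre visite",  # fiscal/legal text differs
    },
}

def config_for(banner: str) -> dict:
    # Regional values win; everything else inherits from the shared core
    return {**CORE_CONFIG, **REGION_OVERRIDES.get(banner, {})}

print(config_for("castorama_fr")["currency"])  # EUR
```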

Kingfisher has implemented TCS OmniStore across two banners, B&Q in the UK and Ireland and Castorama in France, with the third coming later this year.

It says that it is benefiting from greater associate productivity, increased revenue, faster checkout, and broader sales opportunities; it was also able to execute promotions better based on data insights.

"TCS OmniStore was the strategic choice for Kingfisher's future growth, orchestrating a fast, smooth, and seamless checkout experience, which is needed for today's customers," says Peter Ash, Product Director, Operations and Fulfilment, Kingfisher.

"Our self-checkout systems have allowed us to be more efficient on the front end. It's simple and our customers love it. They're easy to use. But it's also allowed us to bring colleagues further into the store. I'm really excited about the future. And I'm really excited about what OmniStore can bring with our current systems stack."

"We are delighted to be a strategic partner to Kingfisher in its transformation journey to reimagine the end customer experience and offer a unified experience across its brands in Europe. The platform is enabling seamless omnichannel shopping experiences, enhancing their competitive differentiation, and driving growth," says Shekar Krishnan, Head, Retail & CPG UK and Europe, TCS.

Excerpt from:
Kingfisher introduces Athena to boost testing and learning with AI ... - Retail Technology Innovation Hub
