Category Archives: Machine Learning
How to Budget for Generative AI in 2024 and 2025 – InformationWeek
Generative AI is the hot topic on everyone's mind. The possibilities seem endless, and thousands of companies, products, models, and platforms have flooded the market to monetize those possibilities. Big players like OpenAI, Microsoft, Meta, Google, and Amazon, along with many other large companies and myriad startups, are vying for market share in the GenAI space.
Real-life use cases are emerging across multiple industries. Wendy's is using Google Cloud's GenAI at its drive-throughs. Coca-Cola released an AI platform created by OpenAI and Bain & Company as part of its marketing function. Companies are using GenAI for customer chat applications, to assist with coding, to help write emails, and more.
Right now, Lee Moore, VP of Google Cloud Consulting, sees enterprises on a spectrum when it comes to GenAI. "At one end of the spectrum, you've got customers who are almost paralyzed, whether it's their internal processes, their concerns around data privacy, or other things that you read about," he explains.
In the middle of the spectrum, enterprises are experimenting with GenAI but not yet investing a significant amount. And then, there are enterprises that are seriously investing in real-life use cases, according to Moore.
Regardless of where an enterprise is in its approach to GenAI, there seems to be no question that the technology is here to stay. But how can enterprise leaders work through the hype cycle to set a budget for this technology and select the right solutions for their organizations?
GenAI promises a lot: improved efficiencies, cost savings, happier employees and customers. But how can the technology deliver those results for your enterprise? Before making a significant investment in any technology, enterprise leaders need to understand the business case.
"As C-suite leaders and boards talk about GenAI, they can ask questions about what their priorities are from a cost-saving, revenue-driving, reduction of risk, better customer experience [perspective], and what the right stack is to help serve their enterprise outcomes," says Ryan Martin, partner and AI global lead at Infosys Consulting, a management and IT consulting company.
Being able to have that discussion requires an investment in learning. GenAI technology is evolving so quickly that enterprise leaders must constantly learn so they don't fall behind. "In my team, I have allocated 10% of everybody's time this year to learning because of the speed at which this is going," Moore shares.
As part of that learning process, Moore's team will participate in hackathons or competitions. "The team, in a couple of days, came up with the solution that would pretty much automate the generation of our services contracts with our customers," he shares. "We get to 80-odd percent completion in minutes. And then the human in the loop gets in there to finish off the rest of the contracting."
Business function leaders can explore how GenAI can solve challenges for their teams.
Demonstrating how the technology can solve a specific problem can create a foundation for an initial investment that can be scaled. "It always starts with defining your problem statement and then looking at what are the host of tools you have to solve that," Prasad Ramakrishnan, CIO at Freshworks, a cloud-based SaaS company, tells InformationWeek.
With a business case in mind, enterprise leaders need to sift through a sea of options to find the right GenAI tools and/or partners.
"There is a massive proliferation of startups [with] various niche capabilities, and some of them are just out there trying to win budgets and take advantage of the hype, while others have been built on research," says Erik Brown, senior partner, AI and product engineering at digital services firm West Monroe Partners.
Just how much the GenAI market will change in the coming years is hard to predict. With so many companies offering solutions, it is unlikely that all of them will become long-term players in the space. "You can't even keep track of how many are popping up on a regular basis. And my suspicion is within six months half [of] them won't be there anymore," says Ryan Worobel, CIO at LogicMonitor, a cloud-based infrastructure monitoring platform.
Considering a company's funding position and its likelihood of longevity can help narrow down the options. Enterprises can also examine what their existing technology partners are doing with GenAI to see if there is a potential fit. "It really is [about] figuring out who are the vendors you trust that you're working with, and the vendors you have who you truly consider a partner in the process, that wouldn't jeopardize their own relationship with the company just to start throwing capabilities out there," Worobel explains.
While many enterprises are in the experimentation phase, it is important to consider the big picture. AI is a broad category of which GenAI is a piece. How does a potential tool fit in an enterprises overall tech stack?
"I would say a potential red flag is one [a tool] that's maybe too myopic in its focus; [it] needs to be able to integrate with other AI technologies in the stack," says Martin.
Moore argues for a platform approach. "You don't want to get locked into any particular model or tool because there are now hundreds if not thousands of them," he says.
Choosing a GenAI solution also involves an element of risk management. Martin points out that enterprises can learn lessons from technologies that have emerged and generated massive hype and demand in the past.
"If you don't [have] guardrails and some level of governance within your own organization, you're going to end up with thousands of one-off, new implementations that add risk to the organization, add overhead in terms of management, add pain that I think can be avoided if you do it intelligently and with the right governance structure," he cautions.
How much an enterprise spends on GenAI this year, next year, and beyond will depend on a multitude of factors: its industry, its size, its maturity, its target use cases. "We've seen success in integrating with an existing AI model at an investment of $150,000 to $200,000. We've also built highly complex custom models into products that take [an] investment of $1 million to $2 million," says Brown.
Where do enterprises want to put their dollars toward GenAI? For some, it might make sense to focus on external partnerships and solutions. For others, dollars might be spent on internal R&D. Many enterprises will be budgeting for both.
"It's going to be far more predictable to think about how you set a blanket budget for the use of licensed, embedded AI tools and enterprise software like Microsoft Office," says Brown. He expects that budgeting for building GenAI and other forms of AI into custom internal products and workflows will likely be the bigger investment. "But I think that's where the most compelling opportunity is going to be moving forward," he contends.
Organizations can approach setting a budget for GenAI in different ways. Worobel shares that his team is taking lessons from the advent of cloud technology. They set aside a percentage of the IT budget into an innovation bucket, which they dipped into to test cloud. Now, Worobel and his team are looking at an innovation bucket for GenAI.
"We just put a bucket of dollars kind of on the side in the overall AOP [annual operating plan]," Worobel shares. "We don't really have [it] earmarked for anything yet, and what we're doing as we come upon different ideas and look at different components and capabilities, we'll make decisions on whether or not we fund it."
Choosing what to invest in goes back to the business use case. What will a particular solution deliver in terms of increased productivity or efficiency? Moore recommends targeting a specific improvement and then deciding what piece of the budget is required to achieve it.
"I would be asking my team to come back to me with a plan to deliver a certain percentage improvement in efficiency and output," he outlines. "Let's say, 5% or 10% in the first year, with an expectation it would grow beyond that, and to give me the budget request to achieve that."
Enterprise leaders want to know that GenAI is generating returns. Tracking its ROI does not have to be radically different than doing so for any other technology investment. Select KPIs and monitor them. Is that tool improving, for example, coder productivity? Is it increasing customer conversion rates?
Starting with a small pilot project can help enterprise leaders prove the use case for a GenAI solution and then scale. "If you isolate the people using the tools to start and you set them on kind of clear goals of how to use the tools, you can measure productivity pretty quickly and then figure out how to expand that beyond that small pilot group," says Brown.
Enterprise leaders spearheading a GenAI use case test also need to track utilization of the tool in question. "You need to track app deployments, app utilization, and [if they are] using all the features," says Ramakrishnan. If users aren't adopting the tool or are only using a fraction of the features the enterprise is paying for, dollars are being wasted.
Budgeting for GenAI is going to be a learning experience. Enterprise leaders are going to need to make adjustments as they learn which external partners and tools and internal development initiatives drive progress toward their business goals.
In anticipation of the rapid pace at which GenAI is evolving, organizations can leave space in their budgets to adapt as changes arise.
"I try to keep somewhere between 15% and 20% of my budget unallocated as I enter the year, which can be tough to do in tight budget environments," Worobel shares. "But as long as you have that close partnership with your CFO, you can have an agreement that we're not going to needlessly spend it, and if we don't need it, we'll give it back."
Enterprise leaders can also track how GenAI generates cost savings and how those cost savings can potentially impact the overall budget. Ramakrishnan emphasizes the importance of app rationalization as a potential avenue to cost savings. How can a new GenAI tool streamline a business function and pave the way for retiring now-unnecessary tools?
This year, for many enterprises, is a year of experimentation. But GenAI is poised to soon move beyond the hype cycle, bringing more concrete, real-life use cases to bear.
"I think this year is all about actually justifying the spend [on] enterprise supplementary tools, embedded AI tools, [whereas] the investment in engineering around data science and artificial intelligence for the general enterprise is going to grow more significantly going into next year," says Brown.
While each enterprise has to lay out its own path for adopting a new technology, GenAI will not be ignored.
"I think the lesson from the dot-com boom 20 years ago [is that] people didn't fully believe in the possibilities. Here, the possibilities are going to become real very, very quickly," says Moore.
The role of machine learning and computer vision in Imageomics – The Ohio State University News
A new field promises to usher in an era of using machine learning and computer vision to tackle small- and large-scale questions about the biology of organisms around the globe.
The field of imageomics aims to help explore fundamental questions about biological processes on Earth by combining images of living organisms with computer-enabled analysis and discovery.
Wei-Lun Chao, an investigator at The Ohio State University's Imageomics Institute and a distinguished assistant professor of engineering inclusive excellence in computer science and engineering at Ohio State, gave an in-depth presentation about the latest research advances in the field last month at the annual meeting of the American Association for the Advancement of Science.
Chao and two other presenters described how imageomics could transform society's understanding of the biological and ecological world by turning research questions into computable problems. Chao's presentation focused on imageomics' potential applications for micro- to macro-level problems.
"Nowadays we have many rapid advances in machine learning and computer vision techniques," said Chao. "If we use them appropriately, they could really help scientists solve critical but laborious problems."
While some research problems might take years or decades to solve manually, imageomics researchers suggest that with the aid of machine learning and computer vision techniques, such as pattern recognition and multi-modal alignment, the rate and efficiency of next-generation scientific discoveries could increase exponentially.
"If we can incorporate the biological knowledge that people have collected over decades and centuries into machine learning techniques, we can help improve their capabilities in terms of interpretability and scientific discovery," said Chao.
One of the ways Chao and his colleagues are working toward this goal is by creating foundation models in imageomics that will leverage data from all kinds of sources to enable various tasks. Another way is to develop machine learning models capable of identifying and even discovering traits to make it easier for computers to recognize and classify objects in images, which is what Chao's team did.
"Traditional methods for image classification with trait detection require a huge amount of human annotation, but our method doesn't," said Chao. "We were inspired to develop our algorithm through how biologists and ecologists look for traits to differentiate various species of biological organisms."
Conventional machine learning-based image classifiers have achieved a great level of accuracy by analyzing an image as a whole and then labeling it with an object category. However, Chao's team takes a more proactive approach: Their method teaches the algorithm to actively look for traits in an image, like colors and patterns, that are specific to an object's class, such as its animal species, while the image is being analyzed.
This way, imageomics can offer biologists a much more detailed account of what is and isn't revealed in an image, paving the way to quicker and more accurate visual analysis. Most excitingly, Chao said, the method was shown to handle recognition tasks for species that are very challenging to identify at a fine-grained level, like butterfly mimics, whose appearance is characterized by fine detail and variety in wing patterns and coloring.
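The contrast can be made concrete with a deliberately simplified sketch. This is not the team's actual method, and the species and trait names below are hypothetical; it only illustrates why classifying from detected traits yields a decision with legible evidence, which matters for look-alikes such as butterfly mimics:

```python
# Illustrative toy only (not Chao's algorithm): a whole-image classifier
# maps pixels straight to a label, while a trait-based classifier first
# detects class-specific traits and then decides from which traits are
# present, so the decision comes with its evidence attached.

TRAITS_BY_SPECIES = {  # hypothetical trait inventories
    "monarch": {"orange wings", "black veins", "white spots"},
    "viceroy": {"orange wings", "black veins", "hindwing bar"},
}

def classify_by_traits(detected: set) -> tuple:
    """Pick the species whose trait set best matches the detected traits."""
    best = max(TRAITS_BY_SPECIES,
               key=lambda s: len(TRAITS_BY_SPECIES[s] & detected))
    evidence = sorted(TRAITS_BY_SPECIES[best] & detected)
    return best, evidence

species, evidence = classify_by_traits(
    {"orange wings", "black veins", "hindwing bar"})
print(species, evidence)
```

A whole-image classifier would return only the label; here the matched traits double as the explanation.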
The ease with which the algorithm can be used could potentially also allow imageomics to be integrated into a variety of other diverse purposes, ranging from climate to material science research, he said.
Chao said that one of the most challenging parts of fostering imageomics research is integrating different parts of scientific culture to collect enough data and form novel scientific hypotheses from them.
"It's one of the reasons why collaboration between different types of scientists and disciplines is such an integral part of the field," he said. Imageomics research will continue to evolve, but for now, Chao is enthusiastic about its potential to allow the natural world to be seen and understood in brand-new, interdisciplinary ways.
"What we really want is for AI to have strong integration with scientific knowledge, and I would say imageomics is a great starting point towards that," he said.
Chao's AAAS presentation, titled "An Imageomics Perspective of Machine Learning and Computer Vision: Micro to Global," was part of the session "Imageomics: Powering Machine Learning for Understanding Biological Traits."
CQG Unveils New First-of-its-Kind AI / Machine Learning Trading Toolkit for Predicting Futures Market Moves – PR Newswire
Success in Live Trading Environment Confirms Internal Test Results
DENVER and BOCA RATON, Fla., March 11, 2024 /PRNewswire/ -- CQG, a leading global provider of high-performance technology solutions for market makers, traders, brokers, commercial hedgers and exchanges, today announced completion of internal testing and proof-of-concept using live data on what the firm believes to be a first-of-its-kind artificial intelligence (AI) predictive model for traders. Following extensive machine learning (ML) training in a back-testing environment, the firm just started applying the technology to live data, with an extremely high level of predictive success in anticipating futures market moves. CQG made the announcement on the first full day of FIA Boca, the International Futures Industry Conference.
Based on the firm's deep experience in analytics, mathematics and market intelligence, the new ML initiative aims to offer retail traders and buy-side firms, including proprietary trading firms and hedge funds, unprecedented tools for identifying new trading and analytics opportunities, guiding trading strategies, and managing their positions. CQG has been exploring the field of AI for the past year in the context of solving for its clients' challenges, testing the technology in a state-of-the-art multi-platform lab. Last week, for the first time, the company tested its next-generation machine learning toolkit in a live trading environment and achieved 80% predictive accuracy, matching the results attained in the back-testing environment.
CQG CEO Ryan Moroney said: "In early 2023, we decided we wanted to do something different in machine learning and AI that leveraged our unique position in the market, building off our comprehensive database of historical trade data and analytics in a way that could help our clients and prospects analyze, predict and trade markets through a new lens. We built a lab, and Kevin Darby our Vice President of Execution Technologies has done an extraordinary job of turning that effort into an exciting reality with results that have significantly surpassed our expectations."
Darby said: "We first had to solve multiple real-world challenges, such as storing and curating terabytes of historical market data while retaining the ability to make decisions in microseconds in real-time environments. We built bridges between the current ML infrastructure, based on the Python language, and the reliance of the financial industry infrastructure on C++. We also needed to recast the traditional ML training pipeline to optimize for generative time series prediction to estimate conditional probability distributions in a mathematically satisfying and stable way."
He said the firm's AI in a live environment was consistently able to predict with 80% accuracy whether the next movement in the E-mini S&P 500 futures contract would be up, down or unchanged.
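CQG has not published details of its model. As a purely illustrative sketch of what "estimating a conditional probability distribution" over up, down, and unchanged means (the mechanism Darby mentions, reduced to a toy frequency table; nothing here reflects CQG's actual approach):

```python
# Toy illustration only: estimate P(next move | last k moves) from
# historical price changes using a frequency table. This just shows
# what a conditional distribution over {up, down, unchanged} is;
# CQG's production model is not public.
from collections import Counter, defaultdict

def label(delta):
    return "up" if delta > 0 else "down" if delta < 0 else "unchanged"

def fit(prices, k=2):
    """Count next-move outcomes conditioned on the last k moves."""
    moves = [label(b - a) for a, b in zip(prices, prices[1:])]
    table = defaultdict(Counter)
    for i in range(k, len(moves)):
        table[tuple(moves[i - k:i])][moves[i]] += 1
    return table

def predict(table, context):
    """Return the conditional distribution for a context of recent moves."""
    counts = table[tuple(context)]
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()} if total else {}

prices = [100, 101, 102, 101, 102, 103, 102, 103, 104, 103]
table = fit(prices, k=2)
print(predict(table, ["up", "up"]))
```

A real system would replace the frequency table with a learned model and condition on far richer features, but the output object, a distribution over the next move, is the same shape.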
Moroney said CQG has already identified multiple uses related to algorithms (algos), charting and research and is starting to explore other applications with key partners.
He said: "What we've built is portable. We can give a firm a set of encrypted files, and they can see how our technology predicts moves in liquid futures contracts with a high rate of accuracy. They will be able to use our ML lab, apply cloud computing resources and create their own models, either leveraging our models as foundational or making their own from scratch using our historical data and ML toolkit. They can then use CQG for charting and trading with those models. We have extremely smart, creative clients. This is a truly innovative breakthrough, and we're looking forward to collaborating with them on the potential uses we haven't even considered yet."
Moroney said: "For the past 40 years CQG has built sophisticated, intuitive tools for customers to better visualize and analyze market data to make smarter trading decisions. We view our new ML offering as the next breakthrough for mission-critical trading tools delivered by CQG."
About CQG
CQG provides the industry's highest performing solutions for traders, brokers, commercial hedgers and exchanges for their market-related activities globally, including trading, market data, advanced technical analysis, risk management, and account administration. The firm partners with the vast majority of futures brokerage and clearing firms and provides Direct Market Access (DMA) to more than 45 exchanges through its global network of co-located Hosted Exchange Gateways. CQG technology serves as the front end for a variety of exchanges and is increasingly employed as the over-the-counter matching engine for important new markets. CQG's server-side order management tools for spreading, market aggregation, and smart orders are unsurpassed for speed and ease of use. Its market data feed consolidates 85 sources, including exchanges worldwide for futures, options, fixed income, foreign exchange, and equities, as well as data on debt securities, industry reports, and financial indices. One of the longest-serving technology solutions providers in the industry, CQG has won numerous awards for its trading software, technical analysis and multi-asset trading platform. CQG is headquartered in Denver, with sales and support offices and data centers in key markets globally, providing services in more than 60 countries. For more information, visit http://www.cqg.com.
SOURCE CQG
Orange CTO: Now’s not the time to invest in Nvidia GPUs for AI – Light Reading
Bruno Zerbib is not your traditional Orange board member. His colleagues past and present have often spent their whole careers at the French telecom incumbent, even joining it immediately after they attended one of the grandes écoles, France's higher-education Ligue 1. The man now wearing the mantle of chief technology and innovation officer had not been employed by Orange in a significant role before June last year. Zerbib's resume shows he has spent most of this century living and working in California for some of the world's best-known Silicon Valley firms. His accent carries undertones of America.
This atypical profile partly explains his appointment, according to one company source. US Internet giants and the technologies they have unleashed are quickly reshaping Orange and other telcos, just as those same telcos previously reshaped economies through mobile and data connectivity. A CTIO with experience on the other side of the fence both geographically and industrially may look invaluable as outside technological forces including generative artificial intelligence (GenAI), accelerated computing and the public cloud pose unanswered questions for telcos.
It is AI in its various forms that has, unsurprisingly, become one of the big priorities for Zerbib amid a lack of telecom-industry consensus on the right next steps. Germany's Deutsche Telekom, the biggest service provider in Europe, thinks operators should build their own large language models (LLMs). Vodafone does not. Japan's Softbank is investing in Nvidia's graphical processing units (GPUs), the versatile chips used in hyperscaler facilities for LLM training. Many are wary. The phenomenon of "inferencing," the rollout of AI applications once LLMs are fully trained, could bring revenue opportunities for telcos at the network "edge," some hope. Others with an eye on recent history are skeptical.
The camp of the Nvidia skeptics
When it comes to Nvidia, Zerbib clearly belongs in the camp of the skeptics. "This is a very weird moment in time where power is very expensive, natural resources are scarce and GPUs are extremely expensive," he told Light Reading during an interview at this year's Mobile World Congress, held last month in Barcelona. "You have to be very careful with our investment because you might buy a GPU product from a famous company right now that has a monopolistic position." Low-cost alternatives are coming, said Zerbib.
He is not the only telco individual right now who sounds worried about this Nvidia monopoly. The giant US chipmaker's share price has doubled since mid-October, valuing Nvidia at nearly $2.2 trillion on the Nasdaq. But a 6% drop on March 8, sparked by wider economic concerns, illustrates its volatility. Its gross margin, meanwhile, has rocketed by 10 percentage points in just two years. The 77% figure it reported last year screams monopoly. And Nvidia controls not just the GPUs but also the Infiniband technology that connects clusters of them in data centers. Howard Watson, who does Zerbib's job at the UK's BT, wants to see Ethernet grow muscle as an Infiniband alternative.
But Zerbib spies the arrival of alternatives to GPUs "for a fraction of the price, with much better performance" in the next couple of years. In his mind, that would make an Orange investment in GPUs now a very bad bet. "If you invest the money at the wrong time, depreciation might be colossal and the write-off could be huge," he said.
What, then, could be the source of these GPU rivals? Established chipmakers and chip design companies such as Intel and Arm have pitched other platforms for AI inferencing. But disruption is likely to come from the hyperscalers or the startups they eventually buy, according to Zerbib. "Right now, the hyperscalers are spending billions on this," he said. "There are startups working on alternatives, waiting to be acquired by hyperscalers, and then you are going to have layers of optimization."
While Zerbib did not throw out any names, Google's own tensor processing units (TPUs) have already been positioned as a GPU substitute for some AI workloads. More intriguing is a startup called Groq, founded in 2016 by Jonathan Ross, an erstwhile Google executive who led the development of those TPUs. Tareq Amin, the CEO of Aramco Digital and a former telco executive (at Japan's Rakuten), thinks Groq's language processing units (LPUs) might be more efficient than GPUs for AI at the edge.
Efficiency is paramount for Zerbib as Orange strives to reach a "net zero" target by 2040. "We need to reduce by ten the amount of resource consumption for a given number of parameters," he said. "It is unacceptable. Right now, we are going to hit a wall. We cannot talk about sustainability and then generate a three-minute video that is equivalent to someone taking a car and doing a long trip. It makes no sense."
Models of all sizes
Today, he says he prefers to invest in people rather than GPUs. For Orange so far that has meant equipping about 27,000 members of staff (Orange had 137,000 employees worldwide at the end of last year) with AI skills. Unlike certain other telcos, however, Orange appears to have no interest in building an LLM from scratch. Instead, Zerbib is an advocate of fine-tuning for telecom purposes the LLMs that already exist.
Orange has, accordingly, spent money on building an AI tool it can effectively plug into the most appropriate LLM for any given scenario. "We have built internally a tool that is an abstraction layer on top of all those LLMs and based on the use case is going to route to the right LLM technology," said Zerbib. "We are very much proponents of open models, and we believe that we need to take huge models that have been trained at scale by people that have invested billions in infrastructure."
Available like items of clothing in numerous different sizes, future language models including those adapted by Orange are likely to be worn at multiple types of facilities, he clearly believes. Zerbib breaks them down into three categories: small language models, featuring up to 10 billion parameters and deployable on user devices; medium-sized models, which include up to 100 billion parameters and can be hosted at the network edge; and the very largest, trillion-parameter LLMs that will continue to reside in the cloud.
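Read as a placement policy, those tiers amount to: choose the smallest model class that fits the task and run it as close to the user as its size allows. A hypothetical sketch, with the parameter thresholds taken from the article and everything else invented for illustration:

```python
# Hypothetical sketch of the tiered-deployment idea described above:
# place a model on the closest-to-user tier that can host it. The
# 10B / 100B boundaries follow the article; the tier labels and the
# function itself are illustrative, not Orange's actual tooling.

TIERS = [
    # (max parameters, where a model of this size can run)
    (10e9, "on-device small language model"),
    (100e9, "edge-hosted medium model"),
    (float("inf"), "cloud-hosted large LLM"),
]

def place(required_params: float) -> str:
    """Return the closest-to-user tier able to host a model of this size."""
    for max_params, location in TIERS:
        if required_params <= max_params:
            return location
    raise ValueError("unreachable: the last tier is unbounded")

print(place(3e9))    # a small on-device assistant
print(place(70e9))   # a mid-sized model at the network edge
print(place(1e12))   # a trillion-parameter model in the cloud
```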
With GenAI, an employee could feasibly communicate "in layman's terms" with a telco's NOCs and SOCs (its network and service operations centers) to find out "what is going on," said Zerbib. Language models running at the network edge might also support applications for customers. "Let's say you have a French administration that needs to offer certain services to constituents throughout the country, and they want to make sure they do that with GenAI and have a distributed architecture to do it everywhere," said Zerbib. "Well, this is an opportunity for a telco to say we might provide an infrastructure to run those distributed LLMs that will handle social administration."
The most obvious AI attractions are still on the cost side. AI-generated insights should clearly help telcos reduce customer churn, minimize network outages and "essentially extract more value from the infrastructure for the same amount of capital investment," as Zerbib describes it. The telco dream, though, is of AI as a catalyst for sales growth. "There is a question mark with slicing, with the emergence of new use cases and low-latency capabilities," said Zerbib. "We might be able to monetize that." Conscious of the telecom sector's historical tendency to oversell, Zerbib is for now putting a heavy emphasis on that "might."
UTSW Researchers Develop AI That Writes Its Own Algorithms – dallasinnovates.com
As represented in this illustration, deep distilling transforms complex neural network insights into concise, human-readable rules or code, making AI-discovered knowledge more accessible and transparent. [Illustration: Anda Kim/UTSW]
Researchers at UT Southwestern Medical Center in Dallas have developed an artificial intelligence (AI) method that writes its own algorithms and may one day operate as an automated scientist to extract meaning from complex datasets.
Milo Lin, Ph.D., is an assistant professor in the Lyda Hill Department of Bioinformatics, Biophysics, and the Center for Alzheimer's and Neurodegenerative Diseases at UT Southwestern. [Photo: UTSW]
"Our work is the first step in allowing researchers to use AI to directly convert complex data into new human-understandable insights," said Milo Lin, Ph.D., assistant professor in the Lyda Hill Department of Bioinformatics, Biophysics, and the Center for Alzheimer's and Neurodegenerative Diseases at UT Southwestern.
He noted that while researchers are increasingly employing AI and machine learning models, these high-performing models provide limited new direct insights into the data.
Lin co-led the study, published in Nature Computational Science, with first author Paul J. Blazek, M.D., Ph.D., who worked on this project as part of his thesis work while he was at UT Southwestern.
According to UTSW, the field of AI has exploded in recent years, with significant crossover from basic and applied scientific discovery to popular use.
One common branch of AI, known as neural networks, emulates the structure of the human brain by mimicking the way biological neurons signal one another, the university said. Neural networks are a form of machine learning, which creates outputs based on input data after learning on a training dataset, the school noted.
Lin said that although this tool has found significant use in applications such as image and speech recognition, conventional neural networks have significant drawbacks.
Most notably, they often don't generalize far beyond the data they train on, and the rationale for their output is a "black box," meaning researchers don't have a way to understand how a neural network algorithm reached its conclusion.
UTSW said the study was supported by its High Impact Grant Program, which was started in 2001 and supports high-risk research offering high potential impact in basic science or medicine.
Seeking to address both issues, the researchers said they developed a method called deep distilling.
According to UTSW, deep distilling automatically discovers algorithms, or rules, that explain observed input-output patterns in limited training data (the datasets used to train machine learning models).
That's accomplished by training an essence neural network (ENN), previously developed in the Lin Lab, on input-output data. The parameters of the ENN that encode the learned algorithm are then translated into succinct computer code so users can read it, the university said.
Researchers said they tested deep distilling in a variety of scenarios in which traditional neural networks cannot produce human-comprehensible rules and have poor performance in generalizing to very different data.
Those included cellular automata, in which grids contain hypothetical cells in distinct states that evolve over time according to a set of rules. Cellular automata are often used as model systems for emergent behavior in the physical, life, and computer sciences, UTSW noted.
The school said that although the grid the researchers used had 256 possible rule sets, deep distilling learned to correctly predict the hypothetical cells' behavior for every rule set after seeing grids from only 16 of them, summarizing all 256 rule sets in a single algorithm.
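For readers unfamiliar with cellular automata, the one-dimensional "elementary" case illustrates what a rule set is: each of the 256 rules is simply an 8-bit lookup table over a cell's three-cell neighborhood. A minimal sketch (illustrative only; the study's grids are more general):

```python
def step(cells, rule):
    """Apply an elementary cellular-automaton rule (0-255) to one row of
    binary cells, with wrap-around neighbors. Each cell's next state is the
    bit of `rule` indexed by its (left, center, right) neighborhood."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# One step of the well-known Rule 30 from a single live cell:
row = step([0, 0, 0, 1, 0, 0, 0], 30)   # -> [0, 0, 1, 1, 1, 0, 0]
```

Learning which of the 256 lookup tables generated a grid from examples, and expressing that as readable code, is the kind of task deep distilling was tested on.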
In another test, researchers trained deep distilling to accurately classify a shape's orientation as vertical or horizontal.
According to UTSW, the method only needed a few training images of perfectly horizontal or vertical lines. However, it was able to use the short algorithm it found to correctly solve much more complicated cases, such as patterns with multiple lines or gradients and shapes made of boxes and zigzag, diagonal, or dotted lines.
Eventually, Lin said, deep distilling could be applied to the vast datasets generated by high-throughput scientific studies, such as those used for drug discovery, and act as an automated scientist capturing patterns in results not easily discernible to the human brain, such as how DNA sequences encode functional rules of biomolecular interactions.
UT Southwestern said that deep distilling potentially could serve as a decision-making aid to doctors, offering insights on its thought process through the generated algorithms.
See the original post:
UTSW Researchers Develop AI That Writes Its Own Algorithms - dallasinnovates.com
Machine Learning for Automated Root Cause Analysis: Promise and Pain – The New Stack
Let’s envision a world where root causes are instantly identified the moment any system degradation occurs:
Maria, an e-commerce site reliability engineer, wakes up to an alert that the site’s checkout success rate has dropped 15% over the last 30 minutes due to higher-than-normal failure rates. With traditional monitoring tools, this would take hours of manual analysis to troubleshoot.
Instead, within seconds, Maria’s AIOps platform sends a notification showing the root cause: A dependency used by the payment microservice has been degraded, slowing transaction-processing times. The latest version of the payment service couldn’t handle the scale placed on the prior version.
The AIOps platform then details all affected components and APIs involved in this event. With this insight, Maria immediately knows both the blast radius and scope of the issue. She quickly resolves the problem by rolling back the last update made to the payment service, and checkout success rates are restored without any further customer impact. Going from alert to resolution took less than 5 minutes.
This level of automated root cause analysis delivers immense benefits.
This promise seems almost too good to be true. And indeed, multiple barriers obstruct the path to production-grade ML pipelines for root cause analysis.
To understand why, think about your production environment as if it were a car. You’re driving on the freeway when your engine starts rattling, sputtering and eventually stalling. If you were trying to replace your mechanic with an ML algorithm to identify the root cause, what are some of the challenges you might encounter?
Let’s take a closer look at the pitfalls inhibiting automated root cause analysis:
1. No machine-readable system topology
ML models can only spot patterns in data they can access. Without an existing topology mapping the thousands of interdependent services, containers, APIs and infrastructure elements, models have no pathway to traverse failures across domains.
Manually creating this topology is remarkably complex and sometimes impossible as production environments dynamically scale across hybrid cloud infrastructure.
2. Root cause inference at scale
Even with a topology, searching it during an incident poses scalability issues. Off-the-shelf ML libraries are not built for causality analysis at production scale.
To diagnose checkout failure, should we evaluate payment APIs or database clusters? Intuitively, an engineer would prioritize services tied to revenue delivery. But generic ML techniques lack this reasoning, forcing an exponential search across all topology layers — like holding a microphone to every inch of a car engine.
Advanced algorithms are needed to traverse topology graphs during incidents, weighing and filtering options based on business criticality. Both simple and intricate failure chains must be unpacked — all before revenue and trust disappear.
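As an illustrative sketch (with a made-up service graph and criticality weights, not any particular vendor's implementation), a best-first traversal that visits the most business-critical suspects first might look like this:

```python
import heapq

# Hypothetical service dependency graph: service -> downstream dependencies.
DEPS = {
    "checkout": ["payment", "cart"],
    "payment": ["payment-db", "fraud-api"],
    "cart": ["cart-db"],
    "payment-db": [], "fraud-api": [], "cart-db": [],
}
# Hypothetical business-criticality weights (higher = closer to revenue).
CRITICALITY = {"checkout": 10, "payment": 10, "payment-db": 9,
               "fraud-api": 8, "cart": 3, "cart-db": 2}

def ranked_suspects(alerting_service):
    """Walk the dependency graph from the alerting service, always visiting
    the most business-critical unexplored candidate next."""
    seen, order = set(), []
    heap = [(-CRITICALITY[alerting_service], alerting_service)]
    while heap:
        _, svc = heapq.heappop(heap)
        if svc in seen:
            continue
        seen.add(svc)
        order.append(svc)
        for dep in DEPS[svc]:
            if dep not in seen:
                heapq.heappush(heap, (-CRITICALITY[dep], dep))
    return order
```

Here the payment path is examined before the cart path even though both sit at the same graph depth, mirroring how an engineer would prioritize revenue-critical services over an exhaustive sweep.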
3. Interpretability for humans
Finally, ML troubleshooting creates a new challenge: how to make inferences understandable to humans. Identifying patterns in metrics data reveals statistical correlations between events, but not causal priority chains.
A bare correlation doesn’t answer the questions that give engineers actionable insight.
Solving this final-mile problem requires models that capture and visualize root-cause probability, business-impact sequencing, risk levels, and mitigation recommendations.
While core machine learning techniques show promise, purpose-built solutions are necessary to address the complexity of causality analysis at production scale. Combining specialized topology inference, heuristic graph search algorithms and interpretable data science unlocks the power of automated root cause analysis. But it requires advances in data collection, service mapping, ML and the communication of technical insights — all with the goal of remediation.
See the original post:
Machine Learning for Automated Root Cause Analysis: Promise and Pain - The New Stack
Artificial intelligence vs machine learning: what’s the difference? – ReadWrite
There are so many buzzwords in the tech world these days that keeping up with the latest trends can be challenging. Artificial intelligence (AI) has been dominating the news, so much so that AI was named the most notable word of 2023 by Collins Dictionary. However, specific terms like machine learning have often been used instead of AI.
Introduced by American computer scientist Arthur Samuel in 1959, the term "machine learning" describes a computer's ability to learn without being explicitly programmed.
For one, machine learning (ML) is a subset of artificial intelligence (AI). While they are often used interchangeably, especially when discussing big data, these popular technologies have several distinctions, including differences in their scope, applications, and beyond.
Most people are now aware of the concept, but to be precise: artificial intelligence refers to a collection of technologies integrated into a system, allowing it to think, learn, and solve complex problems. It can mimic human cognitive abilities, enabling it to see, understand, and react to spoken or written language, analyze data, offer suggestions, and more.
Meanwhile, machine learning is just one area of AI: it enables a machine or system to learn and improve from experience automatically. Rather than relying on explicit programming, it uses algorithms to sift through vast datasets, extract learning from the data, and then utilize it to make well-informed decisions. The "learning" part means it improves over time through training and exposure to more data.
Machine learning models are the results, or knowledge, the program acquires by running an algorithm on training data. Generally, the more data used, the better the model's performance.
Machine learning is an aspect of AI that enables machines to take knowledge from data and learn from it. In contrast, AI represents the overarching principle of allowing machines or systems to understand, reason, act, or adapt like humans.
Hence, think of AI as the entire ocean, encompassing various forms of marine life. Machine learning is like a specific species of fish in that ocean. Just as this species lives within the broader environment of the ocean, machine learning exists within the realm of AI, representing just one of many elements or aspects. However, it is still a significant and dynamic part of the entire ecosystem.
Machine learning cannot impersonate human intelligence, and that is not its aim. Instead, it focuses on building systems that can independently learn from and adapt to new data by identifying patterns. AI's goal, on the other hand, is to create machines that can operate intelligently and independently, simulating human intelligence to perform a wide range of tasks, from simple to highly complex ones.
For example, when you receive emails, your email service uses machine learning algorithms to filter out spam. The ML system has been trained on vast datasets of emails, learning to distinguish between spam and non-spam by recognizing patterns in the text, sender information, and other attributes. Over time, it adapts to new types of spam and your personal preferences, like which emails you mark as spam, continually improving its accuracy.
In this scenario, your email provider may use AI to offer smart replies, sort emails into categories (like social, promotions, primary), and even prioritize essential emails. This AI system understands the context of your emails, categorizes them, and suggests short responses based on the content it analyzes. It mimics a high level of understanding and response generation that usually requires human intelligence.
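A stripped-down version of such a spam filter can be written in a few lines. The training data below is made up and real services use far richer models, but the pattern-learning idea is the same (a naive Bayes word-count classifier):

```python
from collections import Counter
import math

# Toy training data (hypothetical): (text, label) pairs.
TRAIN = [
    ("win cash prize now", "spam"),
    ("cheap prize click now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with team", "ham"),
]

def train(data):
    """Count word frequencies per label for a naive Bayes filter."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in data:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher Laplace-smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label in counts:
        score = 0.0
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, totals = train(TRAIN)
```

Feeding it a message like "win a prize" yields "spam" and "team lunch meeting" yields "ham"; retraining with more examples is exactly the "improves with exposure to more data" behavior described above.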
There are three main types of machine learning, along with some specialized forms: supervised, unsupervised, semi-supervised, and reinforcement learning.
In supervised learning, the machine is taught by an operator. The user supplies the machine learning algorithm with a recognized dataset containing specific inputs paired with their correct outputs, and the algorithm has to figure out how to produce these outputs from the given inputs. Although the user is aware of the correct solutions, the algorithm needs to identify patterns, all while learning from them and making predictions. If the predictions have errors, the user has to correct them, and this cycle repeats until the algorithm reaches a substantial degree of accuracy or performance.
Semi-supervised learning falls between supervised and unsupervised learning, training on a mix of labeled and unlabeled data. Labeled data consists of information tagged with meaningful labels, allowing the algorithm to understand the data, whereas unlabeled data does not contain these informative tags. Using this mix, machine learning algorithms can be trained to assign labels to unlabeled data.
Unsupervised learning involves training the algorithm on a dataset without explicit labels or correct answers. The goal is for the model to identify patterns and relationships in the data by itself. It tries to learn the underlying structure of the data to categorize it into clusters or spread it along dimensions.
Finally, reinforcement learning takes a structured trial-and-error approach: the machine learning algorithm is given a set of actions, parameters, and goals, and must navigate various scenarios by experimenting with different strategies, assessing each outcome to identify the most effective approach. Drawing on previous experiences, it refines its strategy and adjusts its actions to the situation at hand, all to achieve the best possible result.
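That trial-and-error loop can be sketched with tabular Q-learning on a made-up five-state corridor (purely illustrative; state 4 is the goal, and moving into it earns a reward):

```python
import random

# States 0..4 on a corridor; reaching state 4 yields reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # left, right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Trial and error: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update the action's value toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy after training: the preferred action in each state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

After enough episodes the greedy policy moves right from every state: the algorithm discovered the best strategy purely from rewards, never from labeled examples.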
In financial contexts, AI and machine learning serve as essential tools for tasks like identifying fraudulent activities, forecasting risks, and offering enhanced proactive financial guidance. AI-driven platforms can now offer personalized educational content based on an individual's financial behavior and needs. By delivering bite-sized, relevant information, these platforms ensure users are well-equipped to make informed financial decisions, leading to better credit scores over time. Nvidia AI posted on X that generative AI is being incorporated into curricula.
During the Covid-19 pandemic, machine learning also provided insights into the most urgent events. AI and machine learning are also powerful weapons for cybersecurity, helping organizations protect themselves and their customers by detecting anomalies. Mobile app developers have actively integrated numerous algorithms, alongside explicit programming, to keep their apps fraud-free for financial institutions.
Featured image: Canva
Read more from the original source:
Artificial intelligence vs machine learning: what's the difference? - ReadWrite
Deep learning model for personalized prediction of positive MRSA culture using time-series electronic health records – Nature.com
Patient characteristics
A total of 26,233 and 152,979 patients who met our selection criteria, as described under Methods, were identified from the Memorial Hermann Hospital System (MHHS) and Medical Information Mart for Intensive Care (MIMIC)-IV databases, respectively. Those patients had 56,233 and 393,713 index culture events over time in the MHHS and MIMIC-IV datasets. The aggregated patient characteristics are described in Table 1. Some patients were classified into both the MRSA and non-MRSA groups when they had MRSA and non-MRSA events at different index times. Patient features were used once if the patient had two or more events in the same group. Demographic features at the time of the index culture were used to describe the characteristics when patients were classified more than twice into one group. Overall, the MRSA group had a higher number of intensive care unit (ICU) admissions (MHHS: 4.3% vs. 0.7%, MIMIC-IV: 31.7% vs. 16.7%) and emergency department (ED) patients (MHHS: 66.4% vs. 13.3%, MIMIC-IV: 51.3% vs. 35.0%). As MIMIC-IV was originally developed from an ICU database, it included a higher number of ICU patients. Intermediate unit (IMU) status was not included in the MIMIC-IV data. Table 2 summarizes the types of antibiotics and cultures before the index time. Vancomycin was the most commonly used antibiotic, followed by cefepime in the MHHS dataset, whereas ceftriaxone was the second most commonly used antibiotic in the MIMIC-IV dataset. As expected, given the origin of the EHRs (MHHS from Houston and MIMIC-IV from Boston), the MHHS dataset had more Hispanic patients than MIMIC-IV (10.5–10.6% vs. 3.6–3.9%). Across groups, Caucasian was the most common race, and 55–65 years was the most common age group. Gender was equally distributed in all groups. Blood and urine cultures were the other common cultures taken during the study periods.
Table 3 summarizes the bacteria and diagnostic codes identified within the event periods. S. aureus was the most common bacterium in the MRSA groups, whereas E. coli was the most common in the non-MRSA group. Bacteremia (MHHS: 6.7% vs. 2.1%, MIMIC-IV: 8.6% vs. 1.9%) and skin and soft tissue infection (MHHS: 24.8% vs. 5.6%, MIMIC-IV: 13.2% vs. 2.6%) were more common in the MRSA groups.
Table 4 shows the prediction accuracy of the models. For the MHHS dataset, the deep learning model PyTorch_EHR exhibited the highest area under the receiver operating characteristic curve (AUROC) of 0.911 [0.900–0.916] (see ROC curve in Supplementary Fig. 5-1) compared to the other machine learning models (logistic regression [LR]: 0.857 [0.849–0.865] and light gradient boosting machine [LGBM]: 0.892 [0.885–0.899]). Similar results were obtained for the MIMIC-IV dataset (PyTorch_EHR: 0.859 [0.849–0.869], LR: 0.816 [0.804–0.828], and LGBM: 0.838 [0.823–0.849]; see ROC curve in Supplementary Fig. 5-2). We also evaluated the AUROC in each patient group with a specific diagnosis during the event. Although the AUROC decreased by 0.05–0.10, we had acceptable accuracy for each infection in the MHHS dataset. We also evaluated confusion matrices based on our model's high-risk and low-risk predictions (see Supplementary Table 4). In high-risk groups, PyTorch_EHR showed a specificity of 95.0% and 99.0% and a sensitivity of 48.1% and 19.3% in the MHHS and MIMIC-IV datasets, respectively, whereas LGBM showed a specificity of 95.0% and 99.0% and a sensitivity of 44.5% and 14.9%. In low-risk groups, PyTorch_EHR had a sensitivity of 95.0% and 90.0% and a specificity of 62.9% and 58.7% in the MHHS and MIMIC-IV datasets, respectively, whereas LGBM showed a sensitivity of 95.0% and 90.0% and a specificity of 62.8% and 57.2%.
Given the imbalanced distribution of positive events in both datasets, positive predictive values (PPV) for high-risk patients were relatively low: 65.6% and 22.4% for PyTorch_EHR and 63.6% and 17.5% for LGBM in the MHHS and MIMIC-IV datasets, respectively. However, negative predictive values (NPV) were high: 90.3% and 98.9% for PyTorch_EHR and 89.7% and 98.8% for LGBM. For low-risk patients, PPV was low: 37.6% and 3.0% for PyTorch_EHR and 33.5% and 2.9% for LGBM. However, NPV was particularly high: 98.6% and 99.8% for PyTorch_EHR and 98.5% and 99.8% for LGBM.
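The four metrics above follow directly from a 2x2 confusion matrix. The counts in this sketch are made up (not from the study), but they show how class imbalance depresses PPV while leaving NPV high:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true positives found
        "specificity": tn / (tn + fp),   # fraction of true negatives found
        "ppv": tp / (tp + fp),           # how trustworthy a positive call is
        "npv": tn / (tn + fn),           # how trustworthy a negative call is
    }

# Illustrative counts with a rare positive class:
m = diagnostic_metrics(tp=40, fp=60, fn=10, tn=890)
```

With these hypothetical counts, sensitivity is 0.8 and NPV is near 0.99, yet PPV is only 0.4: when positives are rare, even a small false-positive rate swamps the true positives, which is the same dynamic the paragraph describes for MIMIC-IV.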
Fig. 1 shows the cumulative incidence curve of MRSA-positive cultures over two weeks from the index culture. In both datasets, our model clearly differentiated the patients with high and low risks of MRSA-positive cultures. The cumulative incidence of MRSA-positive cultures in the MRSA group in the MHHS dataset was 61.2%, whereas the incidence in the MIMIC-IV dataset was approximately 18.2%. The low incidence in MIMIC-IV despite a high risk likely reflects the low overall incidence of positive MRSA cultures in the MIMIC-IV dataset.
a and b show the cumulative incidence of MRSA-positive cultures in the Memorial Hermann Hospital System (MHHS) and Medical Information Mart for Intensive Care (MIMIC)-IV datasets, respectively. Both figures were generated based on the risk predicted by our model in the test datasets. Given the significant imbalance in the MIMIC-IV dataset, even high-risk patients reached only about 20% positivity compared to the MHHS dataset. In contrast, the low-risk patient group had fewer false negatives. The shaded areas represent the 95% confidence intervals. MHHS, Memorial Hermann Hospital System; MIMIC, Medical Information Mart for Intensive Care; MRSA, methicillin-resistant Staphylococcus aureus.
AUROC curves over multiple index events were evaluated in the MHHS and MIMIC-IV test datasets (see Supplementary Fig. 10). When evaluated on patients with only the first event in the MHHS dataset, the LGBM model performed better than the PyTorch_EHR and LR models. However, when evaluated on patients who had repeated events, i.e., a longer duration of observation in the dataset, PyTorch_EHR model performance improved significantly and sustained its superiority over the LR and LGBM models. Similar results were obtained for the MIMIC-IV dataset, with a longer duration of observation yielding better performance for the PyTorch_EHR model.
Table 5 summarizes the potential clinical impact of the PyTorch_EHR model. In patients predicted as low risk, our model exhibited NPVs of 98.6% and 99.8% in the MHHS and MIMIC-IV datasets, respectively. In addition, among those low-risk patients who had true-negative results, MRSA-specific antimicrobials were given by treating clinicians in 21.6% (1505/6975) and 2.3% (1069/45,533) of events, which translated to 7949 and 1397 doses of MRSA-specific antimicrobials in MHHS and MIMIC-IV, respectively. The main antimicrobials used for those patients were vancomycin (6833 and 1254 doses in MHHS and MIMIC-IV, respectively), followed by linezolid (852 and 88 doses) and daptomycin (264 and 55 doses). Further, 1.4% (98/6975) and 0.2% (108/45,533) of events were false negatives in our model. Among them, only 0.3% (23/6975) and 0.04% (27/45,533) of events received MRSA-specific antimicrobials that could have been missed by our model.
In high-risk patients, our model exhibited PPVs of 65.6% and 22.4% in the MHHS and MIMIC-IV datasets, respectively (Supplementary Table 4). The model predicted 12% (1437/11,922) and 1.2% (957/78,548) of events as high risk. Among the high-risk groups, patients did not receive any MRSA-specific antimicrobials in 34.6% (497/1437) and 19.7% (189/957) of events in the MHHS and MIMIC-IV datasets, respectively. On the other hand, following our model's high-risk prediction, 15.8% (227/1437) and 71.1% (671/957) of events might receive unnecessary MRSA-specific antimicrobials (potential harm from our model).
Finally, we evaluated the performance of our model in patients who had MRSA bacteremia. As summarized in Table 5, 31.8% (457/1437) and 7.3% (70/957) of high-risk events in the MHHS and MIMIC-IV datasets, respectively, involved MRSA bacteremia. These rates were much higher than the rates in low-risk events in MHHS (0.5%; 32/6975) and MIMIC-IV (0.04%; 35/48,455). Based on these findings, the high-risk group had 69.3- and 101.2-fold higher relative risks of MRSA bacteremia compared to the low-risk group. In addition, our model identified that 58.0% (265/457) and 50.0% (35/70) of high-risk patients with true MRSA bacteremia did not receive MRSA-specific antimicrobials, considered the optimal antibiotics for MRSA bacteremia, within 12 h of the index cultures.
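The relative risks quoted above can be reproduced directly from the event counts given in the text:

```python
def relative_risk(events_high, n_high, events_low, n_low):
    """Risk ratio: incidence in the high-risk group over the low-risk group."""
    return (events_high / n_high) / (events_low / n_low)

# MRSA bacteremia counts from the study text:
rr_mhhs = relative_risk(457, 1437, 32, 6975)    # MHHS: about 69.3
rr_mimic = relative_risk(70, 957, 35, 48455)    # MIMIC-IV: about 101.3
```

The ratio of incidences, (457/1437) / (32/6975), gives roughly 69.3 for MHHS, and the MIMIC-IV counts give roughly 101, matching the reported relative risks.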
These results were also evaluated for the other models and for any MRSA antimicrobials (see Supplementary Table 5). Overall, the PyTorch_EHR model exhibited higher net benefit against treating clinicians' decisions compared to the LGBM and LR models, except for MRSA bacteremia in the MIMIC-IV dataset, where the LGBM model provided better net benefit than PyTorch_EHR (18 vs. 10 MRSA bacteremia cases that might receive early MRSA antimicrobials, respectively).
We obtained the contribution scores for positive MRSA cultures in the datasets. Supplementary Fig. 7 shows the top 14 median contribution scores of admission diagnoses in our model for MHHS data. Interestingly, our model identified multiple diagnoses often related to MRSA infections, such as cutaneous abscesses or boils. Supplementary Fig. 8 shows the top 10 overall contribution scores for antimicrobial exposures before the index time in the datasets. Some common antibiotics had high scores in both datasets, but it was difficult to interpret the scores clinically.
We also present individual feature importance as a bar graph for an example patient among those we visualized (see Supplementary Fig. 9). The patient is female, between 45 and 54 years of age, with multiple underlying comorbidities listed on admission two days before the index culture (a blood culture on the index date). Our model assigned a risk score of 0.541 (predicted positive). After the patient was admitted to the hospital, vancomycin and meropenem were initiated, and a blood culture was ordered. Subsequently, cultures identified MRSA over the following two weeks.
See the original post here:
Deep learning model for personalized prediction of positive MRSA culture using time-series electronic health records - Nature.com
Matillion Announces Release of Data Productivity Cloud for Databricks – AiThority
No-code data pipeline platform unlocks Delta Lake for BI and machine learning
Matillion's Data Productivity Cloud is now available for Databricks, enabling users to access the power of Delta Lake within their data engineering.
The Data Productivity Cloud with Databricks brings no-code data ingestion and transformation capabilities that are purpose-built for Databricks, enabling users to quickly build data pipelines at scale, to be used in AI and analytics projects.
Ciaran Dynes, Chief Product Officer at Matillion, said: "In the months since launching Data Productivity Cloud, we've continued to integrate more cloud platforms and data sources to bring no-code data engineering tools to every member of the data team."
"We know that SQL is a massive workload for Databricks, so as well as unlocking the value of Delta Lake and lakehouses, we're excited to bring no-code tooling to the Data Productivity Cloud to help Databricks users be even more productive running SQL pipelines on the Databricks platform."
Data Productivity Cloud enables users to easily build data pipelines with any data source for business intelligence and data science projects.
Matillion has long partnered with the data and AI company Databricks, and Databricks Ventures is an investor in the tech unicorn.
Matillion delivers a comprehensive data integration solution that can fit seamlessly within the existing tech stack, helping data teams to centralise and transform data from any source into business-driving insights at speed.
Data Productivity Cloud with Databricks offers consolidated billing with Matillion's transparent credit-based pricing model, with setup in minutes. Thousands of enterprises including Cisco, Docusign, Slack and TUI trust Matillion to move, transform and orchestrate data for a wide range of use cases, from insights and operational analytics to data science, machine learning and AI.
Originally posted here:
Matillion Announces Release of Data Productivity Cloud for Databricks - AiThority
The Top 3 Machine Learning Stocks to Buy in March 2024 – InvestorPlace
Source: NicoElNino / Shutterstock.com
You may be hearing the term "AI bubble" a lot these days, especially regarding the stock market. Since OpenAI released its artificial-intelligence (AI) chatbot ChatGPT in Nov. 2022, it feels like every company in the world has been getting into the AI business.
Machine learning is a type of AI that allows computers to learn the way humans do and use that ability to replicate human behaviors. As you might imagine, machine learning has the potential to cut the cost and time of human tasks and eliminate redundant work.
Companies are set to save billions of dollars by integrating machine learning tools and software in their businesses. As investors, not only is it important to look at which companies are successfully using machine learning, but also the companies that are providing these tools to be used. This article will discuss three of the top machine-learning stocks to buy while the AI industry remains red-hot.
Source: Poetra.RH / Shutterstock.com
NVIDIA (NASDAQ:NVDA) is the global leader in producing GPUs that can power machine-learning computers. The stock has been on a tear over the past year, returning north of 240% to shareholders while surging up the list of the world's most valuable companies. Despite such unprecedented growth, analysts tracked by Yahoo Finance remain optimistic, with one-year price targets ranging from an average of $852.10 to a high of $1,400.00.
When it comes to machine-learning GPUs, NVIDIA is second to none in the semiconductor industry. NVIDIA has more demand for its chips than it has supply, even at elevated prices, and its customers include some of the most powerful companies in the world.
You might think that a stock that has risen by more than 240% in one year is overinflated. The fact is that NVIDIA's revenue has grown fast enough to keep pace with its stock's valuation. Looking comparatively, NVIDIA's forward P/E ratio of 34.25x is still lower than the likes of Amazon and Tesla. As long as AI and machine learning are being adopted, NVIDIA's stock should continue to reap rewards for investors.
Source: Roschetzky Photography / Shutterstock.com
Tesla (NASDAQ:TSLA) is a company that needs no introduction. It is the largest manufacturer of electric vehicles in the world and single-handedly revolutionized the auto industry. While its stock has lagged behind its Magnificent 7 counterparts in 2024 due to the high-interest-rate environment, its consensus one-year price target still reaches a high of $345.00.
So, how does an electric vehicle company operate in the machine-learning industry? Tesla, led by CEO Elon Musk, has long been trying to master self-driving technology. Tesla's FSD, or Full Self-Driving, software has hit some roadblocks from regulatory agencies like the NHTSA in America, but Musk remains confident that it will be available to all Tesla users in the future.
Tesla's stock still trades at a premium, especially since the company has reported declining operating margins and fairly stagnant revenue growth. The stock's forward P/E ratio shows that TSLA is trading at about 65x forward earnings, nearly double that of NVIDIA. As mentioned, Tesla's stock could continue to struggle until interest rates begin to decline. Savvy long-term investors might treat this period of consolidation as a time to load up on the high-growth stock.
Source: Mamun sheikh K / Shutterstock.com
Palantir (NYSE:PLTR) is a data analytics and software company that has a very polarizing following on social media. At one time, Palantir was looked at as a meme stock, but the company has since proven to be profitable and has exhibited impressive growth.
While the operations of Palantir have always been shrouded in mystery, the company has made clear progress in growing its customer base over the past few years. One of the ways it has done this is by introducing its AIP, or Artificial Intelligence Platform. AIP uses machine learning to help large-scale enterprises unlock insights from large sets of data. From this analysis, companies can identify inefficiencies and operate at a higher level.
We did mention Palantir's stock is trading at the high end of analyst estimates, right? Well, although it is a much smaller company, Palantir's valuation multiples currently dwarf those of both NVIDIA and Tesla. At its current price, Palantir's stock trades at about 25x sales and 79x future earnings. With the potential to be considered for S&P 500 inclusion later this year, and management guiding FY2024 revenue of around $2.6 billion, Palantir is a worthy company to consider for capitalizing on machine learning.
On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writers, subject to the InvestorPlace.com Publishing Guidelines.
Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.
Continued here:
The Top 3 Machine Learning Stocks to Buy in March 2024 - InvestorPlace