
JPMorgan Chase Unveils AI-Powered Tool for Thematic Investing – PYMNTS.com

J.P. Morgan Chase reportedly unveiled an artificial intelligence-powered tool designed to facilitate thematic investing.

The tool, called IndexGPT, delivers thematic investment baskets created with the assistance of OpenAI's GPT-4 model, Bloomberg reported Friday (May 3).

IndexGPT creates these thematic indexes by generating a list of keywords associated with a particular theme; a natural language processing model then scans news articles for those keywords to identify companies involved in that space, according to the report.
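Bloomberg's description implies a simple two-stage pipeline: theme to keywords, keywords to companies. The sketch below is one plausible reading of it; the keyword table, sample articles and matching logic are assumptions for illustration, not J.P. Morgan's actual system.

```python
# Illustrative two-stage pipeline: theme -> keywords -> companies.
# A production system would use an LLM (reportedly GPT-4) for stage 1 and an
# NLP model over a news corpus for stage 2; these stand-ins are hypothetical.

def generate_keywords(theme: str) -> list[str]:
    # Stage 1: expand a theme into related keywords.
    canned = {"cloud computing": ["cloud", "SaaS", "data center", "virtualization"]}
    return canned.get(theme, [theme])

def find_companies(keywords: list[str], articles: list[dict]) -> set[str]:
    # Stage 2: scan news articles and collect companies mentioned alongside keywords.
    hits = set()
    for article in articles:
        text = article["text"].lower()
        if any(k.lower() in text for k in keywords):
            hits.update(article["companies"])
    return hits

articles = [
    {"text": "The SaaS provider expanded its data center footprint.", "companies": ["ExampleSoft"]},
    {"text": "The retailer opened new stores.", "companies": ["ShopCo"]},
]
print(find_companies(generate_keywords("cloud computing"), articles))  # {'ExampleSoft'}
```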

"The tool allows for the selection of a broader range of stocks, going beyond the obvious choices that are already well-known," Rui Fernandes, J.P. Morgan's head of markets trading structuring, told Bloomberg.

Thematic investing, which focuses on emerging trends rather than traditional industry sectors or company fundamentals, has gained popularity in recent years, the report said.

Thematic funds experienced a surge in popularity in 2020 and 2021, with retail investors spending billions of dollars on products based on various themes. However, interest in these strategies waned amid poor performance and higher interest rates, per the report.

J.P. Morgan Chase's IndexGPT aims to reignite interest in thematic investing by providing a more accurate and efficient approach, according to the report.

While AI has been widely used in the financial industry for functions such as trading, risk management and investment research, the rise of generative AI tools has opened new possibilities for banks and financial institutions, the report said.

Fernandes said he sees IndexGPT as a first step in a long-term process of integrating AI across the bank's index offering, per the report. J.P. Morgan Chase aims to continuously improve its offerings, from equity volatility products to commodity momentum products, gradually and thoughtfully.

In another deployment of this technology in the investment space, Morgan Stanley said in September that it was launching an AI-powered assistant for financial advisers and their support staff. This tool, the AI @ Morgan Stanley Assistant, facilitates access to 100,000 research reports and documents.

In the venture capital world, AI has become a tool for making savvy investment decisions. VC firms are using the technology to analyze vast amounts of data on startups and market trends, helping them identify the most promising opportunities and make better-informed decisions about where to allocate their funds.



Nexa AI Introduces Octopus v4: A Novel Artificial Intelligence Approach that Employs Functional Tokens to Integrate … – MarkTechPost

There has been rapid growth in the open-source landscape for Large Language Models (LLMs) after Meta's release of the Llama 3 model, the successor to 2023's Llama 2. This release has led to the development of multiple innovative LLMs, which have played an important role in this dynamic field by significantly influencing natural language processing (NLP). This paper highlights the most influential open-source LLMs, including Mistral's sparse Mixture of Experts model Mixtral-8x7B, Alibaba Cloud's multilingual Qwen1.5 series, Abacus AI's Smaug, and 01.AI's Yi models that focus on data quality.

The emergence of on-device AI models such as LLMs has transformed the landscape of NLP, providing numerous benefits compared to traditional cloud-based methods. However, the true potential emerges when on-device AI is combined with cloud-based models, an idea called cloud-on-device collaboration. AI systems can achieve new heights of performance, scalability, and flexibility by combining the strengths of both: computational resources can be allocated efficiently, with lighter, private tasks managed by on-device models while cloud-based models take on heavier or more complex operations.

Researchers from Nexa AI introduce Octopus v4, a robust approach that utilizes functional tokens to integrate multiple open-source models, each optimized for specific tasks. Octopus v4 uses these functional tokens to direct user queries efficiently toward the most suitable vertical model and optimally adjusts the query format for enhanced performance. An upgrade over its predecessors, the Octopus v1, v2, and v3 models, it shows outstanding performance in selection, parameter understanding, and query restructuring. The researchers also describe the use of graphs as a flexible data structure that coordinates efficiently with various open-source models through the Octopus model and functional tokens.

The system architecture is a complex graph in which each node represents a language model, with multiple Octopus models handling coordination between them.

In a thorough evaluation of the Octopus v4 system, its performance is compared with that of other well-known models on the MMLU benchmark to demonstrate its effectiveness. The system uses two compact language models: the 3B-parameter Octopus v4 and a worker language model with up to 8B parameters. An example of a user query for this model is:

Query: Tell me the result of derivative of x^3 when x is 2?

Response: (Determine the derivative of the function f(x) = x^3 at the point where x equals 2, and interpret the result within the context of rate of change and tangent slope.)
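To make the routing step concrete, here is a minimal sketch of how functional-token routing might work. The token names, worker-model names and keyword-based classifier are illustrative assumptions standing in for the trained 3B Octopus router, not Nexa AI's implementation. For the query above, the math worker should return f'(x) = 3x², so f'(2) = 12.

```python
# Minimal sketch of functional-token routing in the spirit of Octopus v4.
# Tokens, model names and the reformatting rule are hypothetical; the real
# router is a trained 3B model, not a keyword match.

WORKER_MODELS = {
    "<nexa_math>": "math-specialist-8b",   # hypothetical vertical model
    "<nexa_chem>": "chem-specialist-8b",   # hypothetical vertical model
}

def route(query: str) -> tuple[str, str]:
    """Pick a functional token for the query and restructure the query for
    the chosen vertical model, as the Octopus router is described as doing."""
    math_words = ("derivative", "integral", "solve", "equation")
    token = "<nexa_math>" if any(w in query.lower() for w in math_words) else "<nexa_chem>"
    if token == "<nexa_math>":
        reformatted = ("Determine the derivative of the function f(x) = x^3 at "
                       "the point where x equals 2, and interpret the result "
                       "within the context of rate of change and tangent slope.")
    else:
        reformatted = query
    return WORKER_MODELS[token], reformatted

model, prompt = route("Tell me the result of derivative of x^3 when x is 2?")
print(model)   # math-specialist-8b
print(prompt)  # the restructured query; the worker should answer f'(2) = 12
```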

In conclusion, researchers from Nexa AI proposed Octopus v4, a robust approach that utilizes functional tokens to integrate multiple open-source models, each optimized for specific tasks. The performance of the Octopus v4 system is compared with that of other renowned models on the MMLU benchmark to demonstrate its effectiveness. For future work, the researchers plan to improve this framework by utilizing multiple vertical-specific models and incorporating advanced Octopus v4 models with multi-agent capability.


Sajjad Ansari is a final year undergraduate from IIT Kharagpur. As a Tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.


How Blockchain and Machine Learning Are Reshaping Industries – DataDrivenInvestor

The Future of Technology: The Convergence of Decentralized Ledgers and Artificial Intelligence

The world is undergoing a technological revolution driven by two groundbreaking innovations: blockchain and machine learning. These cutting-edge technologies are disrupting industries and reshaping business models.

Blockchain, a decentralized digital ledger, promises secure, transparent transactions by eliminating intermediaries and enhancing trust.

Meanwhile, machine learning empowers computers to learn from data, identify patterns, and make accurate predictions, unlocking new realms of efficiency and automation.

As blockchain and machine learning continue to evolve, they are poised to profoundly transform various sectors, from finance and healthcare to supply chains and beyond.

This article explores the transformative potential and real-world applications of these revolutionary technologies across industries.

Blockchain is a decentralized digital ledger that records transactions across multiple computers. It ensures transparency, immutability, and security by using cryptographic algorithms.

Each transaction, or block, is linked to the previous block, creating a chain of information that cannot be altered without consensus from the network.

This makes blockchain highly resistant to fraud and tampering, an ideal quality for industries that deal with sensitive data, such as finance and healthcare.
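The tamper-resistance described above follows directly from how the hashes chain together. The toy sketch below, illustrative only (real blockchains add consensus, signatures and proof-of-work or proof-of-stake on top), shows why altering an early block invalidates every block after it:

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def is_valid(chain: list) -> bool:
    """Any tampering with an earlier block breaks every later link."""
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "transactions": block["transactions"]}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
print(is_valid(chain))                                # True
chain[0]["transactions"][0] = "Alice pays Bob 500"    # tamper with history
print(is_valid(chain))                                # False
```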

Machine learning, on the other hand, is a subset of artificial intelligence that enables computers to learn and make predictions without being explicitly programmed.

It relies on algorithms and statistical models to analyze vast amounts of data and identify patterns.

Machine learning algorithms can continuously learn from new data, improving their performance over time.

This technology has the potential to automate processes, optimize decision-making, and uncover valuable insights from complex datasets.
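As a concrete illustration of learning patterns from data, the toy example below fits a classifier to a handful of labeled transactions and then scores unseen ones. The features, data and labels are invented for illustration, and it assumes scikit-learn is installed:

```python
from sklearn.linear_model import LogisticRegression

# Toy transactions: [amount_in_dollars, hour_of_day]; label 1 = fraudulent.
X = [[12.0, 14], [8.5, 10], [950.0, 3], [1200.0, 2], [15.0, 16], [700.0, 4]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # the algorithm infers the pattern: large, late-night = risky

# Score transactions the model has never seen.
print(model.predict([[1000.0, 3], [9.0, 12]]))   # e.g. [1 0]
print(model.predict_proba([[1000.0, 3]]))        # class probabilities
```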

Blockchain is revolutionizing industries by increasing transparency, improving security, and streamlining processes.

One industry that has been greatly impacted by blockchain is supply chain management. With blockchain, companies can track the movement of goods from the source to the end consumer in real time.

This ensures transparency, reduces fraud, and enhances trust between parties. Additionally, blockchain can provide an immutable record of product origins, certifications, and quality, making it easier to identify and address issues related to counterfeit goods or unsafe products.

Another industry that has embraced blockchain is finance. Blockchain technology enables secure and efficient peer-to-peer transactions, eliminating the need for intermediaries such as banks.

This can significantly reduce transaction costs and processing times. Moreover, blockchain can facilitate cross-border transactions by eliminating the need for multiple currency conversions and reducing the risk of fraud.

It also has the potential to democratize access to financial services, especially in underserved regions where traditional banking is limited.

Machine learning is transforming industries by automating processes, optimizing decision-making, and uncovering valuable insights.

In the healthcare industry, machine learning algorithms are being used to analyze medical records, identify patterns, and make accurate diagnoses.

This can lead to earlier detection of diseases, personalized treatment plans, and improved patient outcomes. Machine learning can also help predict epidemics, optimize hospital resource allocation, and improve the efficiency of clinical trials.

In the finance industry, machine learning is revolutionizing fraud detection. Traditional rule-based systems are often insufficient to detect complex fraudulent patterns.

Machine learning algorithms can analyze vast amounts of data in real-time, identify anomalies, and detect fraudulent activities with high accuracy. This can save financial institutions billions of dollars and protect consumers from identity theft and financial fraud.

Machine learning is revolutionizing the healthcare industry by enabling personalized medicine, improving diagnostics, and enhancing patient care. By analyzing large datasets, machine learning algorithms can identify patterns and predict disease outcomes.

This can help physicians make more accurate diagnoses and develop personalized treatment plans based on a patients unique characteristics. Machine learning can also assist in drug discovery by analyzing molecular structures and predicting the efficacy of potential drugs.

Furthermore, machine learning can improve patient care by analyzing electronic health records and identifying potential risks or complications.

This can help healthcare providers intervene early and prevent adverse events. Machine learning algorithms can also analyze medical images, such as X-rays or MRIs, and detect abnormalities or signs of disease that may be missed by human radiologists.

This can lead to earlier detection of diseases, more accurate diagnoses, and improved patient outcomes.

Machine learning is revolutionizing the finance industry by automating processes, improving risk management, and enhancing customer experience. One area where machine learning has had a significant impact is fraud detection.

Machine learning is also transforming credit scoring. Traditional credit scoring models rely on limited data points and may not accurately assess a borrowers creditworthiness.

Machine learning algorithms can analyze a wide range of data, including social media activity, online behavior, and transaction history, to provide more accurate credit scores.

This can help financial institutions make better lending decisions and expand access to credit for underserved populations.

Blockchain and machine learning are revolutionizing supply chain management by enhancing transparency, improving efficiency, and reducing costs.

Machine learning, on the other hand, can optimize supply chain processes by analyzing vast amounts of data and identifying patterns. It can predict demand, optimize inventory management, and improve logistics planning.

Machine learning algorithms can also help identify potential bottlenecks or inefficiencies in the supply chain and suggest ways to improve them.

This can lead to cost savings, faster delivery times, and improved customer satisfaction.

The future of blockchain and machine learning is promising, with numerous industries expected to benefit from these technologies. In the healthcare industry, blockchain has the potential to improve patient data security and interoperability.

It can enable patients to have full control over their medical records and decide who can access them. Machine learning, on the other hand, can assist in drug discovery, personalized medicine, and predicting disease outbreaks.

In the finance industry, blockchain can revolutionize cross-border payments, reduce transaction costs, and enhance financial inclusion.

Machine learning can improve risk management, fraud detection, and customer experience. It can also facilitate the development of robo-advisory services, personalized financial recommendations, and automated trading systems.

In supply chain management, blockchain and machine learning can enhance transparency, traceability, and efficiency.

They can reduce fraud, improve inventory management, and optimize logistics planning. Moreover, blockchain can enable the development of decentralized marketplaces and peer-to-peer transactions.

Blockchain and machine learning are reshaping industries by increasing transparency, improving efficiency, and enhancing security.

These technologies have the potential to revolutionize various sectors, from healthcare to finance and supply chain management.

As we embrace the future, it is crucial for businesses and organizations to understand the potential of blockchain and machine learning and leverage their benefits.

By embracing these technologies, we can create a more secure, efficient, and interconnected world.

Thank you for reading!!


NSF Funds Groundbreaking Research Project to ‘Democratize’ AI – Northeastern University

Groundbreaking research by Northeastern University will investigate how generative AI works and provide industry and the scientific community with unprecedented access to the inner workings of large language models.

Backed by a $9 million grant from the National Science Foundation, Northeastern will lead the National Deep Inference Fabric that will unlock the inner workings of large language models in the field of AI.

The project will create a computational infrastructure that will equip the scientific community with deep inferencing tools in order to develop innovative solutions across fields. An infrastructure with this capability does not currently exist.

At a fundamental level, large language models such as OpenAI's ChatGPT or Google's Gemini are considered to be black boxes, which limits both researchers and companies across multiple sectors in leveraging large-scale AI.

Sethuraman Panchanathan, director of the NSF, says the impact of NDIF will be far-reaching.

"Chatbots have transformed society's relationship with AI, but how they operate is yet to be fully understood," Panchanathan says. "With NDIF, U.S. researchers will be able to peer inside the black box of large language models, gaining new insights into how they operate and greater awareness of their potential impacts on society."

Even the sharpest minds in artificial intelligence are still trying to wrap their heads around how these and other neural network-based tools reason and make decisions, explains David Bau, a computer science professor at Northeastern and the lead principal investigator for NDIF.

"We fundamentally don't understand how these systems work, what they learned from the data, what their internal algorithms are," Bau says. "I consider it one of the greatest mysteries facing scientists today: what is the basis for synthetic cognition?"

David Madigan, Northeastern's provost and senior vice president for academic affairs, says the project will help address one of the most pressing socio-technological problems of our time: how does AI work?

"Progress toward solving this problem is clearly necessary before we can unlock the massive potential for AI to do good in a safe and trustworthy way," Madigan says.

In addition to establishing an infrastructure that will open up the inner workings of these AI models, NDIF aims to democratize AI, expanding access to large language models.

Northeastern will be building an open software library of neural network tools that will enable researchers to conduct their experiments without having to bring their own resources, and sets of educational materials to teach them how to use NDIF.

The project will build an AI-enabled workforce by training scientists and students to serve as networks of experts, who will train users across disciplines.

"There will be online and in-person educational workshops that we will be running, and we're going to do this geographically dispersed at many locations, taking advantage of Northeastern's physical presence in a lot of parts of the country," Bau says.

Research emerging from the fabric could have worldwide implications outside of science and academia, Bau explains. It could help demystify the underlying mechanisms of how these systems work to policymakers, creatives and others.

"The goal of understanding how these systems work is to equip humanity with a better understanding for how we could effectively use these systems," Bau says. "What are their capabilities? What are their limitations? What are their biases? What are the potential safety issues we might face by using them?"

Large language models like ChatGPT and Google's Gemini are trained on huge amounts of data using deep learning techniques. Underlying these techniques are neural networks, synthetic processes that loosely mimic the activity of a human brain and enable these chatbots to make decisions.

But when you use these services through a web browser or an app, you are interacting with them in a way that obscures these processes, Bau says.

"They give you the answers, but they don't give you any insights as to what computation has happened in the middle," Bau says. "Those computations are locked up inside the computer, and for efficiency reasons, they're not exposed to the outside world. And so, the large commercial players are creating systems to run AIs in deployment, but they're not suitable for answering the scientific questions of how they actually work."

At NDIF, researchers will be able to take a deeper look at the neural pathways these chatbots make, Bau says, allowing them to see what's going on under the hood while these AI models actively respond to prompts and questions.

Researchers won't have direct access to OpenAI's ChatGPT or Google's Gemini, as the companies haven't opened up their models for outside research. They will instead be able to access open-source AI models from companies such as Mistral AI and Meta.

"What we're trying to do with NDIF is the equivalent of running an AI with its head stuck in an MRI machine, except the difference is the MRI is in full resolution. We can read every single neuron at every single moment," Bau says.
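NDIF's software is not detailed in the article, but the general idea of reading every neuron during inference can be illustrated with forward hooks in PyTorch, which capture the intermediate activations that a hosted chatbot normally hides. This is a generic sketch on a toy network, not NDIF's implementation:

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model; in practice this would be a block of an
# open-weights model such as those from Mistral AI or Meta.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # "read the neurons" at this layer
    return hook

for name, layer in model.named_children():
    layer.register_forward_hook(record(name))

with torch.no_grad():
    model(torch.randn(1, 16))                # run a "prompt" through the model

for name, act in activations.items():
    print(name, tuple(act.shape))            # every intermediate state is visible
```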

But how are they doing this?

Such an operation requires significant computational power on the hardware front. As part of the undertaking, Northeastern has teamed up with the University of Illinois Urbana-Champaign, which is building data centers equipped with state-of-the-art graphics processing units (GPUs) at the National Center for Supercomputing Applications. NDIF will leverage the resources of the NCSA DeltaAI project.

NDIF will partner with New America's Public Interest Technology University Network, a consortium of 63 universities and colleges, to ensure that the new NDIF research capabilities advance interdisciplinary research in the public interest.

Northeastern is building the software layer of the project, Bau says.

"The software layer is the thing that enables the scientists to customize these experiments and to share these very large neural networks that are running on this very fancy hardware," he says.

Northeastern professors Jonathan Bell, Carla Brodley, Bryon Wallace and Arjun Guha are co-PIs on the initiative.

Guha explains the barriers that have hindered research into the inner workings of large generative AI models up to now.

"Conducting research to crack open large neural networks poses significant engineering challenges," he says. "First of all, large AI models require specialized hardware to run, which puts the cost out of reach of most labs. Second, scientific experiments that open up models require running the networks in ways that are very different from standard commercial operations. The infrastructure for conducting science on large-scale AI does not exist today."

NDIF will have implications beyond the scientific community in academia. The social sciences and humanities, as well as neuroscience, medicine and patient care can benefit from the project.

"Understanding how large networks work, and especially what information informs their outputs, is critical if we are going to use such systems to inform patient care," Wallace says.

NDIF will also prioritize the ethical use of AI with a focus on social responsibility and transparency. The project will include collaboration with public interest technology organizations.


Are we rushing ahead with AI? – Chemistry World

The emergence of self-driving labs and automated experimentation has brought with it the promise of increased rates of productivity and discovery in chemistry beyond what humans can achieve alone. But the black box nature of AI means we cannot see how or why deep learning systems make their decisions, making it difficult to know how it can best be used to optimise scientific research or if the outcomes can ever be trusted.

In November 2023, a paper was published in Nature reporting the discovery of over 40 novel materials using an autonomous laboratory guided by AI. However, researchers were quick to question the autonomous labs results. A preprint followed in January that reported that there were systematic errors all the way through owing to issues with both the computational and experimental work.

One of the authors of the critique, Robert Palgrave, a materials chemist at University College London, UK, said that although AI had made big advances, there was "a bit of a tendency to feel that AI had to change everything right now" and that "actually we should not expect things to change overnight".

Milad Abolhasani, who leads a research group that uses autonomous robotic experimentation to study flow chemistry strategies at North Carolina State University in the US, says the hype has taken over somewhat when it comes to AI and it is time to pause. "As humans we are great at envisioning what the future is going to look like and what are the possibilities, but you have to move step by step and make sure things are done correctly."

For many, the draw of AI comes from a need to enhance productivity. "Whether that's reviewing the literature faster, running experiments faster, generating data faster, the productivity outcomes of AI are very appealing," explains Lisa Messeri, an anthropologist at Yale University in the US. "And that has to do with institutional pressures to publish, to get your research done so you can do all the other things that you have to do."

Messeri says AI also holds the tantalising prospect of "the promise of objectivity": the idea that scientists are always in pursuit of tools that they feel are robust and that limit human biases and interventions. While AI could indeed provide these benefits for some research, there are risks associated with relying too heavily on it, and a need to remember the importance of including a diverse set of thinkers in the production of scientific knowledge. And, of course, AI models are only as good as the data that trains them.

There's a rush for everyone to start doing the kind of science that's well suited for AI tools

Molly Crockett, Princeton University

For Messeri and her colleague Molly Crockett, a neuroscientist at Princeton University in the US, who co-wrote a perspective on the topic in Nature, the risks fall into three categories, all of which arise from the "illusion of understanding", a phenomenon well documented in the cognitive sciences relating to our tendency to overestimate how well we understand something.

"The first risk arises when an individual scientist is trying to solve a problem using an AI tool and, because the AI tool performs well, the scientist mistakenly believes that they understand the world better than they actually do," explains Crockett.

The other two risks concern scientists as a collective and the inadvertent creation of a scientific monoculture. "If you plant only one type of crop in a monoculture, this is very efficient and productive, but it also makes the crop much more vulnerable to disease, to pests," explains Crockett.

"We're worried about two kinds of monocultures," she continues. "The first is a monoculture of knowing. We can use lots of different approaches to solve problems in science, and AI is one approach, but because of the productivity gains promised by AI tools, there's a rush for everyone to start doing the kind of science that's well suited for AI tools [and the] questions that are less well suited for AI tools get neglected."

They are also concerned about the development of a monoculture of knowers where, instead of drawing on the knowledge of an entire team with disciplinary and cognitive diversity, only AI tools are used. "We know that it's really beneficial to have interdisciplinary teams if you're solving a complicated problem," says Crockett.

"It's great if you have people on your team who come from a lot of different backgrounds or have different skill sets. In an era where we are increasingly avoiding human interactions in favour of digital interactions there may be a temptation to replace collaborators with AI tools, [but] that is a really dangerous practice because it's precisely in those cases where you lack expertise that you will be less able to determine whether the outputs returned by an AI are actually valid."

The question is, how can we tailor AI-driven tools such as self-driving labs to address specific research questions? Abolhasani and his colleague at NC State University, Amanda Volk, recently defined seven performance metrics to help unleash the power of self-driving labs, something he was shocked to find did not already exist in the published literature.

"The metrics are designed based on the notion that we want the machine-learning agent of self-driving labs to be as powerful as possible to help us make more informed decisions," he says. However, if the data the lab is trained on is not of a high enough quality, the decisions made by the lab are not going to be helpful, he adds.

A lot of self-driving labs do not even mention what the total chemical consumption was per experiment

Milad Abolhasani, North Carolina State University

The performance metrics they describe include degree of autonomy, which covers the level of influence a human has over the system; operational lifetime; throughput; experimental precision; material usage; accessible parameter space, which represents the range of experimental parameters that can be accessed; and optimisation efficiency, or the overall system performance.
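To see how such reporting might look in practice, here is a hypothetical record structure capturing the seven metrics. The field names, types and units are illustrative assumptions, not the schema proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class SelfDrivingLabReport:
    """Hypothetical report card for the seven metrics proposed by Abolhasani
    and Volk; fields and units are illustrative, not the paper's schema."""
    degree_of_autonomy: str           # level of human influence over the system
    operational_lifetime_h: float     # hours of operation before refill/breakdown
    throughput_per_h: float           # experiments completed per hour
    experimental_precision: float     # e.g. relative std dev across replicates
    material_usage_ml: float          # chemical consumption per experiment
    accessible_parameter_space: str   # range of reachable experimental parameters
    optimisation_efficiency: float    # overall system performance measure

report = SelfDrivingLabReport("semi-autonomous", 120.0, 4.5, 0.02, 1.3,
                              "T 20-80 C, flow 0.1-2 mL/min", 0.87)
print(report)
```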

"We were surprised when we did the literature search that 95% of papers on self-driving labs did not report how long they could run the platform before it broke down [or] before they had to refill something," he explains. "I would like to know: how many experiments can that self-driving lab do per hour, per day? What is the precision of running the experiments? How much can I trust the data you're producing?"

"A lot of self-driving labs do not even mention what the total chemical consumption [was] per experiment and per optimisation that they did," he adds.

Abolhasani and Volk say that by clearly reporting these metrics, research can be guided towards more productive and promising technological areas, and that without a thorough evaluation of self-driving labs, the field will lack the necessary information for guiding future research.

However, optimising the role AI can play within intricate fields such as synthetic chemistry will require more than improved categorisation and larger quantities of data. In a recent article in the Journal of the American Chemical Society, digital chemist Felix Strieth-Kalthoff, alongside such AI chemistry pioneers as Alán Aspuru-Guzik, Frank Glorius and Bartosz Grzybowski, argues that algorithm designers need to form closer ties with synthetic chemists to draw on their specialist knowledge.

They argue that such a collaboration would be mutually beneficial, enabling synthetic chemists to develop AI models for synthetic problems of particular interest while transplanting AI know-how into the synthetic community.

For Abolhasani, the success of autonomous experimentation in chemistry will ultimately come down to trust. "Autonomous experimentation is a tool that can help scientists, [but] in order to do that the hardware needs to be reproducible and trustworthy," he explains.

It's a must for the community in order to expand the user base

Milad Abolhasani, North Carolina State University

And to build this trust, entry barriers need to be lowered to give more chemists the opportunity to use self-driving labs in their work. "It has to be as intuitive as possible so that chemists with no expertise in autonomous experimentation can interact with self-driving labs," he explains.

In addition, he says, the best self-driving labs are currently very expensive, so lower-cost options need to be developed while still maintaining their reliability and reproducibility. "It's a must for the community in order to expand the user base," he says.

"Once [self-driving labs] become a mainstream tool in chemistry [they] can help us digitise chemistry and material science and provide access to high-quality experimental data, but the power of that expert data is when the data is reproducible, reliable and is standardised for everybody to use."

Messeri believes AI will be most useful when it is seen only as an augmentation to humans, rather than a replacement. To do this, she says, the community will need to be much more particular about when and where it is used. "I am very confident that creative scientists are going to be able to come up with cases in which this can be responsibly and productively implemented," she adds.

Crockett suggests scientists consider AI tools as another approach to analysing data, one that is different to a human mind. "As long as we respect that, then we can strengthen our approach by including these tools as another diverse node in the network," she says.

Importantly, Crockett says this moment could also serve as a wake-up call about the institutional pressures that may be driving scientists towards AI in pursuit of productivity without necessarily more understanding. But this problem is much bigger than any individual and requires widespread institutional acceptance before any solution can be found.


3 Stocks Poised to Profit from the Rise of Artificial Intelligence – InvestorPlace

While artificial intelligence may be all the rage, the usual suspects in the space have largely flourished handsomely, which strengthens the case for underappreciated AI stocks to buy.

Rather than simply focusing on technology firms that have a direct link to digital intelligence, it's useful to consider companies, whether they're tech enterprises or not, that are using AI in their businesses. Yes, the semiconductor space is exciting, but AI is so much more than that.

These less-appreciated ideas just might surprise Wall Street. With that, below are intriguing AI stocks to buy that don't always get the spotlight.


At first glance, agricultural equipment specialist Deere (NYSE:DE) doesn't seem a particularly relevant idea for AI stocks to buy. Technically, you'd be right. After all, this is an enterprise that has roots going back to 1837. That said, an old dog can still learn new tricks.

With so much talk about autonomous mobility, Deere took a page out of the playbook and has invested in an automated tractor. Featuring 360-degree cameras, a high-speed processor and a neural network that sorts through images and determines which objects are safe to drive over or not, Deere's invention is the perfect marriage between a traditional industry and innovative methodologies.

Perhaps most importantly, Deere is meeting a critical need. Unsurprisingly, fewer young people are interested in an agriculture-oriented career. Therefore, these automated tractors are entering the market at the right time.

Lastly, DE trades at a modest price/earnings-to-growth (PEG) ratio of 0.54X. That's lower than the sector median of 0.82X. It's a little bit out there, but Deere is one of the underappreciated AI stocks to buy.


While it's just my opinion, grocery store giant Kroger (NYSE:KR) sells itself. No, the grocery industry is hardly the most exciting arena available. At the same time, people have to eat. Further, the company benefits from the trade-down effect. If economic conditions become even more challenging, people will eschew eating out for cooking in. Overall, that would be a huge plus for KR stock.

With that baseline bullish thesis out of the way, Kroger is also an enticing idea for hidden-gem AI stocks to buy. Earlier this year, the company announced that it will use AI technology for content management and product descriptions for marketplace sellers. Last year, Kroger's head executive mentioned AI eight times during an earnings call.

Fundamentally, Kroger should benefit from revenue predictability. While the consensus sales target calls for a 1% decline in the current fiscal year, the high-side estimate is aiming for $152.74 billion. Last year, the print came out to just over $150 billion. With shares trading at only 0.27X trailing-year sales, KR could be a steal.


Billed as a platform for live online learning, Nerdy (NYSE:NRDY) represents a legitimate tech play for AI stocks to buy. Indeed, its corporate profile states that its purpose-built proprietary platform leverages myriad innovations including AI to connect students, users and parents/guardians to tutors, instructors and subject matter experts.

Fundamentally, Nerdy should benefit from two key factors. Number one, the Covid-19 crisis disrupted education, particularly for young students. That could have a cascading effect down the line, making it all the more vital to play catchup. Nerdy can help in that department.

Number two, U.S. students have continued to fall behind in international tests. It's imperative for social growth and stability for students to get caught up, especially in the digital age. Therefore, NRDY is especially attractive.

Finally, analysts anticipate fiscal 2024 revenue to hit $237.81 million, up 23% from last year's tally of $193.4 million. And in fiscal 2025, experts are projecting sales to rise to $293.17 million. That's up more than 23% from forecasted 2024 sales. Therefore, it's one of the top underappreciated AI stocks to buy.

On the date of publication, Josh Enomoto did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

A former senior business analyst for Sony Electronics, Josh Enomoto has helped broker major contracts with Fortune Global 500 companies. Over the past several years, he has delivered unique, critical insights for the investment markets, as well as various other industries including legal, construction management, and healthcare. Tweet him at @EnomotoMedia.


Who in Europe is investing the most in artificial intelligence? – Euronews

Europe faces challenges in the adoption of artificial intelligence, including regulatory barriers and a shortage of skilled professionals. The Next Generation EU has committed €4.4 billion to AI initiatives, with two Southern European countries leading the way.

Artificial Intelligence (AI) is reshaping the global economic landscape, emerging as a pivotal force in the digital domain and driving innovation across various sectors.

By 2030, AI is expected to have injected more than €11 trillion into the global economy, according to industry forecasts. It's anticipated that AI and robotics will jointly spark the creation of around 60 million new jobs globally by 2025, underscoring the critical importance of digitalisation in propelling economic growth.

In a concerted effort to match global tech leaders, the European Union is intensifying its push to integrate and advance AI, with a particular emphasis on bolstering digital infrastructure and capabilities across its member states.

However, despite these optimistic projections, challenges persist.

Velina Lilyanova, a researcher at the European Parliamentary Research Service, has highlighted Europe's slow AI adoption in critical sectors such as healthcare and public administration.

"Europe has a weakness in this respect," she claims in her recent study entitled Investment in Artificial Intelligence in the National Recovery and Resilience Plans.

Lilyanova points out that Europe faces several challenges that hinder broader AI uptake, including regulatory barriers, trust issues, a shortage of digital skills, and low levels of company digitalisation.

"Member States need to address these barriers to facilitate widespread uptake," she stated, emphasising the need for regulatory reforms, enhancing digital skills, and boosting company digitalisation.

The European Commission has laid out ambitious goals for 2030: aiming for 90% of EU small and medium-sized enterprises (SMEs) to achieve at least a basic level of digital intensity and for 75% of EU companies to adopt technologies like cloud computing, AI, and big data.

Investment strategies in AI vary significantly among EU member states, ranging from direct research and development (R&D) funding to indirect support via business and public service digitalisation, as detailed by Lilyanova.

Spain's National Recovery and Resilience Plan (NRRP) specifically allocates funds to strengthen AI development, aiming to position the country as a leader in AI scientific excellence and innovation. The plan focuses on developing AI tools and applications in the Spanish language to enhance productivity in the private sector and efficiency in public administration.

Italy's Strategic Programme on AI (2022-2024), aligning with the broader EU AI strategy, aims to make Italy a global hub for AI research and innovation by enhancing skills and attracting leading AI talents.

Denmark is leveraging its strong R&D ecosystem and high digital intensity among SMEs to enhance its national digital strategy, incorporating AI to improve public administration through reforms.

The European Commission's Joint Research Centre has conducted an exhaustive analysis of AI-related funding across EU countries.

According to a 2023 study by Papazoglu et al., the Next Generation EU (NGEU) instrument and its Recovery and Resilience Facility (RRF) account for 70% of total investments in digital transformation.

Specifically, of the €116.8 billion allocated by the NGEU RRF for the "Digital Decade", €4.376 billion is earmarked for AI projects.

A breakdown of national investments reveals Italy as the frontrunner, allocating €1.895 billion to AI-related projects. Spain follows with €1.2 billion. Together, the two Southern European nations represent 71% of the total investments allocated to AI-related projects within the NGEU RRF.
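As a quick check, the 71% share follows directly from those two allocations against the €4.376 billion AI total (figures in € billions):

```latex
\[
\frac{1.895 + 1.2}{4.376} = \frac{3.095}{4.376} \approx 0.707 \approx 71\%
\]
```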

Denmark leads on a relative basis, dedicating 8.7% of its Digital RRF budget to AI projects, followed by Spain at 6.4% and Ireland at 5.2%.

European countries are allocating an average of nearly 3% of their digitalisation funds to AI projects.

Sweden, the Netherlands, Belgium, and Austria are at the lower end, committing less than 1% of their RRF budgets to AI-related projects.


EY survey reveals artificial intelligence is creating new hiring needs, while also making it more challenging to source … – PR Newswire

NEW YORK, April 29, 2024 /PRNewswire/ -- Ernst & Young LLP (EY US) today announced the release of its latest Technology Pulse Poll, which examines the impact of artificial intelligence (AI) on the future of work, from integration to talent and culture. The poll, which was conducted in March 2024 and surveyed more than 250 leaders in the technology industry, highlights the ongoing push and pull around AI in the workplace.

The poll finds that 50% of business leaders anticipate a combination of both layoffs and hiring over the next six months as a direct result of AI adoption. Yet, even with hiring plans in place, three out of five technology leaders (61%) say that emerging technology has made it more challenging for their company to source top technology talent.

"One thing is certain: Companies are reshaping their workforce to be more AI savvy. With this transition, we can anticipate a continuous cycle of strategic workforce realignment, characterized by simultaneous layoffs and hiring, and not necessarily in equal volumes," says Vamsi Duvvuri, EY Technology, Media and Telecommunications AI Leader. "But it's not all doom and gloom. Employees and companies alike continue to show enthusiasm around AI, specifically when it comes to opportunities to scale and compete more effectively in the marketplace."

According to the poll, 72% of respondents say their employees are using AI at least daily in the workplace, with top use cases being coding and software development (51%), data analysis (51%), and internal and external communication (47%). Though many leaders report concerns about AI and believe that more regulation is needed around this technology, most technology business leaders (85%) believe that emerging technology has had a positive impact on their workplace culture.

"AI is transforming the way we work, creating new opportunities for innovation and growth, while simultaneously posing unprecedented challenges, especially when it comes to talent," saysKen Englund, EY Americas Technology, Media and Telecommunications Leader. "Our recent pulse poll demonstrates that technology companies generally have a positive sentiment toward the next productivity wave. There's a lot of excitement at these companies in terms of how they will successfully apply their own industry tools to themselves."

The EY survey also found that:

Methodology

EY US commissioned Atomik Research to conduct an online survey of 255 business leaders in the technology industry throughout the United States. The margin of error is +/- 6 percentage points with a confidence level of 95%. Fieldwork took place between March 8 and March 16, 2024.

About EY

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

EY refers to the global organization, and may refer to one or more, of the member firms of Ernst & Young Global Limited, each of which is a separate legal entity. Ernst & Young Global Limited, a UK company limited by guarantee, does not provide services to clients. Information about how EY collects and uses personal data and a description of the rights individuals have under data protection legislation are available via ey.com/privacy. EY member firms do not practice law where prohibited by local laws. For more information about our organization, please visit ey.com.

SOURCE EY


Meet the Newest Artificial Intelligence (AI) Stock in the S&P 500. It Soared 1700% in 2 Years, and Wall Street Says the … – sharewise

The S&P 500 (SNPINDEX: ^GSPC) is the most popular benchmark for the U.S. stock market. The index includes 500 large-cap companies, currently defined as companies worth at least $18 billion, and it covers about 80% of domestic equities by market capitalization. To be considered for inclusion, a company must also be profitable, and its stock must be sufficiently liquid.

Super Micro Computer (NASDAQ: SMCI) became the newest artificial intelligence (AI) company in the S&P 500 when it joined the index in March 2024, little more than a year after it joined the S&P MidCap 400 in December 2022. Meanwhile, shares soared over 1,700% over the last two years as strong demand for AI computing products fueled rapid sales growth.

The stock still carries a consensus rating of "buy" among Wall Street analysts, and the median price target of $965 per share implies 26% upside from its current price of $762 per share. Here's what investors should know about Supermicro.
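The quoted upside is simply the ratio of the median price target to the current share price:

```latex
\[
\frac{\$965}{\$762} - 1 \approx 0.266,\ \text{i.e. roughly } 26\%\ \text{upside}
\]
```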


Source: Fool.com


Why Governments Around The World Fear DeFi? – Crypto Times

With the world around us changing rapidly, the calls for decentralization in finance have only grown louder, making governments across the globe uncomfortable and jittery.

The reasons are quite clear: a near century-old federal system is reluctant to relinquish power.

Over the last decade, decentralization has emerged to be a disruptor in the traditional finance sector. The surge in demand for cryptocurrencies and Decentralized Autonomous Organizations (DAOs) has changed every single dogma about money and how markets operate.

Also, as an unintended consequence of this emerging technology, nefarious elements have used DeFi for various financial crimes, drug and human trafficking as well as terrorist activities, in the past few years.

This has caught the attention of governments across the world, who now use it as a smoking gun to downplay the potential of decentralization. The veil of anonymity offered by DeFi has upset those in power, who believe it could lead, or has already led, to a parallel financial structure.

There have been a few remarkable recent events that clearly show how governments are clamping down upon the DeFi sector.

Powerful countries like China and Qatar have banned trading of cryptocurrencies. Japan and Belgium charge over 50% tax on crypto gains.

Recent convictions of crypto moguls Sam Bankman-Fried of FTX and Changpeng Zhao of Binance have sent shockwaves through the crypto community.

United States Senator Elizabeth Warren has been vocally anti-crypto in her election campaigning, asking for stricter provisions. Earlier this year, she introduced the Digital Asset Anti-Money Laundering Act in the Senate, which contains strict provisions limiting the fundamental benefits of DeFi markets.

Before we delve deep into this current standoff between governments and users of peer-to-peer money transfer systems, it is important to offer a disclaimer.

We cannot imagine a world without public administration, despite the numerous inherent flaws and errors within governments. In a perfect world we might not need governing bodies, but as of now, the role of governments in implementing laws and maintaining harmony is paramount.

However, there are some areas, as outlined in this piece, where governments have fared poorly, doing a disservice to their citizens.

Steep, unfair taxation policies and opaque monetary systems fall under this category. As the world discovered decentralized finance back in 2009-10 and readily welcomed it, governments grew more jittery at the idea of no third-party interference in financial transactions.

According to a few scholars, the definition of decentralization is:

Decentralization refers to a systematic effort to delegate to the lowest levels all authority except that which can only be exercised at central points.

Decentralization means the division of a group of functions and activities into relatively autonomous units, with overall authority and responsibility for their operation delegated to the head of each unit.

The simplest understanding of Decentralization is that it is the process of transferring authority from a central government or body to a sub-national entity.

In modern times, the concept of decentralization became popular lingo after a boom in the decentralized finance (DeFi) sector. Thanks to cryptocurrencies, the DeFi sector provides an alternative to the traditional financial system by offering most of the services found within it.

Let's try to understand this through an example.

Assume a finance system A, where a person wants to borrow some money in the centralized system. First, they have to visit the local bank branch, which will carry out the required verifications. After that, the bank will reach out to the central bank or other financial institutions for approval. On confirmation, the bank will grant the loan to the borrower.

This process is time-consuming, complex and tedious; however, it guarantees verification through background checks and bureaucratic steps.

Now, assume a finance system B, where a person can directly borrow money from a lender in just a few minutes, without the intervention of any third party, through a peer-to-peer system. The transparency of this process is assured through blockchain technology.

Finance system B is faster, more straightforward and transparent.
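A toy sketch of system B's disintermediation idea follows; the participants, loan terms and the flat list standing in for a blockchain are illustrative assumptions only:

```python
# Toy peer-to-peer loan ledger: lender and borrower transact directly, and
# the agreement is appended to a shared, openly inspectable ledger.
# A real DeFi protocol would use smart contracts on a blockchain instead.

ledger: list[dict] = []

def lend(lender: str, borrower: str, amount: float, rate: float) -> dict:
    loan = {"lender": lender, "borrower": borrower,
            "amount": amount, "rate": rate, "repaid": False}
    ledger.append(loan)   # no bank or clearing house in the loop
    return loan

loan = lend("alice", "bob", 100.0, 0.05)
print(loan)               # settled in minutes, visible to all participants
```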

The real essence of decentralization lies in autonomy, security and transparency.

To boost the concept of decentralization, new technologies like blockchain have played a pivotal role. This distributed ledger technology (DLT) works on the motto of "Don't trust, verify." The phrase eventually became the essence of the decentralization model.

There is an ongoing power struggle between centralized entities and those seeking decentralization. While the decentralized sector is on the rise, governments around the world aren't exactly pleased with the idea, and they have their own set of reasons.

The main reason is that governments don't want to give up their power and authority. The prospect of losing control over the population's finances is giving authoritarian figures, from so-called democracies to monarchies, sleepless nights.

Currently, governments and regulatory agencies are collaboratively monitoring every financial service from bank accounts to transactions.

The rationale behind such an apprehension is that the governments believe they will no longer be able to trace dirty money since DeFi also allows anonymity.

The implementation of decentralized systems could diminish their control over economic activities, especially cutting down taxes and surcharges. Decentralized finance (DeFi) operates on the basis of disintermediation, meaning that transactions occur without the need for traditional financial intermediaries, such as banks or payment processors. Such a radical shift poses a direct challenge to the centralized systems that governments rely upon for surveillance, regulation and enforcement.

Governments have also expressed concerns regarding the risks associated with decentralized finance. These include issues like fraud, money laundering, and the financing of terrorism.

The anonymous nature of transactions in many decentralized platforms complicates the ability of authorities to track the flow of money and enforce laws. Furthermore, the lack of a centralized authority to oversee and intervene in transactions could lead to increased financial volatility and consumer risks.

Another significant issue is the impact of decentralization on a government's ability to implement monetary policy. Central banks control the monetary supply, interest rates, and inflation, and these tools are critical in managing a country's economic activity.

With the rise of cryptocurrencies and DeFi platforms, individuals might move away from national currencies. This can destabilize traditional monetary systems and challenge the effectiveness of fiscal policies.

The technological advancements that enable decentralization also present challenges. Blockchain, the underlying technology for most cryptocurrencies and DeFi applications, is complex and requires significant computational resources.

Moreover, the regulatory frameworks currently in place are not well-suited to address the unique characteristics of decentralized systems, which creates a gap that might be exploited by malicious actors.

Beyond the financial and regulatory implications, decentralization also raises social and economic concerns. The shift towards decentralized platforms could lead to greater economic inequality.

While proponents argue that decentralization offers greater access to financial services, the reality is that only those with sufficient technological knowledge and access to digital infrastructure can fully benefit. This digital divide could exacerbate existing inequalities, as those without access are left further behind.

While critics of the decentralization ecosystem debate much about its negative side, the world has already witnessed its value in various ways.

For instance, Switzerland has implemented decentralized values in its ecosystem in various innovative ways. This includes embracing blockchain technology and creating a supportive environment for decentralized finance (DeFi) and digital identity systems.

Switzerland has leveraged its decentralized federal system to encourage local economic development in smaller towns and regions, such as Monthey and Solothurn. This approach has helped to foster a collaborative culture that drives innovation and competitive economic ecosystems.

These ecosystems also include productive migrants and multinational companies that contribute to the local knowledge base and enhance the competitiveness of small and medium-sized enterprises (SMEs).

The growth of DeFi in Switzerland shows a commitment to decentralizing financial services. DeFi systems operate on blockchain technology, allowing financial transactions and services to be executed via smart contracts without central authorities.

This includes not only typical financial services but also more complex operations, like mortgages and loans, which are managed transparently and efficiently by code rather than by traditional financial intermediaries such as banks.

The Swiss digital identity ecosystem (e-ID) aims to provide a secure and decentralized way of managing identities online.

The government's approach to e-ID emphasizes user control over personal data and minimal data flow. This aligns with decentralized principles like privacy by design and data minimization. The system supports the issuance of digital credentials, enhancing privacy and data sovereignty for Swiss citizens.

These initiatives reflect a broader commitment to utilizing decentralized technologies to enhance economic resilience, promote innovation, and protect individual privacy across various sectors in Switzerland.

So the question remains the same: is decentralization really that bad? Here is an answer.

Decentralization is not inherently bad; it simply changes how things are done. Instead of banks and governments controlling everything about money, decentralization lets individuals have more power and make decisions directly. This can make things like borrowing money faster and more straightforward.

However, governments are cautious about decentralization because it makes it harder for them to manage the economy.

While decentralization can make financial systems quicker and give people more control, it also brings challenges that need to be managed carefully to make sure it is safe and fair for everyone.

As interest in decentralization grows, people are seeking more privacy, efficiency, and control over their finances. This shift challenges governments to find a balance between embracing the benefits of decentralization and their responsibilities to enforce crypto regulations.

In short, this issue is not just about technology or money; it's deeply about power: who has it, how it's used, and who benefits from it.

As the field evolves, it is crucial for governments and decentralized groups to talk and create rules that promote innovation while ensuring public safety and social stability. The future of finance will likely depend on keeping a good balance between freedom and regulation.
