
Palantir Stock vs. Microsoft Stock: Which Is the Best Artificial Intelligence (AI) Stock to Buy? – The Motley Fool

Palantir might be a smaller company, but that doesn't automatically make Microsoft the better investment.

Fool.com contributor Parkev Tatevosian compares Palantir Technologies (PLTR -2.60%) to Microsoft (MSFT -0.66%) to determine the better stock to buy.

*Stock prices used were the afternoon prices of April 14, 2024. The video was published on April 16, 2024.

Parkev Tatevosian, CFA has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Microsoft and Palantir Technologies. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy. Parkev Tatevosian is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through his link, he will earn some extra money that supports his channel. His opinions remain his own and are unaffected by The Motley Fool.

Read more from the original source:
Palantir Stock vs. Microsoft Stock: Which Is the Best Artificial Intelligence (AI) Stock to Buy? - The Motley Fool

Read More..

These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today – Yahoo Finance

It's been a wild day for cryptocurrency investors, with a number of top tokens seeing outsize volatility in today's session. For AI cryptos, these moves have been even more exaggerated.

As of 2:15 p.m. ET on Monday, The Graph (CRYPTO: GRT), Fetch.ai (CRYPTO: FET), and SingularityNET (CRYPTO: AGIX) are still up meaningfully, surging 5.6%, 2%, and 1.8%, respectively, over the past 24 hours. However, many of these tokens have continued to decline in afternoon trading alongside other risk assets, as Middle East tensions rise.

For AI cryptos, geopolitical concerns shouldn't matter to the same degree as with other assets that are more sensitive to capital flows. That said, capital flows do matter regardless of which niche a given project is pursuing, and selling pressure remains strong today.

Fetch.ai and SingularityNET are two projects uniquely focused on AI that have a shared catalyst that investors are clearly pricing in. Fetch.ai is collaborating with SingularityNET and Ocean Protocol to create what they're calling the "Superintelligence Alliance."

As part of this alliance, some talks around a potential token merger have taken place, with investors now pricing these tokens in high correlation to each other.

That certainly makes sense, given the AI focus of both projects, and their collaborative ties to work together on solving much bigger problems than they likely could on their own. One thing that certainly stands out to me about crypto assets is the relative lack of willingness for projects to merge. If these projects do tie the knot at some point, it will be interesting to see how the market values a token combination.

The demand for blockchain-based AI solutions appears to be strong, and a combination of these two relatively small-cap projects could improve their chances of success in creating meaningful utility for end users.

The Graph's core model as an oracle network, allowing off-blockchain data to be ported on-chain, has seen impressive demand build over time. A number of recent collaborations and partnerships have driven an impressive amount of momentum in this token over the past week. The fact that this momentum has continued is a very positive development for long-term investors, and suggests this AI-related play could have more room to run.

Today's price action certainly implies a dip could be on the horizon, or at least a mellowing out of some rather strong momentum in these tokens in recent days. No rally lasts forever, and a breather can turn out to be a good thing. This year, these three AI-related cryptos have been among the best performers, and I wouldn't be surprised to see that narrative carried through to the end of the year.


For growth investors seeking some crypto exposure (and, in particular, projects with AI-related tailwinds), these are three tokens that I think are worth adding to the watch list to potentially buy on dips. Each project has unique catalysts that could drive value for investors and users over time. That's what this space is supposed to be about, which is what makes assessing these cryptos so compelling.

Before you buy stock in Fetch, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now, and Fetch wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Stock Advisor provides investors with an easy-to-follow blueprint for success, including guidance on building a portfolio, regular updates from analysts, and two new stock picks each month. The Stock Advisor service has more than tripled the return of the S&P 500 since 2002*.

See the 10 stocks

*Stock Advisor returns as of April 15, 2024

Chris MacDonald has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Fetch and The Graph. The Motley Fool has a disclosure policy.

These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today was originally published by The Motley Fool

Read this article:
These 3 Artificial Intelligence (AI) Cryptos Are Rocketing Higher Today - Yahoo Finance

Read More..

Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs – ScienceAlert

Human history was forever changed with the discovery of antibiotics in 1928. Infectious diseases such as pneumonia, tuberculosis and sepsis were widespread and lethal until penicillin made them treatable.

Surgical procedures that once came with a high risk of infection became safer and more routine. Antibiotics marked a triumphant moment in science that transformed medical practice and saved countless lives.

But antibiotics have an inherent caveat: When overused, bacteria can evolve resistance to these drugs. The World Health Organization estimated that these superbugs caused 1.27 million deaths around the world in 2019 and will likely become an increasing threat to global public health in the coming years.

New discoveries are helping scientists face this challenge in innovative ways. Studies have found that nearly a quarter of drugs that aren't normally prescribed as antibiotics, such as medications used to treat cancer, diabetes and depression, can kill bacteria at doses typically prescribed for people.

Understanding the mechanisms underlying how certain drugs are toxic to bacteria may have far-reaching implications for medicine. If nonantibiotic drugs target bacteria in different ways from standard antibiotics, they could serve as leads in developing new antibiotics.

But if nonantibiotics kill bacteria in similar ways to known antibiotics, their prolonged use, such as in the treatment of chronic disease, might inadvertently promote antibiotic resistance.

In our recently published research, my colleagues and I developed a new machine learning method that not only identified how nonantibiotics kill bacteria but can also help find new bacterial targets for antibiotics.

Numerous scientists and physicians around the world are tackling the problem of drug resistance, including me and my colleagues in the Mitchell Lab at UMass Chan Medical School. We use the genetics of bacteria to study which mutations make bacteria more resistant or more sensitive to drugs.

When my team and I learned about the widespread antibacterial activity of nonantibiotics, we were consumed by the challenge it posed: figuring out how these drugs kill bacteria.

To answer this question, I used a genetic screening technique my colleagues recently developed to study how anticancer drugs target bacteria. This method identifies which specific genes and cellular processes change when bacteria mutate. Monitoring how these changes influence the survival of bacteria allows researchers to infer the mechanisms these drugs use to kill bacteria.

I collected and analyzed almost 2 million instances of toxicity between 200 drugs and thousands of mutant bacteria. Using a machine learning algorithm I developed to deduce similarities between different drugs, I grouped the drugs together in a network based on how they affected the mutant bacteria.
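
The paper's actual algorithm isn't reproduced here, but a minimal sketch of the general idea, comparing drugs by their toxicity profiles across mutant strains and then grouping drugs with similar profiles, might look like the following Python. The matrix sizes, random placeholder data, and clustering threshold are illustrative assumptions, not the published method.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative only: rows are drugs, columns are mutant strains, and each value
# is a toxicity score (how strongly that drug affected that mutant). Real data
# would come from the genetic screen; random numbers stand in for it here.
rng = np.random.default_rng(0)
n_drugs, n_mutants = 200, 5000
toxicity = rng.normal(size=(n_drugs, n_mutants))

# Pairwise similarity between drugs: rank correlation of their toxicity profiles.
similarity, _ = spearmanr(toxicity, axis=1)           # (n_drugs x n_drugs) matrix

# Turn similarity into a distance and group drugs whose profiles track each other,
# the kind of structure the drug-similarity maps described in the article capture.
distance = 1.0 - similarity
condensed = distance[np.triu_indices(n_drugs, k=1)]   # condensed form for linkage()
tree = linkage(condensed, method="average")
groups = fcluster(tree, t=0.6, criterion="distance")  # threshold chosen arbitrarily
print("number of drug groups:", len(set(groups)))
```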

My maps clearly showed that known antibiotics were tightly grouped together by their known classes of killing mechanisms. For example, all antibiotics that target the cell wall, the thick protective layer surrounding bacterial cells, were grouped together and well separated from antibiotics that interfere with bacteria's DNA replication.

Intriguingly, when I added nonantibiotic drugs to my analysis, they formed separate hubs from antibiotics. This indicates that nonantibiotic and antibiotic drugs have different ways of killing bacterial cells. While these groupings don't reveal how each drug specifically kills bacteria, they show that those clustered together likely work in similar ways.

The last piece of the puzzle, whether we could find new drug targets in bacteria to kill them, came from the research of my colleague Carmen Li.

She grew hundreds of generations of bacteria that were exposed to different nonantibiotic drugs normally prescribed to treat anxiety, parasite infections and cancer.

Sequencing the genomes of bacteria that evolved and adapted to the presence of these drugs allowed us to pinpoint the specific bacterial protein that triclabendazole, a drug used to treat parasite infections, targets to kill the bacteria. Importantly, current antibiotics don't typically target this protein.

Additionally, we found that two other nonantibiotics that used a similar mechanism as triclabendazole also target the same protein. This demonstrated the power of my drug similarity maps to identify drugs with similar killing mechanisms, even when that mechanism was yet unknown.

Our findings open multiple opportunities for researchers to study how nonantibiotic drugs work differently from standard antibiotics. Our method of mapping and testing drugs also has the potential to address a critical bottleneck in developing antibiotics.

Searching for new antibiotics typically involves sinking considerable resources into screening thousands of chemicals that kill bacteria and figuring out how they work. Most of these chemicals are found to work similarly to existing antibiotics and are discarded.

Our work shows that combining genetic screening with machine learning can help uncover the chemical needle in the haystack that can kill bacteria in ways researchers haven't used before.

There are different ways to kill bacteria we haven't exploited yet, and there are still roads we can take to fight the threat of bacterial infections and antibiotic resistance.

Mariana Noto Guillen, Ph.D. Candidate in Systems Biology, UMass Chan Medical School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View original post here:
Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs - ScienceAlert

Read More..

Machine learning reveals the control mechanics of an insect wing hinge – Nature.com

Read more:
Machine learning reveals the control mechanics of an insect wing hinge - Nature.com

Read More..

AI and robotics demystify the workings of a fly’s wing – Nature.com

Machine learning and robotics have shed new light on one of the most sophisticated skeletal structures in the animal kingdom: the insect wing hinge.


Now a team of researchers has combined cutting-edge imaging, machine learning, and robotics to build a model that is shedding new light on the structure.


Read more here:
AI and robotics demystify the workings of a fly's wing - Nature.com

Read More..

A secure approach to generative AI with AWS | Amazon Web Services – AWS Blog

Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels.

FMs and the applications built around them represent extremely valuable investments for our customers. They're often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to optimize the models' output. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. Because their data and model weights are incredibly valuable, customers require them to stay protected, secure, and private, whether that's from their own administrators' accounts, their customers, vulnerabilities in software running in their own environments, or even their cloud service provider from having access.

At AWS, our top priority is safeguarding the security and confidentiality of our customers' workloads. We think about security across the three layers of our generative AI stack: the infrastructure for training and running LLMs and other FMs, the tools for building with LLMs and other FMs, and the applications that use those models.

Each layer is important to making generative AI pervasive and transformative.

With the AWS Nitro System, we delivered a first-of-its-kind innovation on behalf of our customers. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access your workloads or data running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Customers have benefited from this confidentiality and isolation from AWS operators on all Nitro-based EC2 instances since 2017.

By design, there is no mechanism for any Amazon employee to access a Nitro EC2 instance that customers use to run their workloads, or to access data that customers send to a machine learning (ML) accelerator or GPU. This protection applies to all Nitro-based instances, including instances with ML accelerators like AWS Inferentia and AWS Trainium, and instances with GPUs like P4, P5, G5, and G6.

The Nitro System enables Elastic Fabric Adapter (EFA), which uses the AWS-built AWS Scalable Reliable Datagram (SRD) communication protocol for cloud-scale elastic and large-scale distributed training, enabling the only always-encrypted Remote Direct Memory Access (RDMA) capable network. All communication through EFA is encrypted with VPC encryption without incurring any performance penalty.

The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. AWS delivers a high level of protection for customer workloads, and we believe this is the level of security and confidentiality that customers should expect from their cloud provider. This level of protection is so critical that we've added it in our AWS Service Terms to provide an additional assurance to all of our customers.

From day one, AWS AI infrastructure and services have had built-in security and privacy features to give you control over your data. As customers move quickly to implement generative AI in their organizations, you need to know that your data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. The security of model weights, the parameters that a model learns during training that are critical for its ability to make predictions, is paramount to protecting your data and maintaining model integrity.

This is why it is critical for AWS to continue to innovate on behalf of our customers to raise the bar on security across each layer of the generative AI stack. To do this, we believe that you must have security and confidentiality built in across each layer of the generative AI stack. You need to be able to secure the infrastructure to train LLMs and other FMs, build securely with tools to run LLMs and other FMs, and run applications that use FMs with built-in security and privacy that you can trust.

At AWS, securing AI infrastructure refers to zero access to sensitive AI data, such as AI model weights and data processed with those models, by any unauthorized person, either at the infrastructure operator or at the customer. It comprises three key principles: complete isolation of the AI data from the infrastructure operator, the ability for customers to isolate AI data from themselves, and protected infrastructure communications between devices.

The Nitro System fulfills the first principle of Secure AI Infrastructure by isolating your AI data from AWS operators. The second principle provides you with a way to remove administrative access of your own users and software to your AI data. AWS not only offers you a way to achieve that, but we also made it straightforward and practical by investing in building an integrated solution between AWS Nitro Enclaves and AWS Key Management Service (AWS KMS). With Nitro Enclaves and AWS KMS, you can encrypt your sensitive AI data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inferencing. Throughout this entire process, the sensitive AI data is encrypted and isolated from your own users and software on your EC2 instance, and AWS operators cannot access this data. Use cases that have benefited from this flow include running LLM inferencing in an enclave. Until today, Nitro Enclaves operate only in the CPU, limiting the potential for larger generative AI models and more complex processing.
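
As a rough illustration of the customer-side half of that flow, the sketch below encrypts sensitive AI data under a customer-managed AWS KMS key before it is handed to an isolated compute environment. The key ARN and payload are placeholders, and the enclave-side attestation and decryption steps are omitted; this is not the complete Nitro Enclaves integration, just the encryption step it builds on.

```python
import boto3

# Encrypt sensitive AI data (for example, an inference input) under a key you
# own and control. The key ARN below is a hypothetical placeholder.
kms = boto3.client("kms", region_name="us-east-1")
key_id = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

plaintext = b"sensitive prompt or model input"
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=plaintext)["CiphertextBlob"]

# The ciphertext can now be stored or transferred. Only an environment that the
# key policy authorizes, such as a Nitro Enclave presenting a valid attestation
# document, can successfully call kms.decrypt() on it.
```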

We announced our plans to extend this Nitro end-to-end encrypted flow to include first-class integration with ML accelerators and GPUs, fulfilling the third principle. You will be able to decrypt and load sensitive AI data into an ML accelerator for processing while providing isolation from your own operators and verified authenticity of the application used for processing the AI data. Through the Nitro System, you can cryptographically validate your applications to AWS KMS and decrypt data only when the necessary checks pass. This enhancement allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads.

We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2 as well as GPU instances based on NVIDIA's upcoming Blackwell architecture, which both offer secure communications between devices, the third principle of Secure AI Infrastructure. AWS and NVIDIA are collaborating closely to bring a joint solution to market, including NVIDIA's new Blackwell GPU platform, which couples NVIDIA's GB200 NVL72 solution with the Nitro System and EFA technologies to provide an industry-leading solution for securely building and deploying next-generation generative AI applications.

Today, tens of thousands of customers are using AWS to experiment and move transformative generative AI applications into production. Generative AI workloads contain highly valuable and sensitive data that needs protection from your own operators and the cloud service provider. Customers using AWS Nitro-based EC2 instances have received this level of protection and isolation from AWS operators since 2017, when we launched our innovative Nitro System.

At AWS, we're continuing that innovation as we invest in building performant and accessible capabilities to make it practical for our customers to secure their generative AI workloads across the three layers of the generative AI stack, so that you can focus on what you do best: building and extending the uses of generative AI to more areas. Learn more here.

Anthony Liguori is an AWS VP and Distinguished Engineer for EC2

Colm MacCárthaigh is an AWS VP and Distinguished Engineer for EC2

Here is the original post:
A secure approach to generative AI with AWS | Amazon Web Services - AWS Blog

Read More..

AI, machine learning, and the future of metal fabrication – TheFabricator.com


During the FMA Annual Meeting, a Navy SEAL turned leadership consultant brought up an idea that, for those who never served in the military, seemed a bit surprising: decentralized command. "Not only does decentralized command allow you to grow into a role, but as a leader, it allows you to step back and look at the big picture. There's no way I can have that view if I'm constantly making decisions for my team."

That was veteran Carlos Mendez, a consultant with Texas-based Echelon Front. His insight went against the popular view of the military, shaped by movie scenes of sergeants screaming at subordinates. The reality is that soldiers can find themselves cut off from central command, and if they don't have the training or authority to think and act independently, they can be in a world of trouble.

Lives might not be at stake in the fab shop, but livelihoods certainly are. Most metal fabrication occurs in high-product-mix environments. With equipment and software juggling hundreds of jobs, some unexpected variables are bound to throw a wrench into the workday. Fabricators work to minimize the exceptions, but there will always, always, be exceptions.

On the first day of the late-February conference, held in Clearwater Beach, Fla., Gene Marks, speaker and columnist for Forbes magazine, pointed at an eye-opening chart tracking the cost of computer processing speed over the past few decades. The price of processing speed today is roughly one-one-hundred-millionth of what it was in the 1970s. The fastest computers in 1993 could perform less than 1,000 operations in a millisecond. That's gone up to over a billion operations. That's every millisecond.

The extraordinary power of modern computing has created all sorts of AI tools, but they're not total solutions. They can write an email, design a presentation, and automate certain email tasks. They help immensely in a thousand different ways, but they only get you 80% to 90% there. Humans still need to bring work over the finish line.

This scenario might reflect life on the shop floor one day, though we've got a ways to go. As several attendees from custom fabricators discussed during the conference breakout sessions, the challenge is data. Conventional wisdom has it that manufacturers are swimming in it, but how good is that data? Machines and software capture incredible amounts. But in most fab shops, not every machine is automated, and a lot has to happen before and after each manufacturing step. Instead of paper job travelers, operators now use laptops, tablets, even their phones, but they're still probably keying job information into an ERP system manually. What exactly happens at a specific work center between that initial clock-in and final clock-out often just isn't captured.

This sometimes leads to big surprises when fabricators integrate Industrial Internet of Things (IIoT) platforms. These can reveal just how little real uptime machines have; that is, when machines are actually cutting, bending, and welding, and good parts are actually being produced. Usually, it's a fraction of the time people assumed. IIoT is revealing low-hanging-fruit improvements (material staging, standardizing procedures and work practices between shifts, etc.), but it's also showing that, no matter how dialed-in an operation becomes, exceptions will exist. More so than in past years, discussions at the FMA Annual Meeting really focused on how to best manage them.

Lean principles entered the fray during the conference sessions, like optimizing machine utilization but not at the expense of a plant's overall throughput. Less-talked-about inefficiencies also entered the debate. During one breakout session, Caleb Chamberlain, co-founder of OSH Cut (and fellow columnist for The Fabricator), brought attendees through the customer experience he and his team designed. OSH Cut doesn't make to print. In fact, it has no prints at all, which raised some eyebrows in the audience. Customers upload design files directly to the OSH Cut website, which performs a design-for-manufacturability (DFM) analysis. If there's an issue, the customer can make changes and then upload the design again. From there, nesting, machine programming, and myriad other order-prep tasks all happen automatically.

OSH Cut isn't the only fabricator to do this. A few in the U.S. offer similar services, and Europe has a collection of web shops, 247TailorSteel, a Dutch operation, being the most well known. The model won't work everywhere, but it does bring up larger questions about what activities in the metal fabrication supply chain truly add value, and where those activities (especially DFM) should happen.

Brian Steel, CEO of Cadrex Manufacturing Solutions and a panel participant at the conference, represented the other end of the metal fabrication spectrum. After acquiring a multitude of plants, Cadrex is now one of the largest contract metal fabricators in the country. The company also has adopted software that runs factory-wide simulations, weighs various production options, then suggests what should work best.


All this reveals the increasing importance of software innovations, the best of which are aiming to weigh the effects of thousands of variables in high-product-mix manufacturing and, ultimately, help skilled people make better decisions. Software won't account for everything, which brings the importance of problem-solving and decentralized command to the fore. The last thing FMA Annual Meeting attendees want is to lead an automated shop where software makes all the decisions and people just mindlessly do what they're told.

Operators shouldn't avoid using new technology or change machine programs or tools just because that's what they prefer. But they also shouldn't run a machine program or job that truly doesn't work. They need to be able to identify what truly is an exception, then have enough knowledge and authority (supported by good systems and procedures) to act and get the job done. No matter what the future of software, machine learning, and AI looks like, employee skill and curiosity will remain a fabricator's key competitive advantage.

Read more:
AI, machine learning, and the future of metal fabrication - TheFabricator.com

Read More..

AI and Machine Learning will not save the planet (yet) – TechRadar

Artificial General Intelligence, when it exists, will be able to do many tasks better than humans. For now, the machine learning systems and generative AI solutions available on the market are a stopgap to ease the cognitive load on engineers, until machines which think like people exist.

Generative AI is currently dominating headlines, but its backbone, neural networks, have been in use for decades. These Machine Learning (ML) systems historically acted as cruise control for large systems that would be difficult to constantly maintain by hand. The latest algorithms also proactively respond to errors and threats, alerting teams and recording logs of unusual activity. These systems have developed further and can even predict certain outcomes based on previously observed patterns.

This ability to learn and respond is being adapted to all kinds of technology. One that persists is the use of AI tools in envirotech. Whether it's enabling new technologies with vast data processing capabilities, or improving the efficiency of existing systems by intelligently adjusting inputs to maximize efficiency, AI at this stage of development is so open ended it could theoretically be applied to any task.


Co-Founder of VictoriaMetrics.

GenAI isn't inherently energy intensive. A model or neural network is no more energy inefficient than any other piece of software when it is operating, but the development of these AI tools is what generates the majority of the energy costs. The justification for this energy consumption is that the future benefits of the technology are worth the cost in energy and resources.

Some reports suggest many AI applications are solutions in search of a problem, and many developers are using vast amounts of energy to develop tools that could produce dubious energy savings at best. One of the biggest benefits of machine learning is its ability to read through large amounts of data and summarize insights for humans to act on. Reporting is a laborious and frequently manual process; time saved on reporting can be shifted to actioning machine learning insights and actively addressing business-related emissions.

Businesses are under increasing pressure to start reporting on Scope 3 emissions, which are the hardest to measure, and the biggest contributor of emissions for most modern companies. Capturing and analyzing these disparate data sources would be a smart use of AI, but would still ultimately require regular human guidance. Monitoring solutions already exist on the market to reduce the demand on engineers, so taking this a step further with AI is an unnecessary and potentially damaging innovation.

Replacing the engineer with an AI agent reduces human labor, but removes a complex interface, just to add equally complex programming in front of it. That isn't to say innovation should be discouraged. It's a noble aim, but do not be sold a fairy tale that this will happen without any hiccups. Some engineers will be replaced eventually by this technology, but the industry should approach it carefully.


Consider self-driving cars. They're here, and they're doing better than the average human driver. But in some edge cases they can be dangerous. The difference is that it is very easy to see this danger, compared to the potential risks of AI.

AI agents at the present stage of development are comparable to human employees - they need training and supervision, and will gradually become out of date unless re-trained from time to time. Similarly, as has been observed with ChatGPT, models can degrade over time. The mechanics that drive this degradation are not clear, but these systems are delicately calibrated, and this calibration is not a permanent state. The more flexible the model, the more likely it can misfire and function suboptimally. This can manifest as data or concept drift, an issue where a model invalidates itself over time. This is one of many inherent issues with attaching probabilistic models to deterministic tools.
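
Drift of this kind is commonly watched for by comparing the distribution of incoming data against the data the model was calibrated on. A minimal sketch using a two-sample Kolmogorov-Smirnov test is shown below; the feature arrays and the alerting threshold are invented for illustration and are not tied to any particular product.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # data the model was fit on
recent_feature = rng.normal(loc=0.4, scale=1.2, size=2_000)     # data arriving in production

# A small p-value suggests the live distribution has shifted away from the
# training distribution, i.e. the model may be drifting out of calibration.
stat, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:  # alerting threshold is an arbitrary choice for this sketch
    print(f"possible data drift (KS statistic={stat:.3f}, p={p_value:.2e})")
```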

A concerning area of development is the use of AI for natural language inputs, trying to make complex systems easier for less technical employees or decision makers to use and so save on hiring engineers. Natural language outputs are ideal for translating the expert, subject-specific outputs from monitoring systems in a way that makes the data accessible for those who are less data literate. Despite this strength, even summarizations can be subject to hallucinations, where data is fabricated. This is an issue that persists in LLMs and could create costly errors where AI is used to summarize mission-critical reports.

The risk is we create AI overlays for systems that require deterministic inputs. Trying to make the barrier to entry for complex systems lower is admirable, but these systems require precision. AI agents cannot explain their reasoning, or truly understand a natural language input and work out the real request in the way a human can. Moreover, it adds another layer of energy-consuming software to a tech stack for minimal gain.

The rush to AI-everything is producing a tremendous amount of wasted energy. With 14,000 AI startups currently in existence, how many will actually produce tools that will benefit humanity? While AI can improve the efficiency of a data center by managing resources, ultimately that doesn't manifest into a meaningful energy saving, as in most cases that freed capacity is then channeled into another application, using any saved resource headroom, plus the cost of yet more AI-powered tools.

Can AI help achieve sustainability goals? Probably, but most of the advocates fall down at the "how" part of that question, in some cases suggesting that AI itself will come up with new technologies. Climate change is now an existential threat with so many variables to account for that it stretches the comprehension of the human mind. Rather than tackling this problem directly, technophiles defer responsibility to AI in the hope it will provide a solution at some point in the future. The future is unknown, and climate change is happening now. Banking on AI to save us is simply crossing our fingers and hoping for the best, dressed up as neo-futurism.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

View original post here:
AI and Machine Learning will not save the planet (yet) - TechRadar

Read More..

How AI can improve the deployment crisis in machine learning projects – Dr. Eric Siegel – Atlanta Small Business Network

On today's episode of The Small Business Show, we're exploring the realm of machine learning technology and its practical applications. Dr. Eric Siegel, author, founder of Machine Learning Week, and former Columbia University professor, shares insights from his latest book, The AI Playbook, which offers readers a comprehensive understanding of how machine learning operates and strategies for leveraging it effectively.

1. Dr. Siegel explains that machine learning is a technology that predicts outcomes by learning from data or past experiences. This includes predicting various events or behaviors, like fraudulent activities or equipment failures, which is essential for businesses looking to leverage artificial intelligence (AI) for operational efficiency.

2. Dr. Siegel notes the distinction between predictive and generative AI. Predictive AI focuses on forecasting specific outcomes and is where most current financial returns are seen. Generative AI, which is increasingly popular in the media, creates new content like text, images, and music, showcasing the versatile applications of machine learning.

3. Moreover, Dr. Siegel discusses how businesses can use machine learning to enhance large-scale operations, including targeted marketing, fraud detection, supply chain management, and operational decision-making, thus improving efficiency and reducing costs.

4. The conversation sheds light on the challenges businesses face in deploying machine learning projects, with many failing to reach full implementation. Dr. Siegel's book, The AI Playbook, is mentioned as a guide to navigating these challenges, emphasizing the need for a semi-technical understanding among business stakeholders to ensure successful deployment.

5. The interview conveys the importance of integrating machine learning into business operations to drive efficiency and innovation. Dr. Siegel encourages business leaders to develop an understanding of machine learning to effectively harness its predictive capabilities, illustrating this with success stories from companies like UPS, which significantly improved operational efficiency through predictive modeling.

Excerpt from:
How AI can improve the deployment crisis in machine learning projects Dr. Eric Siegel - Atlanta Small Business Network

Read More..

Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune | Amazon Web … – AWS Blog

In asset management, portfolio managers need to closely monitor companies in their investment universe to identify risks and opportunities, and guide investment decisions. Tracking direct events like earnings reports or credit downgrades is straightforward: you can set up alerts to notify managers of news containing company names. However, detecting second and third-order impacts arising from events at suppliers, customers, partners, or other entities in a company's ecosystem is challenging.

For example, a supply chain disruption at a key vendor would likely negatively impact downstream manufacturers. Or the loss of a top customer for a major client poses a demand risk for the supplier. Very often, such events fail to make headlines featuring the impacted company directly, but are still important to pay attention to. In this post, we demonstrate an automated solution combining knowledge graphs and generative artificial intelligence (AI) to surface such risks by cross-referencing relationship maps with real-time news.

Broadly, this entails two steps: First, building the intricate relationships between companies (customers, suppliers, directors) into a knowledge graph. Second, using this graph database along with generative AI to detect second and third-order impacts from news events. For instance, this solution can highlight that delays at a parts supplier may disrupt production for downstream auto manufacturers in a portfolio though none are directly referenced.

With AWS, you can deploy this solution in a serverless, scalable, and fully event-driven architecture. This post demonstrates a proof of concept built on two key AWS services well suited for graph knowledge representation and natural language processing: Amazon Neptune and Amazon Bedrock. Neptune is a fast, reliable, fully managed graph database service that makes it straightforward to build and run applications that work with highly connected datasets. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Overall, this prototype demonstrates the art of the possible with knowledge graphs and generative AI: deriving signals by connecting disparate dots. The takeaway for investment professionals is the ability to stay on top of developments closer to the signal while avoiding noise.

The first step in this solution is building a knowledge graph, and a valuable yet often overlooked data source for knowledge graphs is company annual reports. Because official corporate publications undergo scrutiny before release, the information they contain is likely to be accurate and reliable. However, annual reports are written in an unstructured format meant for human reading rather than machine consumption. To unlock their potential, you need a way to systematically extract and structure the wealth of facts and relationships they contain.

With generative AI services like Amazon Bedrock, you now have the capability to automate this process. You can take an annual report and trigger a processing pipeline to ingest the report, break it down into smaller chunks, and apply natural language understanding to pull out salient entities and relationships.

For example, a sentence stating that [Company A] expanded its European electric delivery fleet with an order for 1,800 electric vans from [Company B] would allow Amazon Bedrock to identify [Company A] and [Company B] as company entities, along with a supplier relationship between them: [Company B] supplies electric vans to [Company A].

Extracting such structured data from unstructured documents requires providing carefully crafted prompts to large language models (LLMs) so they can analyze text to pull out entities like companies and people, as well as relationships such as customers, suppliers, and more. The prompts contain clear instructions on what to look out for and the structure to return the data in. By repeating this process across the entire annual report, you can extract the relevant entities and relationships to construct a rich knowledge graph.
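
A minimal sketch of one such extraction call is shown below, using the Amazon Bedrock Converse API from Python. The model ID, prompt wording, and expected JSON shape are illustrative assumptions rather than the exact prompts used in the prototype, and a production pipeline would add retries and validation of the returned JSON.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

chunk = ("[Company A] expanded its European electric delivery fleet with an "
         "order for 1,800 electric vans from [Company B].")

prompt = (
    "Extract the companies and the relationships between them from the text below. "
    'Return only JSON of the form {"entities": [...], "relationships": '
    '[{"source": ..., "target": ..., "type": ...}]}.\n\n' + chunk
)

# Model ID is a placeholder; any text-capable model enabled in your account works.
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)

# The model's reply is plain text; here we assume it is valid JSON as instructed.
extracted = json.loads(response["output"]["message"]["content"][0]["text"])
print(extracted["relationships"])
```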

However, before committing the extracted information to the knowledge graph, you need to first disambiguate the entities. For instance, there may already be another [Company A] entity in the knowledge graph, but it could represent a different organization with the same name. Amazon Bedrock can reason and compare the attributes such as business focus area, industry, and revenue-generating industries and relationships to other entities to determine if the two entities are actually distinct. This prevents inaccurately merging unrelated companies into a single entity.

After disambiguation is complete, you can reliably add new entities and relationships into your Neptune knowledge graph, enriching it with the facts extracted from annual reports. Over time, the ingestion of reliable data and integration of more reliable data sources will help build a comprehensive knowledge graph that can support revealing insights through graph queries and analytics.
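
Once a relationship has passed disambiguation, writing it into Neptune can be as simple as an idempotent openCypher MERGE submitted to the cluster's openCypher HTTPS endpoint. The endpoint, node labels, and property names below are assumptions for illustration, and IAM request signing is omitted for brevity.

```python
import json
import requests

# Placeholder cluster endpoint; Neptune serves openCypher at https://<cluster>:8182/openCypher
NEPTUNE_ENDPOINT = "https://my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com:8182/openCypher"

# MERGE only creates the companies and the SUPPLIES edge if they do not already
# exist, so re-processing the same annual report will not duplicate graph data.
query = """
MERGE (a:Company {name: $customer})
MERGE (b:Company {name: $supplier})
MERGE (b)-[:SUPPLIES {product: $product}]->(a)
"""
params = {"customer": "Company A", "supplier": "Company B", "product": "electric vans"}

resp = requests.post(NEPTUNE_ENDPOINT, data={"query": query, "parameters": json.dumps(params)})
resp.raise_for_status()
```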

This automation enabled by generative AI makes it feasible to process thousands of annual reports and unlocks an invaluable asset for knowledge graph curation that would otherwise go untapped due to the prohibitively high manual effort needed.

The following screenshot shows an example of the visual exploration that's possible in a Neptune graph database using the Graph Explorer tool.

The next step of the solution is automatically enriching portfolio managers' news feeds and highlighting articles relevant to their interests and investments. For the news feed, portfolio managers can subscribe to any third-party news provider through AWS Data Exchange or another news API of their choice.

When a news article enters the system, an ingestion pipeline is invoked to process the content. Using techniques similar to the processing of annual reports, Amazon Bedrock is used to extract entities, attributes, and relationships from the news article, which are then used to disambiguate against the knowledge graph to identify the corresponding entity in the knowledge graph.

The knowledge graph contains connections between companies and people, and by linking article entities to existing nodes, you can identify if any subjects are within two hops of the companies that the portfolio manager has invested in or is interested in. Finding such a connection indicates the article may be relevant to the portfolio manager, and because the underlying data is represented in a knowledge graph, it can be visualized to help the portfolio manager understand why and how this context is relevant. In addition to identifying connections to the portfolio, you can also use Amazon Bedrock to perform sentiment analysis on the entities referenced.
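
The two-hop check itself maps naturally onto a single openCypher query. The labels, the in_portfolio property, and the unqualified relationship pattern below are schema assumptions for illustration, not the prototype's actual data model.

```python
# Given a company extracted from a news article, find portfolio holdings that sit
# within two relationship hops of it in the knowledge graph.
two_hop_query = """
MATCH (news:Company {name: $article_company})-[*1..2]-(held:Company)
WHERE held.in_portfolio = true
RETURN DISTINCT held.name AS impacted_holding
"""
# Submitted to the same openCypher endpoint as in the earlier sketch, with
# parameters like {"article_company": "Company B"}, this returns holdings that
# may be indirectly affected, e.g. a customer of a customer of the company in the news.
```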

The final output is an enriched news feed surfacing articles likely to impact the portfolio manager's areas of interest and investments.

The overall architecture of the solution looks like the following diagram.

The workflow consists of the following steps:

You can deploy the prototype solution and start experimenting yourself. The prototype is available from GitHub and includes details on how to deploy and run it.

This post demonstrated a proof of concept solution to help portfolio managers detect second- and third-order risks from news events, without direct references to companies they track. By combining a knowledge graph of intricate company relationships with real-time news analysis using generative AI, downstream impacts can be highlighted, such as production delays from supplier hiccups.

Although it's only a prototype, this solution shows the promise of knowledge graphs and language models to connect dots and derive signals from noise. These technologies can aid investment professionals by revealing risks faster through relationship mappings and reasoning. Overall, this is a promising application of graph databases and AI that warrants exploration to augment investment analysis and decision-making.

If this example of generative AI in financial services is of interest to your business, or you have a similar idea, reach out to your AWS account manager, and we will be delighted to explore further with you.

Xan Huang is a Senior Solutions Architect with AWS and is based in Singapore. He works with major financial institutions to design and build secure, scalable, and highly available solutions in the cloud. Outside of work, Xan spends most of his free time with his family and getting bossed around by his 3-year-old daughter. You can find Xan on LinkedIn.

Go here to read the rest:
Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune | Amazon Web ... - AWS Blog

Read More..