
Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs – ScienceAlert

Human history was forever changed with the discovery of antibiotics in 1928. Infectious diseases such as pneumonia, tuberculosis and sepsis were widespread and lethal until penicillin made them treatable.

Surgical procedures that once came with a high risk of infection became safer and more routine. Antibiotics marked a triumphant moment in science that transformed medical practice and saved countless lives.

But antibiotics have an inherent caveat: When overused, bacteria can evolve resistance to these drugs. The World Health Organization estimated that these superbugs caused 1.27 million deaths around the world in 2019 and will likely become an increasing threat to global public health in the coming years.

New discoveries are helping scientists face this challenge in innovative ways. Studies have found that nearly a quarter of drugs that aren't normally prescribed as antibiotics, such as medications used to treat cancer, diabetes and depression, can kill bacteria at doses typically prescribed for people.

Understanding the mechanisms underlying how certain drugs are toxic to bacteria may have far-reaching implications for medicine. If nonantibiotic drugs target bacteria in different ways from standard antibiotics, they could serve as leads in developing new antibiotics.

But if nonantibiotics kill bacteria in similar ways to known antibiotics, their prolonged use, such as in the treatment of chronic disease, might inadvertently promote antibiotic resistance.

In our recently published research, my colleagues and I developed a new machine learning method that not only identified how nonantibiotics kill bacteria but can also help find new bacterial targets for antibiotics.

Numerous scientists and physicians around the world are tackling the problem of drug resistance, including me and my colleagues in the Mitchell Lab at UMass Chan Medical School. We use the genetics of bacteria to study which mutations make bacteria more resistant or more sensitive to drugs.

When my team and I learned about the widespread antibacterial activity of nonantibiotics, we were consumed by the challenge it posed: figuring out how these drugs kill bacteria.

To answer this question, I used a genetic screening technique my colleagues recently developed to study how anticancer drugs target bacteria. This method identifies which specific genes and cellular processes change when bacteria mutate. Monitoring how these changes influence the survival of bacteria allows researchers to infer the mechanisms these drugs use to kill bacteria.

I collected and analyzed almost 2 million instances of toxicity between 200 drugs and thousands of mutant bacteria. Using a machine learning algorithm I developed to deduce similarities between different drugs, I grouped the drugs together in a network based on how they affected the mutant bacteria.
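A similarity network like this can be sketched in a few lines: represent each drug as a vector of mutant-fitness scores, link drugs whose profiles are highly correlated, and read off the connected components as groups. The drug names, profile vectors, and threshold below are invented for illustration; the study's actual algorithm and data differ.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two toxicity-profile vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_groups(profiles, threshold=0.8):
    """Link drugs whose profiles exceed `threshold`, then return the
    connected components of that network (via union-find)."""
    names = list(profiles)
    parent = {n: n for n in names}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(profiles[a], profiles[b]) >= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

# Hypothetical fitness scores of four mutants under each drug.
profiles = {
    "ciprofloxacin":   [0.9, 0.1, 0.8, 0.2],
    "levofloxacin":    [0.8, 0.2, 0.9, 0.1],
    "triclabendazole": [0.1, 0.9, 0.2, 0.8],
}
print(similarity_groups(profiles))
```

Drugs with similar killing mechanisms produce similar mutant-toxicity vectors, so they cluster together even when the mechanism itself is unknown.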

My maps clearly showed that known antibiotics were tightly grouped together by their known classes of killing mechanisms. For example, all antibiotics that target the cell wall, the thick protective layer surrounding bacterial cells, were grouped together and well separated from antibiotics that interfere with bacteria's DNA replication.

Intriguingly, when I added nonantibiotic drugs to my analysis, they formed separate hubs from antibiotics. This indicates that nonantibiotic and antibiotic drugs have different ways of killing bacterial cells. While these groupings don't reveal how each drug specifically kills bacteria, they show that those clustered together likely work in similar ways.

The last piece of the puzzle, whether we could find new drug targets in bacteria, came from the research of my colleague Carmen Li.

She grew hundreds of generations of bacteria that were exposed to different nonantibiotic drugs normally prescribed to treat anxiety, parasite infections and cancer.

Sequencing the genomes of bacteria that evolved and adapted to the presence of these drugs allowed us to pinpoint the specific bacterial protein that triclabendazole, a drug used to treat parasite infections, targets to kill the bacteria. Importantly, current antibiotics don't typically target this protein.

Additionally, we found that two other nonantibiotics that used a similar mechanism as triclabendazole also target the same protein. This demonstrated the power of my drug similarity maps to identify drugs with similar killing mechanisms, even when that mechanism was yet unknown.

Our findings open multiple opportunities for researchers to study how nonantibiotic drugs work differently from standard antibiotics. Our method of mapping and testing drugs also has the potential to address a critical bottleneck in developing antibiotics.

Searching for new antibiotics typically involves sinking considerable resources into screening thousands of chemicals that kill bacteria and figuring out how they work. Most of these chemicals are found to work similarly to existing antibiotics and are discarded.

Our work shows that combining genetic screening with machine learning can help uncover the chemical needle in the haystack that can kill bacteria in ways researchers haven't used before.

There are different ways to kill bacteria we haven't exploited yet, and there are still roads we can take to fight the threat of bacterial infections and antibiotic resistance.

Mariana Noto Guillen, Ph.D. Candidate in Systems Biology, UMass Chan Medical School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View original post here:
Machine Learning Uncovers New Ways to Kill Bacteria With Non-Antibiotic Drugs - ScienceAlert

Read More..

Machine learning reveals the control mechanics of an insect wing hinge – Nature.com


Read more:
Machine learning reveals the control mechanics of an insect wing hinge - Nature.com

Read More..

A secure approach to generative AI with AWS | Amazon Web Services – AWS Blog

Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels.

FMs and the applications built around them represent extremely valuable investments for our customers. They're often used with highly sensitive business data, like personal data, compliance data, operational data, and financial information, to optimize the model's output. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. Because their data and model weights are incredibly valuable, customers require them to stay protected, secure, and private, whether from their own administrators' accounts, from their customers, from vulnerabilities in software running in their own environments, or even from their cloud service provider having access.

At AWS, our top priority is safeguarding the security and confidentiality of our customers' workloads. We think about security across the three layers of our generative AI stack, from the infrastructure used to train LLMs and other FMs, to the tools for building with them, to the applications that use them.

Each layer is important to making generative AI pervasive and transformative.

With the AWS Nitro System, we delivered a first-of-its-kind innovation on behalf of our customers. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access your workloads or data running on your Amazon Elastic Compute Cloud (Amazon EC2) instances. Customers have benefited from this confidentiality and isolation from AWS operators on all Nitro-based EC2 instances since 2017.

By design, there is no mechanism for any Amazon employee to access a Nitro EC2 instance that customers use to run their workloads, or to access data that customers send to a machine learning (ML) accelerator or GPU. This protection applies to all Nitro-based instances, including instances with ML accelerators like AWS Inferentia and AWS Trainium, and instances with GPUs like P4, P5, G5, and G6.

The Nitro System enables Elastic Fabric Adapter (EFA), which uses the AWS-built AWS Scalable Reliable Datagram (SRD) communication protocol for cloud-scale elastic and large-scale distributed training, enabling the only always-encrypted Remote Direct Memory Access (RDMA)-capable network. All communication through EFA is encrypted with VPC encryption without incurring any performance penalty.

The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. AWS delivers a high level of protection for customer workloads, and we believe this is the level of security and confidentiality that customers should expect from their cloud provider. This level of protection is so critical that we've added it to our AWS Service Terms to provide an additional assurance to all of our customers.

From day one, AWS AI infrastructure and services have had built-in security and privacy features to give you control over your data. As customers move quickly to implement generative AI in their organizations, you need to know that your data is being handled securely across the AI lifecycle, including data preparation, training, and inferencing. The security of model weights, the parameters that a model learns during training that are critical for its ability to make predictions, is paramount to protecting your data and maintaining model integrity.

This is why it is critical for AWS to continue to innovate on behalf of our customers to raise the bar on security across each layer of the generative AI stack. To do this, we believe that you must have security and confidentiality built in across each layer of the generative AI stack. You need to be able to secure the infrastructure to train LLMs and other FMs, build securely with tools to run LLMs and other FMs, and run applications that use FMs with built-in security and privacy that you can trust.

At AWS, securing AI infrastructure refers to zero access to sensitive AI data, such as AI model weights and data processed with those models, by any unauthorized person, whether at the infrastructure operator or at the customer. It comprises three key principles: isolating your AI data from the infrastructure operator, enabling you to remove your own administrative access to that data, and protecting communications between devices.

The Nitro System fulfills the first principle of Secure AI Infrastructure by isolating your AI data from AWS operators. The second principle is about giving you a way to remove your own users' and software's administrative access to your AI data. AWS not only offers you a way to achieve that, but we also made it straightforward and practical by investing in building an integrated solution between AWS Nitro Enclaves and AWS Key Management Service (AWS KMS). With Nitro Enclaves and AWS KMS, you can encrypt your sensitive AI data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inferencing. Throughout this entire process, the sensitive AI data is encrypted and isolated from your own users and software on your EC2 instance, and AWS operators cannot access this data. Use cases that have benefited from this flow include running LLM inferencing in an enclave. Until now, Nitro Enclaves have operated only on the CPU, limiting the potential for larger generative AI models and more complex processing.
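The encrypt-store-transfer flow described here follows the standard envelope-encryption pattern: a fresh data key encrypts the payload, and only a wrapped copy of that key (encrypted under a master key the key service holds) ever leaves the trusted boundary. Below is a toy sketch of that pattern; the SHA-256-based XOR stream cipher is for illustration only and is not secure, and real deployments use AWS KMS with authenticated encryption such as AES-GCM.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode stream cipher. Illustration only;
    # NOT a substitute for real authenticated encryption.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# 1. A "master" key held by the key service; it never leaves that boundary.
master_key = secrets.token_bytes(32)

# 2. Generate a fresh data key and encrypt the payload with it.
data_key = secrets.token_bytes(32)
payload = b"model weights and prompts"
ciphertext = keystream_xor(data_key, payload)

# 3. Wrap (encrypt) the data key under the master key; only the wrapped
#    copy is stored alongside the ciphertext.
wrapped_key = keystream_xor(master_key, data_key)

# 4. Inside the isolated compute environment: unwrap, then decrypt.
recovered_key = keystream_xor(master_key, wrapped_key)
assert keystream_xor(recovered_key, ciphertext) == payload
```

Because the data key is only ever stored in wrapped form, compromising the storage location yields nothing without access to the master key.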

We announced our plans to extend this Nitro end-to-end encrypted flow to include first-class integration with ML accelerators and GPUs, fulfilling the third principle. You will be able to decrypt and load sensitive AI data into an ML accelerator for processing while providing isolation from your own operators and verified authenticity of the application used for processing the AI data. Through the Nitro System, you can cryptographically validate your applications to AWS KMS and decrypt data only when the necessary checks pass. This enhancement allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads.

We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2 as well as in GPU instances based on NVIDIA's upcoming Blackwell architecture, both of which offer secure communications between devices, the third principle of Secure AI Infrastructure. AWS and NVIDIA are collaborating closely to bring a joint solution to market, including NVIDIA's new Blackwell GPU platform, which couples the GB200 NVL72 solution with the Nitro System and EFA technologies to provide an industry-leading solution for securely building and deploying next-generation generative AI applications.

Today, tens of thousands of customers are using AWS to experiment with and move transformative generative AI applications into production. Generative AI workloads contain highly valuable and sensitive data that needs this level of protection from your own operators and from the cloud service provider. Customers using AWS Nitro-based EC2 instances have received this level of protection and isolation from AWS operators since 2017, when we launched our innovative Nitro System.

At AWS, we're continuing that innovation as we invest in building performant and accessible capabilities to make it practical for our customers to secure their generative AI workloads across the three layers of the generative AI stack, so that you can focus on what you do best: building and extending the uses of generative AI to more areas.

Anthony Liguori is an AWS VP and Distinguished Engineer for EC2

Colm MacCárthaigh is an AWS VP and Distinguished Engineer for EC2

Here is the original post:
A secure approach to generative AI with AWS | Amazon Web Services - AWS Blog

Read More..

AI, machine learning, and the future of metal fabrication – TheFabricator.com

On the first day of the 2024 Fabricators and Manufacturers Association Annual Meeting, held in Clearwater Beach, Fla., Gene Marks, speaker and columnist for Forbes magazine, pointed at an eye-opening chart tracking the cost of computer processing speed over the past few decades.

During the FMA Annual Meeting, a Navy SEAL turned leadership consultant brought up an idea that, for those who never served in the military, seemed a bit surprising: decentralized command. Not only does decentralized command allow you to grow into a role, but as a leader, it allows you to step back and look at the big picture. There's no way I can have that view if I'm constantly making decisions for my team.

That was veteran Carlos Mendez, a consultant with Texas-based Echelon Front. His insight went against the popular view of the military, shaped by movie scenes of sergeants screaming at subordinates. The reality is that soldiers can find themselves cut off from central command, and if they don't have the training or authority to think and act independently, they can be in a world of trouble.

Lives might not be at stake in the fab shop, but livelihoods certainly are. Most metal fabrication occurs in high-product-mix environments. With equipment and software juggling hundreds of jobs, some unexpected variables are bound to throw a wrench into the workday. Fabricators work to minimize the exceptions, but there will always, always, be exceptions.

The chart Marks presented on the conference's first day was striking: the price of processing speed today is roughly one one-hundred-millionth of what it was in the 1970s. The fastest computers in 1993 could perform fewer than 1,000 operations in a millisecond; that's now more than a billion operations, every millisecond.

The extraordinary power of modern computing has created all sorts of AI tools, but they're not total solutions. They can write an email, design a presentation, and automate certain email tasks. They help immensely in a thousand different ways, but they only get you 80% to 90% of the way there. Humans still need to bring work over the finish line.

This scenario might reflect life on the shop floor one day, though we've got a ways to go. As several attendees from custom fabricators discussed during the conference breakout sessions, the challenge is data. Conventional wisdom has it that manufacturers are swimming in it, but how good is that data? Machines and software capture incredible amounts. But in most fab shops, not every machine is automated, and a lot has to happen before and after each manufacturing step. Instead of paper job travelers, operators now use laptops, tablets, even their phones, but they're still probably keying job information into an ERP system manually. What exactly happens at a specific work center between that initial clock-in and final clock-out often just isn't captured.

This sometimes leads to big surprises when fabricators integrate Industrial Internet of Things (IIoT) platforms. These can reveal just how little real uptime machines have, that is, the time when machines are actually cutting, bending, and welding, and good parts are actually being produced. Usually, it's a fraction of the time people assumed. IIoT is revealing low-hanging-fruit improvements (material staging, standardizing procedures and work practices between shifts, etc.), but it's also showing that, no matter how dialed-in an operation becomes, exceptions will exist. More so than in past years, discussions at the FMA Annual Meeting focused on how best to manage them.
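The uptime figure an IIoT platform surfaces boils down to summing the intervals when a machine was actually producing and dividing by the shift length. A minimal sketch of that calculation; the event format and numbers here are invented.

```python
from datetime import datetime

def uptime_fraction(events, shift_start, shift_end):
    """Fraction of a shift a machine spent actually producing, from a
    chronological log of ("start"/"stop", timestamp) events."""
    producing = 0.0
    run_start = None
    for state, ts in events:
        if state == "start" and run_start is None:
            run_start = ts
        elif state == "stop" and run_start is not None:
            producing += (ts - run_start).total_seconds()
            run_start = None
    if run_start is not None:  # machine still running at shift end
        producing += (shift_end - run_start).total_seconds()
    return producing / (shift_end - shift_start).total_seconds()

# A hypothetical 8-hour shift with two production runs.
log = [
    ("start", datetime(2024, 3, 1, 8, 10)),
    ("stop",  datetime(2024, 3, 1, 9, 0)),
    ("start", datetime(2024, 3, 1, 10, 30)),
    ("stop",  datetime(2024, 3, 1, 11, 10)),
]
frac = uptime_fraction(log, datetime(2024, 3, 1, 8, 0),
                       datetime(2024, 3, 1, 16, 0))
print(f"{frac:.0%} of the shift spent producing")
```

Even generous-looking logs like this one work out to a small fraction of the shift, which is exactly the surprise many shops get.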

Lean principles entered the fray during the conference sessions, like optimizing machine utilization, but not at the expense of a plant's overall throughput. Less-talked-about inefficiencies also entered the debate. During one breakout session, Caleb Chamberlain, co-founder of OSH Cut (and fellow columnist for The Fabricator), brought attendees through the customer experience he and his team designed. OSH Cut doesn't make to print. In fact, it has no prints at all, which raised some eyebrows in the audience. Customers upload design files directly to the OSH Cut website, which performs a design-for-manufacturability (DFM) analysis. If there's an issue, the customer can make changes and then upload the design again. From there, nesting, machine programming, and myriad other order-prep tasks all happen automatically.
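At its core, an automated DFM analysis checks uploaded geometry against manufacturability rules and reports violations back to the customer. A toy sketch of the idea; the thresholds below are generic sheet-metal rules of thumb, not OSH Cut's actual checks.

```python
def dfm_check(part, material_thickness_mm):
    """Flag common sheet-metal DFM violations in a part description.
    The rules here are illustrative rules of thumb only."""
    issues = []
    for d in part.get("hole_diameters_mm", []):
        # Rule of thumb: hole diameter should be >= stock thickness.
        if d < material_thickness_mm:
            issues.append(f"hole {d} mm is smaller than stock thickness")
    for w in part.get("web_widths_mm", []):
        # Narrow webs between features tend to warp during cutting.
        if w < 2 * material_thickness_mm:
            issues.append(f"web {w} mm is narrower than 2x thickness")
    return issues

part = {"hole_diameters_mm": [1.0, 6.0], "web_widths_mm": [2.5]}
print(dfm_check(part, material_thickness_mm=2.0))
```

A clean run (an empty issue list) is what lets nesting and machine programming proceed without a human in the loop.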

OSH Cut isn't the only fabricator to do this. A few in the U.S. offer similar services, and Europe has a collection of web shops, 247TailorSteel, a Dutch operation, being the best known. The model won't work everywhere, but it does bring up larger questions about what activities in the metal fabrication supply chain truly add value, and where those activities (especially DFM) should happen.

Brian Steel, CEO of Cadrex Manufacturing Solutions and a panel participant at the conference, represented the other end of the metal fabrication spectrum. After acquiring a multitude of plants, Cadrex is now one of the largest contract metal fabricators in the country. The company also has adopted software that runs factory-wide simulations, weighs various production options, then suggests what should work best.


All this reveals the increasing importance of software innovations, the best of which aim to weigh the effects of thousands of variables in high-product-mix manufacturing and, ultimately, help skilled people make better decisions. Software won't account for everything, which brings the importance of problem-solving and decentralized command to the fore. The last thing FMA Annual Meeting attendees want is to lead an automated shop where software makes all the decisions and people just mindlessly do what they're told.

Operators shouldn't avoid new technology, or change machine programs or tools, just because that's what they prefer. But they also shouldn't run a machine program or job that truly doesn't work. They need to be able to identify what truly is an exception, then have enough knowledge and authority (supported by good systems and procedures) to act and get the job done. No matter what the future of software, machine learning, and AI looks like, employee skill and curiosity will remain a fabricator's key competitive advantage.

Read more:
AI, machine learning, and the future of metal fabrication - TheFabricator.com

Read More..

AI and Machine Learning will not save the planet (yet) – TechRadar

Artificial General Intelligence, when it exists, will be able to do many tasks better than humans. For now, the machine learning systems and generative AI solutions available on the market are a stopgap to ease the cognitive load on engineers, until machines which think like people exist.

Generative AI is currently dominating headlines, but its backbone, neural networks, have been in use for decades. These Machine Learning (ML) systems historically acted as cruise control for large systems that would be difficult to constantly maintain by hand. The latest algorithms also proactively respond to errors and threats, alerting teams and recording logs of unusual activity. These systems have developed further and can even predict certain outcomes based on previously observed patterns.

This ability to learn and respond is being adapted to all kinds of technology. One application that persists is the use of AI tools in envirotech. Whether it's enabling new technologies with vast data-processing capabilities or improving existing systems by intelligently adjusting inputs to maximize efficiency, AI at this stage of development is so open-ended that it could theoretically be applied to any task.


Co-Founder of VictoriaMetrics.

GenAI isn't inherently energy intensive. A model or neural network is no more energy-inefficient than any other piece of software when it is operating; it is the development of these AI tools that generates the majority of the energy costs. The justification for this energy consumption is that the future benefits of the technology are worth the cost in energy and resources.

Some reports suggest many AI applications are solutions in search of a problem, and many developers are using vast amounts of energy to develop tools that could produce dubious energy savings at best. One of the biggest benefits of machine learning is its ability to read through large amounts of data and summarize insights for humans to act on. Reporting is a laborious and frequently manual process; time saved on reporting can be shifted to actioning machine learning insights and actively addressing business-related emissions.

Businesses are under increasing pressure to start reporting on Scope 3 emissions, which are the hardest to measure and the biggest contributor to emissions for most modern companies. Capturing and analyzing these disparate data sources would be a smart use of AI, but it would still ultimately require regular human guidance. Monitoring solutions already exist on the market to reduce the demand on engineers, so taking this a step further with AI is an unnecessary and potentially damaging innovation.

Replacing the engineer with an AI agent reduces human labor, but it removes a complex interface only to put equally complex programming in front of it. That isn't to say innovation should be discouraged. It's a noble aim, but do not be sold a fairy tale that this will happen without any hiccups. Some engineers will eventually be replaced by this technology, but the industry should approach it carefully.


Consider self-driving cars. They're here, and they're doing better than the average human driver. But in some edge cases they can be dangerous. The difference is that this danger is very easy to see, compared to the potential risks of AI.

AI agents at the present stage of development are comparable to human employees: they need training and supervision, and will gradually go out of date unless retrained from time to time. Similarly, as has been observed with ChatGPT, models can degrade over time. The mechanics that drive this degradation are not clear, but these systems are delicately calibrated, and that calibration is not a permanent state. The more flexible the model, the more likely it is to misfire and function suboptimally. This can manifest as data or concept drift, where a model invalidates itself over time. This is one of many inherent issues with attaching probabilistic models to deterministic tools.
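Data drift of this kind can at least be detected cheaply before a model misfires in production. The sketch below is a generic illustration (not tied to any particular product): it computes the population stability index (PSI) between a model's training inputs and live inputs, a common heuristic where values above roughly 0.2 are treated as a signal to retrain.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population stability index between two numeric samples.

    Both samples are bucketed on the range of the `expected` (training)
    data; PSI sums (p_actual - p_expected) * ln(p_actual / p_expected).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(sample):
        counts = Counter(
            min(bins - 1, max(0, int((x - lo) / width))) for x in sample
        )
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(b, 0) / len(sample), 1e-4) for b in range(bins)]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((pa - pe) * math.log(pa / pe) for pe, pa in zip(e, a))

training = [i / 100 for i in range(1000)]          # historical inputs
live_same = [i / 100 for i in range(1000)]         # no drift
live_shifted = [5 + i / 100 for i in range(1000)]  # distribution has moved

print(round(psi(training, live_same), 4))   # 0.0: identical distributions
print(psi(training, live_shifted) > 0.2)    # True: retraining warranted
```

The 0.2 cutoff is a rule of thumb, not a universal constant; in practice it would be tuned against how costly stale predictions are for the system being monitored.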

A concerning area of development is the use of natural language inputs, intended to make systems easier for less technical employees and decision makers and to save on hiring engineers. Natural language outputs are ideal for translating the expert, subject-specific outputs of monitoring systems in a way that makes the data accessible to those who are less data literate. Despite this strength, even summarizations can be subject to hallucinations, where data is fabricated. This issue persists in LLMs and could create costly errors where AI is used to summarize mission-critical reports.

The risk is we create AI overlays for systems that require deterministic inputs. Trying to make the barrier to entry for complex systems lower is admirable, but these systems require precision. AI agents cannot explain their reasoning, or truly understand a natural language input and work out the real request in the way a human can. Moreover, it adds another layer of energy consuming software to a tech stack for minimal gain.

The rush to "AI everything" is producing a tremendous amount of wasted energy. With 14,000 AI startups currently in existence, how many will actually produce tools that benefit humanity? While AI can improve the efficiency of a data center by managing resources, that ultimately doesn't manifest as a meaningful energy saving: in most cases the freed capacity is simply channeled into another application, consuming any saved resource headroom, plus the cost of yet more AI-powered tools.

Can AI help achieve sustainability goals? Probably, but most of its advocates fall down at the "how" part of that question, in some cases suggesting that AI itself will come up with new technologies. Climate change is now an existential threat with so many variables to account for that it stretches the comprehension of the human mind. Rather than tackling this problem directly, technophiles defer responsibility to AI in the hope it will provide a solution at some point in the future. The future is unknown, and climate change is happening now. Banking on AI to save us is simply crossing our fingers and hoping for the best, dressed up as neo-futurism.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

View original post here:
AI and Machine Learning will not save the planet (yet) - TechRadar

Read More..

How AI can improve the deployment crisis in machine learning projects Dr. Eric Siegel – Atlanta Small Business Network

On today's episode of The Small Business Show, we're exploring the realm of machine learning technology and its practical applications. Dr. Eric Siegel, author, founder of Machine Learning Week, and former Columbia University professor, shares insights from his latest book, The AI Playbook, which offers readers a comprehensive understanding of how machine learning operates and strategies for leveraging it effectively.

1. Dr. Siegel explains that machine learning is a technology that predicts outcomes by learning from data or past experiences. This includes predicting various events or behaviors, like fraudulent activities or equipment failures, which is essential for businesses looking to leverage artificial intelligence (AI) for operational efficiency.

2. Dr. Siegel notes the distinction between predictive and generative AI. Predictive AI focuses on forecasting specific outcomes and is where most current financial returns are seen. Generative AI, which is increasingly popular in the media, creates new content like text, images, and music, showcasing the versatile applications of machine learning.

3. Moreover, Dr. Siegel discusses how businesses can use machine learning to enhance large-scale operations, including targeted marketing, fraud detection, supply chain management, and operational decision-making, thus improving efficiency and reducing costs.

4. The conversation sheds light on the challenges businesses face in deploying machine learning projects, with many failing to reach full implementation. Dr. Siegel's book, The AI Playbook, is mentioned as a guide to navigating these challenges, emphasizing the need for a semi-technical understanding among business stakeholders to ensure successful deployment.

5. The interview conveys the importance of integrating machine learning into business operations to drive efficiency and innovation. Dr. Siegel encourages business leaders to develop an understanding of machine learning to effectively harness its predictive capabilities, illustrating this with success stories from companies like UPS, which significantly improved operational efficiency through predictive modeling.
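The "predict outcomes by learning from past data" loop that Dr. Siegel describes in point 1 can be boiled down to a few lines. The toy below is purely illustrative (the transaction features and figures are invented): it learns the average profile of past fraudulent and legitimate transactions, then scores a new transaction by which profile it sits closer to.

```python
# Toy predictive model: nearest-centroid scoring of transactions.
# Features per transaction: (amount in dollars, hour of day). Data invented.
fraud = [(900.0, 3), (1200.0, 2), (750.0, 4)]   # past fraud cases
legit = [(40.0, 12), (25.0, 18), (60.0, 14)]    # past normal cases

def centroid(rows):
    """Average each feature column to get the class's typical profile."""
    return tuple(sum(col) / len(rows) for col in zip(*rows))

def predict(tx, fraud_c, legit_c):
    """Label a transaction by its nearer centroid (squared distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "fraud" if dist(tx, fraud_c) < dist(tx, legit_c) else "legit"

fc, lc = centroid(fraud), centroid(legit)
print(predict((1000.0, 1), fc, lc))  # fraud
print(predict((35.0, 13), fc, lc))   # legit
```

Production systems use far richer models, but the shape is the same: historical examples in, a learned profile, and a prediction out for each new case.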

Excerpt from:
How AI can improve the deployment crisis in machine learning projects Dr. Eric Siegel - Atlanta Small Business Network

Read More..

Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune | Amazon Web … – AWS Blog

In asset management, portfolio managers need to closely monitor companies in their investment universe to identify risks and opportunities, and guide investment decisions. Tracking direct events like earnings reports or credit downgrades is straightforward: you can set up alerts to notify managers of news containing company names. However, detecting second- and third-order impacts arising from events at suppliers, customers, partners, or other entities in a company's ecosystem is challenging.

For example, a supply chain disruption at a key vendor would likely negatively impact downstream manufacturers. Or the loss of a top customer for a major client poses a demand risk for the supplier. Very often, such events fail to make headlines featuring the impacted company directly, but are still important to pay attention to. In this post, we demonstrate an automated solution combining knowledge graphs and generative artificial intelligence (AI) to surface such risks by cross-referencing relationship maps with real-time news.

Broadly, this entails two steps: First, building the intricate relationships between companies (customers, suppliers, directors) into a knowledge graph. Second, using this graph database along with generative AI to detect second and third-order impacts from news events. For instance, this solution can highlight that delays at a parts supplier may disrupt production for downstream auto manufacturers in a portfolio though none are directly referenced.

With AWS, you can deploy this solution in a serverless, scalable, and fully event-driven architecture. This post demonstrates a proof of concept built on two key AWS services well suited for graph knowledge representation and natural language processing: Amazon Neptune and Amazon Bedrock. Neptune is a fast, reliable, fully managed graph database service that makes it straightforward to build and run applications that work with highly connected datasets. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Overall, this prototype demonstrates the art of the possible with knowledge graphs and generative AI: deriving signals by connecting disparate dots. The takeaway for investment professionals is the ability to stay on top of developments closer to the signal while avoiding noise.

The first step in this solution is building a knowledge graph, and a valuable yet often overlooked data source for knowledge graphs is company annual reports. Because official corporate publications undergo scrutiny before release, the information they contain is likely to be accurate and reliable. However, annual reports are written in an unstructured format meant for human reading rather than machine consumption. To unlock their potential, you need a way to systematically extract and structure the wealth of facts and relationships they contain.

With generative AI services like Amazon Bedrock, you now have the capability to automate this process. You can take an annual report and trigger a processing pipeline to ingest the report, break it down into smaller chunks, and apply natural language understanding to pull out salient entities and relationships.
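The chunking step in that pipeline can be as simple as a sliding window over words, with overlap so that a fact spanning a chunk boundary is not lost. This is a generic sketch (the window sizes are invented; real pipelines often split on sentences or tokens instead):

```python
def chunk_words(text: str, size: int = 200, overlap: int = 40):
    """Split text into word windows of `size` words, overlapping by `overlap`."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

# A stand-in "report" of 500 numbered words:
report = " ".join(f"word{i}" for i in range(500))
chunks = chunk_words(report)
print(len(chunks))           # 3 windows cover all 500 words
print(chunks[1].split()[0])  # word160: each window starts size - overlap later
```

Each chunk is then small enough to fit comfortably in a model's context window while the overlap preserves cross-boundary relationships.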

For example, a sentence stating that [Company A] expanded its European electric delivery fleet with an order for 1,800 electric vans from [Company B] would allow Amazon Bedrock to identify the following:

Extracting such structured data from unstructured documents requires providing carefully crafted prompts to large language models (LLMs) so they can analyze text to pull out entities like companies and people, as well as relationships such as customers, suppliers, and more. The prompts contain clear instructions on what to look out for and the structure to return the data in. By repeating this process across the entire annual report, you can extract the relevant entities and relationships to construct a rich knowledge graph.
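In practice this boils down to a prompt that pins down the output schema, plus a parser for the model's reply. The sketch below is generic: the prompt wording and the sample reply are invented, and the actual Bedrock invocation is omitted; it shows only the schema-driven extraction pattern the post describes.

```python
import json

# Hypothetical prompt template; a real one would include few-shot examples.
EXTRACTION_PROMPT = """Extract entities and relationships from the text below.
Return only JSON of the form:
  {{"entities": [{{"name": ..., "type": ...}}],
    "relationships": [{{"source": ..., "relation": ..., "target": ...}}]}}

Text: {chunk}"""

def build_prompt(chunk: str) -> str:
    return EXTRACTION_PROMPT.format(chunk=chunk)

def parse_reply(reply: str):
    """Turn the model's JSON reply into (source, relation, target) triples."""
    data = json.loads(reply)
    return [(r["source"], r["relation"], r["target"])
            for r in data["relationships"]]

# Stand-in for an LLM response to the [Company A] / [Company B] sentence:
sample_reply = json.dumps({
    "entities": [{"name": "Company A", "type": "company"},
                 {"name": "Company B", "type": "company"}],
    "relationships": [{"source": "Company A", "relation": "customer_of",
                       "target": "Company B"}],
})
print(parse_reply(sample_reply))  # [('Company A', 'customer_of', 'Company B')]
```

The triples that come out of the parser map directly onto graph nodes and edges, which is what makes the downstream Neptune ingestion mechanical.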

However, before committing the extracted information to the knowledge graph, you need to first disambiguate the entities. For instance, there may already be another [Company A] entity in the knowledge graph, but it could represent a different organization with the same name. Amazon Bedrock can reason over and compare attributes, such as business focus area, industry, and revenue-generating segments, as well as relationships to other entities, to determine whether the two entities are actually distinct. This prevents inaccurately merging unrelated companies into a single entity.

After disambiguation is complete, you can reliably add new entities and relationships into your Neptune knowledge graph, enriching it with the facts extracted from annual reports. Over time, the ingestion of reliable data and integration of more reliable data sources will help build a comprehensive knowledge graph that can support revealing insights through graph queries and analytics.

This automation enabled by generative AI makes it feasible to process thousands of annual reports and unlocks an invaluable asset for knowledge graph curation that would otherwise go untapped due to the prohibitively high manual effort needed.

The following screenshot shows an example of the visual exploration that's possible in a Neptune graph database using the Graph Explorer tool.

The next step of the solution is automatically enriching portfolio managers' news feeds and highlighting articles relevant to their interests and investments. For the news feed, portfolio managers can subscribe to any third-party news provider through AWS Data Exchange or another news API of their choice.

When a news article enters the system, an ingestion pipeline is invoked to process the content. Using techniques similar to the processing of annual reports, Amazon Bedrock is used to extract entities, attributes, and relationships from the news article, which are then used to disambiguate against the knowledge graph to identify the corresponding entity in the knowledge graph.

The knowledge graph contains connections between companies and people, and by linking article entities to existing nodes, you can identify if any subjects are within two hops of the companies that the portfolio manager has invested in or is interested in. Finding such a connection indicates the article may be relevant to the portfolio manager, and because the underlying data is represented in a knowledge graph, it can be visualized to help the portfolio manager understand why and how this context is relevant. In addition to identifying connections to the portfolio, you can also use Amazon Bedrock to perform sentiment analysis on the entities referenced.
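The "within two hops" test is a plain breadth-first walk over the graph. A Neptune deployment would express this as a Gremlin or openCypher query; the pure-Python sketch below, over a toy graph with invented company names, shows the underlying logic:

```python
from collections import deque

# Toy knowledge graph: undirected company relationships (names invented).
graph = {
    "AutoCorp": {"PartsCo", "BigBank"},
    "PartsCo": {"AutoCorp", "ChipMaker"},
    "ChipMaker": {"PartsCo"},
    "BigBank": {"AutoCorp"},
    "SnackCo": set(),
}

def within_hops(start, targets, max_hops=2):
    """True if any node in `targets` is reachable from `start` in <= max_hops edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node in targets:
            return True
        if depth < max_hops:
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

portfolio = {"AutoCorp"}
print(within_hops("ChipMaker", portfolio))  # True: ChipMaker -> PartsCo -> AutoCorp
print(within_hops("SnackCo", portfolio))    # False: no connection
```

Here a news article about ChipMaker would be flagged as relevant to an AutoCorp investor even though AutoCorp is never mentioned, which is exactly the second-order signal the solution is after.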

The final output is an enriched news feed surfacing articles likely to impact the portfolio manager's areas of interest and investments.

The overall architecture of the solution looks like the following diagram.

The workflow consists of the following steps:

You can deploy the prototype solution and start experimenting yourself. The prototype is available from GitHub and includes details on the following:

This post demonstrated a proof of concept solution to help portfolio managers detect second- and third-order risks from news events, without direct references to companies they track. By combining a knowledge graph of intricate company relationships with real-time news analysis using generative AI, downstream impacts can be highlighted, such as production delays from supplier hiccups.

Although it's only a prototype, this solution shows the promise of knowledge graphs and language models to connect dots and derive signals from noise. These technologies can aid investment professionals by revealing risks faster through relationship mappings and reasoning. Overall, this is a promising application of graph databases and AI that warrants exploration to augment investment analysis and decision-making.

If this example of generative AI in financial services is of interest to your business, or you have a similar idea, reach out to your AWS account manager, and we will be delighted to explore further with you.

Xan Huang is a Senior Solutions Architect with AWS and is based in Singapore. He works with major financial institutions to design and build secure, scalable, and highly available solutions in the cloud. Outside of work, Xan spends most of his free time with his family and getting bossed around by his 3-year-old daughter. You can find Xan on LinkedIn.

Go here to read the rest:
Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune | Amazon Web ... - AWS Blog

Read More..

Synthetic Lagrangian turbulence by generative diffusion models – Nature.com

Shraiman, B. I. & Siggia, E. D. Scalar turbulence. Nature 405, 639–646 (2000).

La Porta, A., Voth, G. A., Crawford, A. M., Alexander, J. & Bodenschatz, E. Fluid particle accelerations in fully developed turbulence. Nature 409, 1017–1019 (2001).

Mordant, N., Metz, P., Michel, O. & Pinton, J.-F. Measurement of Lagrangian velocity in fully developed turbulence. Phys. Rev. Lett. 87, 214501 (2001).

Falkovich, G., Gawędzki, K. & Vergassola, M. Particles and fields in fluid turbulence. Rev. Mod. Phys. 73, 913–975 (2001).

Yeung, P. Lagrangian investigations of turbulence. Annu. Rev. Fluid Mech. 34, 115–142 (2002).

Pomeau, Y. The long and winding road. Nat. Phys. 12, 198–199 (2016).

Falkovich, G. & Sreenivasan, K. R. Lessons from hydrodynamic turbulence. Phys. Today 59, 43 (2006).

Toschi, F. & Bodenschatz, E. Lagrangian properties of particles in turbulence. Annu. Rev. Fluid Mech. 41, 375–404 (2009).

Shaw, R. A. Particle-turbulence interactions in atmospheric clouds. Annu. Rev. Fluid Mech. 35, 183–227 (2003).

McKee, C. F. & Stone, J. M. Turbulence in the heavens. Nat. Astron. 5, 342–343 (2021).

Bentkamp, L., Lalescu, C. C. & Wilczek, M. Persistent accelerations disentangle Lagrangian turbulence. Nat. Commun. 10, 3550 (2019).

Sawford, B. L. & Pinton, J.-F. in Ten Chapters in Turbulence (eds Davidson, P. A., Kaneda, Y. & Sreenivasan, K. R.) 132–175 (Cambridge Univ. Press, 2013).

Xia, H., Francois, N., Punzmann, H. & Shats, M. Lagrangian scale of particle dispersion in turbulence. Nat. Commun. 4, 2013 (2013).

Barenghi, C. F., Skrbek, L. & Sreenivasan, K. R. Introduction to quantum turbulence. Proc. Natl Acad. Sci. USA 111, 4647–4652 (2014).

Xu, H. et al. Flight–crash events in turbulence. Proc. Natl Acad. Sci. USA 111, 7558–7563 (2014).

Laussy, F. P. Shining light on turbulence. Nat. Photonics 17, 381–382 (2023).

Frisch, U. Turbulence: The Legacy of A. N. Kolmogorov (Cambridge Univ. Press, 1995).

Sawford, B. L. Reynolds number effects in Lagrangian stochastic models of turbulent dispersion. Phys. Fluids A 3, 1577–1586 (1991).

Pope, S. B. Simple models of turbulent flows. Phys. Fluids 23, 011301 (2011).

Viggiano, B. et al. Modelling Lagrangian velocity and acceleration in turbulent flows as infinitely differentiable stochastic processes. J. Fluid Mech. 900, A27 (2020).

Lamorgese, A., Pope, S. B., Yeung, P. & Sawford, B. L. A conditionally cubic-Gaussian stochastic Lagrangian model for acceleration in isotropic turbulence. J. Fluid Mech. 582, 423–448 (2007).

Minier, J.-P., Chibbaro, S. & Pope, S. B. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows. Phys. Fluids 26, 113303 (2014).

Wilson, J. D. & Sawford, B. L. Review of Lagrangian stochastic models for trajectories in the turbulent atmosphere. Bound.-Layer Meteorol. 78, 191–210 (1996).

Bourlioux, A., Majda, A. & Volkov, O. Conditional statistics for a passive scalar with a mean gradient and intermittency. Phys. Fluids https://doi.org/10.1063/1.2353880 (2006).

Majda, A. J. & Gershgorin, B. Elementary models for turbulent diffusion with complex physical features: eddy diffusivity, spectrum and intermittency. Philos. Trans. R. Soc. A 371, 20120184 (2013).

Biferale, L., Boffetta, G., Celani, A., Crisanti, A. & Vulpiani, A. Mimicking a turbulent signal: sequential multiaffine processes. Phys. Rev. E 57, R6261 (1998).

Arnéodo, A., Bacry, E. & Muzy, J.-F. Random cascades on wavelet dyadic trees. J. Math. Phys. 39, 4142–4164 (1998).

Bacry, E. & Muzy, J. F. Log-infinitely divisible multifractal processes. Commun. Math. Phys. 236, 449–475 (2003).

Chevillard, L. et al. On a skewed and multifractal unidimensional random field, as a probabilistic representation of Kolmogorov's views on turbulence. Ann. Henri Poincaré 20, 3693–3741 (2019).

Sinhuber, M., Friedrich, J., Grauer, R. & Wilczek, M. Multi-level stochastic refinement for complex time series and fields: a data-driven approach. New J. Phys. 23, 063063 (2021).

Lübke, J., Friedrich, J. & Grauer, R. Stochastic interpolation of sparsely sampled time series by a superstatistical random process and its synthesis in Fourier and wavelet space. J. Phys. Complex. 4, 015005 (2022).

Zamansky, R. Acceleration scaling and stochastic dynamics of a fluid particle in turbulence. Phys. Rev. Fluids 7, 084608 (2022).

Arnéodo, A. et al. Universal intermittent properties of particle trajectories in highly turbulent flows. Phys. Rev. Lett. 100, 254504 (2008).

Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations: Conference Track Proceedings (ICLR, 2014); https://doi.org/10.48550/arXiv.1312.6114

Goodfellow, I. et al. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27, 2672–2680 (2014).

Ho, J., Jain, A. & Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 33, 6840–6851 (2020).

Dhariwal, P. & Nichol, A. Diffusion models beat GANs on image synthesis. Adv. Neural Inf. Process. Syst. 34, 8780–8794 (2021).

van den Oord, A. et al. WaveNet: a generative model for raw audio. Preprint at https://doi.org/10.48550/arXiv.1609.03499 (2016).

Brown, T. et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33, 1877–1901 (2020).

Chen, R. J., Lu, M. Y., Chen, T. Y., Williamson, D. F. & Mahmood, F. Synthetic data in machine learning for medicine and healthcare. Nat. Biomed. Eng. 5, 493–497 (2021).

Duraisamy, K., Iaccarino, G. & Xiao, H. Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 51, 357–377 (2019).

Brunton, S. L., Noack, B. R. & Koumoutsakos, P. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477–508 (2020).

Vlachas, P. R., Byeon, W., Wan, Z. Y., Sapsis, T. P. & Koumoutsakos, P. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proc. R. Soc. A 474, 20170844 (2018).

Pathak, J., Hunt, B., Girvan, M., Lu, Z. & Ott, E. Model-free prediction of large spatiotemporally chaotic systems from data: a reservoir computing approach. Phys. Rev. Lett. 120, 024102 (2018).

Mohan, A. T., Tretiak, D., Chertkov, M. & Livescu, D. Spatio-temporal deep learning models of 3D turbulence with physics informed diagnostics. J. Turbul. 21, 484–524 (2020).

Kim, J. & Lee, C. Deep unsupervised learning of turbulence for inflow generation at various Reynolds numbers. J. Comput. Phys. 406, 109216 (2020).

Guastoni, L. et al. Convolutional-network models to predict wall-bounded turbulence from wall quantities. J. Fluid Mech. 928, A27 (2021).

Buzzicotti, M., Bonaccorso, F., Di Leoni, P. C. & Biferale, L. Reconstruction of turbulent data with deep generative models for semantic inpainting from TURB-Rot database. Phys. Rev. Fluids 6, 050503 (2021).

Yousif, M. Z., Yu, L., Hoyas, S., Vinuesa, R. & Lim, H. A deep-learning approach for reconstructing 3D turbulent flows from 2D observation data. Sci. Rep. 13, 2529 (2023).

Shu, D., Li, Z. & Farimani, A. B. A physics-informed diffusion model for high-fidelity flow field reconstruction. J. Comput. Phys. 478, 111972 (2023).

Buzzicotti, M. Data reconstruction for complex flows using AI: recent progress, obstacles, and perspectives. Europhys. Lett. 142, 23001 (2023).

Granero-Belinchón, C. Neural network based generation of a 1-dimensional stochastic field with turbulent velocity statistics. Phys. D 458, 133997 (2024).

Nichol, A. Q. & Dhariwal, P. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning (eds Meila, M. et al.) 8162–8171 (PMLR, 2021).

The rest is here:
Synthetic Lagrangian turbulence by generative diffusion models - Nature.com

Read More..

Google Colab: the power of the cloud for machine learning – DataScientest

Hosting in the cloud

A key feature of Google Colab is that it is hosted in the cloud. This means that there is no need to install Python or other libraries on your computer. Everything happens directly in a web browser. All you have to do is sign in to your Google Account and you're ready to go.

Pre-installation of numerous libraries

Google Colab comes with many Python libraries pre-installed. This includes libraries commonly used for data science such as NumPy, Pandas, Scikit-learn, TensorFlow and PyTorch, as well as visualisation libraries such as Matplotlib, Seaborn and Plotly, making it easy to create graphs, charts and visualisations to explore and present data. You don't need to worry about installing these libraries, which greatly simplifies the configuration of your environment.

Google Colab allows you to run system commands directly from a notebook. So if you need specific libraries that aren't pre-installed, you can install them directly from a notebook using the pip command. This allows you to extend the functionality of your environment.
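In a Colab cell, prefixing a command with "!" runs it in the notebook's shell, which is how pip installs are usually done. The sketch below shows that idiom in a comment, plus an equivalent plain-Python helper (the `ensure` function and its name are our own illustration, not a Colab API) that only invokes pip when the module is actually missing:

```python
# In a Colab cell, "!" runs a shell command:
#   !pip install -q some-package
# The same effect from plain Python (usable outside notebooks too):
import importlib
import subprocess
import sys

def ensure(module: str) -> None:
    """Install `module` via pip only if it cannot already be imported."""
    try:
        importlib.import_module(module)
    except ImportError:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--quiet", module]
        )

ensure("json")  # stdlib module, already importable: pip is never invoked
```

Note that packages installed this way live only for the current Colab session; a fresh runtime starts from the pre-installed set again.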

Access to computing resources

Google Colab offers free access to graphics processing units (GPUs) and tensor processing units (TPUs), which are extremely useful for computationally intensive tasks such as deep learning models. You can activate these hardware accelerations with just a few clicks. This speeds up the model training process, reducing the time needed to obtain results.
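After switching the runtime type, it is worth confirming that the accelerator is actually visible. A minimal check via TensorFlow (one of Colab's pre-installed libraries) might look like this; the helper degrades gracefully on machines where TensorFlow is absent:

```python
def accelerator_status() -> str:
    """Report which accelerator TensorFlow can see, if TensorFlow is present."""
    try:
        import tensorflow as tf
    except ImportError:
        return "TensorFlow not installed in this environment"
    gpus = tf.config.list_physical_devices("GPU")
    return f"{len(gpus)} GPU(s) visible" if gpus else "CPU only"

print(accelerator_status())
```

On a Colab GPU runtime this reports at least one visible GPU; on the default runtime it reports CPU only.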

Read the original:
Google Colab: the power of the cloud for machine learning - DataScientest

Read More..

Only 6 altcoins in the top 50 have outperformed Bitcoin this year – Cointelegraph

Only six altcoins among the top 50 tokens by market capitalization have managed to outperform Bitcoin (BTC) so far this year, as Bitcoin dominance reached a three-year high over the weekend.

The memecoin Dogecoin (DOGE) stands as the best-performing altcoin in the top 50, having posted year-to-date gains of just over 77%, climbing from $0.09 on Jan. 1 to $0.15 at the time of publication, per TradingView data.

Included in the remaining outperformers are fellow memecoin Shiba Inu (SHIB), Bitcoin smart contract network Stacks (STX), Binance's BNB (BNB), Ethereum layer-2 network Mantle (MNT) and GPU-sharing blockchain network Render (RNDR).

Bitcoin has grown from a price of $44,100 on Jan. 1 to $65,000 at the time of publication, a year-to-date gain of 54%.

Many have pegged the price rise to consistent institutional inflows into the 10 United States-traded spot Bitcoin exchange-traded funds (ETFs) approved in January this year, generating more than $12 billion in cumulative net inflows, per Farside Investors data.

Notably, Bitcoin dominance pushed to a new three-year high of 56.5% on April 13, as the cryptocurrency bounced back sharply from a marketwide sell-off sparked by escalating geopolitical tensions in the Middle East.

The Bitcoin dominance metric refers to the ratio of Bitcoin's market cap to the combined market cap of the cryptocurrency market as a whole.
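The dominance calculation itself is simple arithmetic; the sketch below uses invented round figures (not live market data), with dominance taken as Bitcoin's share of the total crypto market cap:

```python
def btc_dominance(btc_mcap: float, total_crypto_mcap: float) -> float:
    """Bitcoin dominance as a percentage of total crypto market cap."""
    return 100 * btc_mcap / total_crypto_mcap

# Illustrative figures only (trillions of USD):
print(round(btc_dominance(1.28, 2.27), 1))  # 56.4
```

A rising dominance figure can mean Bitcoin is gaining value, altcoins are losing it, or both at once, which is why the metric is read alongside absolute prices.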

While Bitcoin recovered ground in the following days, the majority of smaller altcoins failed to find their footing and tumbled significantly in price.

Alternative layer-1 network Aptos (APT) and decentralized crypto exchange Uniswap (UNI) led the decline among the top 50 tokens by market cap, posting losses of 35% and 31%, respectively, over the last seven days.

Related: Bitcoins normal drop leads to $256M longs liquidated Analysts

In an April 14 investment note viewed by Cointelegraph, IG Market analyst Tony Sycamore said Bitcoin appears to be on track for its fourth weekly decline, with expectations of no further U.S. Federal Reserve rate cuts weighing on crypto investing sentiment.

Despite the current negative-leaning sentiment toward risk assets, Sycamore predicted that Bitcoin would gradually climb to around $80,000 in the coming months, depending on whether it can hold above its key support mark.

"Providing Bitcoin remains above the [$60,000–$58,000] support zone, we expect the uptrend to resume towards $80,000," Sycamore wrote.

Magazine: 5 dangers to beware when apeing into Solana memecoins

Read more here:
Only 6 altcoins in the top 50 have outperformed Bitcoin this year - Cointelegraph

Read More..