SandboxAQ Joins the FIDO Alliance to Further Drive the Use of Secure Protocols instead of Passwords – IT Security Guru

SandboxAQ has announced its membership in the FIDO Alliance, an open industry consortium focused on minimizing the world's dependence on passwords, a prevalent source of security and usability problems. By joining forces with prominent FIDO Alliance members such as Amazon, Apple, Google, Intel, Microsoft, RSA, VISA, Yubico, and others, SandboxAQ aims to enhance and advance the FIDO2 protocol.

At SandboxAQ, we recognize the intrinsic connection between Modern Cryptography Management and contemporary user authentication protocols like FIDO. The FIDO Alliance champions the creation and adoption of open standards for user authentication that rely on cryptographic keys and protocols, rather than traditional passwords.

"We are excited to join the FIDO Alliance in its mission to provide secure user authentication. We look forward to applying our research and expertise to evolve the specifications and ensure continued security in the quantum era," said Carlos Aguilar Melchor, Chief Scientist of cybersecurity at SandboxAQ.

"The FIDO Alliance welcomes SandboxAQ to its growing roster of identity cybersecurity experts dedicated to protecting the world's most sensitive data through the use of stronger authentication solutions that eliminate reliance on passwords," said Andrew Shikiar, CEO of the FIDO Alliance. "Contributions of SandboxAQ's expertise in encryption management and post-quantum security stand to help the FIDO Alliance ensure web authentication remains secure against current and future cyber threats, including those posed by quantum computers."

SandboxAQ is committed to modernizing cryptography management through its AQtive Guard platform. This platform enables enterprises to identify and manage vulnerable cryptographic algorithms and keys with unprecedented scale and precision across their entire IT infrastructure, leveraging AI techniques and modern monitoring and deployment frameworks. Cryptography management often involves hardware components that are harder to upgrade and less agile than software solutions.

The FIDO2 protocol exemplifies this hardware dependency, offering secure, passwordless authentication for web applications using hardware tokens with cryptographic credentials. While these tokens provide high security, they cannot be easily updated with newer hardware. FIDO solutions are extensively used by large organizations like Google and Cloudflare for employee authentication and are available to consumers for secure access to services on platforms such as Apple, Google, and Microsoft. The FIDO2 protocol is also used for secure access to health services, government services, and financial applications.

A recent FIDO Alliance white paper on addressing the quantum threat outlines two primary objectives: selecting the most appropriate post-quantum cryptographic algorithms and ensuring a seamless transition to post-quantum security. The R&D team at SandboxAQ's Cybersecurity Group has made significant contributions in this area, analyzing the (post-quantum) security and privacy of the FIDO2 protocol and its attestation modes and open-sourcing the first end-to-end post-quantum secure FIDO2 implementation. SandboxAQ is enthusiastic about collaborating with the FIDO Alliance to ensure web authentication remains secure in the face of quantum computing threats.

See the original post here:
SandboxAQ Joins the FIDO Alliance to Further Drive the Use of Secure Protocols instead of Passwords - IT Security Guru

Read More..

Promising directions of machine learning for partial differential equations – Nature.com

Read the rest here:
Promising directions of machine learning for partial differential equations - Nature.com

Read More..

The future of productivity agents with NinjaTech AI and AWS Trainium | Amazon Web Services – AWS Blog

This is a guest post by Arash Sadrieh, Tahir Azim, and Tengfei Xue from NinjaTech AI.

NinjaTech AI's mission is to make everyone more productive by taking care of time-consuming complex tasks with fast and affordable artificial intelligence (AI) agents. We recently launched MyNinja.ai, one of the world's first multi-agent personal AI assistants, to drive towards our mission. MyNinja.ai is built from the ground up using specialized agents that are capable of completing tasks on your behalf, including scheduling meetings, conducting deep research from the web, generating code, and helping with writing. These agents can break down complicated, multi-step tasks into branched solutions, and are capable of evaluating the generated solutions dynamically while continually learning from past experiences. All of these tasks are accomplished in a fully autonomous and asynchronous manner, freeing you up to continue your day while Ninja works on these tasks in the background and engages you when your input is required.

Because no single large language model (LLM) is perfect for every task, we knew that building a personal AI assistant would require multiple LLMs optimized specifically for a variety of tasks. In order to deliver the accuracy and capabilities to delight our users, we also knew that we would require these multiple models to work together in tandem. Finally, we needed scalable and cost-effective methods for training these various models, an undertaking that has historically been costly to pursue for most startups. In this post, we describe how we built our cutting-edge productivity agent NinjaLLM, the backbone of MyNinja.ai, using AWS Trainium chips.

We recognized early that to deliver on the mission of tackling tasks on a user's behalf, we needed multiple models that were optimized for specific tasks. Examples include our Deep Researcher, Deep Coder, and Advisor models. After testing available open source models, we felt that the out-of-the-box capabilities and responses were insufficient with prompt engineering alone to meet our needs. Specifically, in our testing with open source models, we wanted to make sure each model was optimized for a ReAct/chain-of-thought style of prompting. Additionally, we wanted to make sure the model would, when deployed as part of a Retrieval Augmented Generation (RAG) system, accurately cite each source, as well as show a bias towards saying "I don't know" rather than generating false answers. For that purpose, we chose to fine-tune the models for the various downstream tasks.

In constructing our training dataset, our goal was twofold: adapt each model for its suited downstream task and persona (Researcher, Advisor, Coder, and so on), and adapt the models to follow a specific output structure. To that end, we followed the LIMA approach for fine-tuning. We used a training sample size of roughly 20 million tokens, focusing on the format and tone of the output while using a diverse but relatively small sample size. To construct our supervised fine-tuning dataset, we began by creating initial seed tasks for each model. With these seed tasks, we generated an initial synthetic dataset using Meta's Llama 2 model. We were able to use the synthetic dataset to perform an initial round of fine-tuning. To initially evaluate the performance of this fine-tuned model, we crowd-sourced user feedback to iteratively create more samples. We also used a series of benchmarks, internal and public, to assess model performance and continued to iterate.
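To make the dataset construction step concrete, here is a rough sketch of how persona-tagged seed tasks might be serialized as a JSONL supervised fine-tuning file. The record fields, personas, and file name are illustrative assumptions only; the actual NinjaTech schema is not public.

```python
import json

# Hypothetical seed-task records; the fields and personas are placeholders,
# not NinjaTech's production schema.
seed_tasks = [
    {
        "persona": "Researcher",
        "instruction": "Summarize the three most recent findings on topic X and cite each source.",
        "response": "1. Finding A [source: https://example.com/a] ...",
    },
    {
        "persona": "Coder",
        "instruction": "Write a Python function that deduplicates a list while preserving order.",
        "response": "def dedupe(items):\n    seen = set()\n    ...",
    },
]

# One JSON object per line (JSONL), a common on-disk format for SFT datasets.
with open("sft_seed.jsonl", "w") as f:
    for record in seed_tasks:
        f.write(json.dumps(record) + "\n")
```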

We elected to start with the Llama models for a pre-trained base model for several reasons: most notably the great out-of-the-box performance, strong ecosystem support from various libraries, and the truly open source and permissive license. At the time, we began with Llama 2, testing across the various sizes (7B, 13B, and 70B). For training, we chose to use a cluster of trn1.32xlarge instances to take advantage of Trainium chips. We used a cluster of 32 instances in order to efficiently parallelize the training. We also used AWS ParallelCluster to manage cluster orchestration. By using a cluster of Trainium instances, each fine-tuning iteration took less than 3 hours, at a cost of less than $1,000. This quick iteration time and low cost allowed us to quickly tune and test our models and improve our model accuracy. To achieve the accuracies discussed in the following sections, we only had to spend around $30k, saving hundreds of thousands, if not millions, of dollars had we trained on traditional training accelerators.

The following diagram illustrates our training architecture.

After we had established our fine-tuning pipelines built on top of Trainium, we were able to fine-tune and refine our models thanks to the Neuron Distributed training libraries. This was exceptionally useful and timely, because leading up to the launch of MyNinja.ai, Meta's Llama 3 models were released. Llama 3 and Llama 2 share a similar architecture, so we were able to rapidly upgrade to the newer model. This velocity in switching allowed us to take advantage of the inherent gains in model accuracy, and very quickly run through another round of fine-tuning with the Llama 3 weights and prepare for launch.

For evaluating the model, there were two objectives: evaluate the model's ability to answer user questions, and evaluate the system's ability to answer questions with provided sources, because this is our personal AI assistant's primary interface. We selected the HotPotQA and Natural Questions (NQ) Open datasets, both of which are a good fit because of their open benchmarking datasets with public leaderboards.

We calculated accuracy by matching the model's answer to the expected answer, using the top 10 passages retrieved from a Wikipedia corpus. We performed content filtering and ranking using ColBERTv2, a BERT-based retrieval model. We achieved accuracies of 62.22% on the NQ Open dataset and 58.84% on HotPotQA by using our enhanced Llama 3 RAG model, demonstrating notable improvements over other baseline models. The following figure summarizes our results.
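As a rough illustration of the answer-matching step described above, the sketch below scores predictions with a standard exact-match rule over pre-retrieved passages. The normalization rules are the conventional ones for open-domain QA scoring, and the retrieval stage (ColBERTv2 in the article) is assumed to have already produced each example's top passages.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (standard QA scoring)."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer."""
    return normalize(prediction) in {normalize(a) for a in gold_answers}

def accuracy(examples, generate_answer) -> float:
    """examples: dicts with 'question', 'answers', and pre-retrieved 'passages'."""
    correct = 0
    for ex in examples:
        # The top-10 passages are assumed to come from a retriever such as ColBERTv2.
        prediction = generate_answer(ex["question"], ex["passages"][:10])
        correct += exact_match(prediction, ex["answers"])
    return correct / len(examples)
```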

Looking ahead, we're working on several developments to continue improving our models' performance and user experience. First, we intend to use ORPO to fine-tune our models. ORPO combines traditional fine-tuning with preference alignment, while using a single preference-alignment dataset for both. We believe this will allow us to better align models to achieve better results for users.

Additionally, we intend to build a custom ensemble model from the various models we have fine-tuned thus far. Inspired by Mixture of Expert (MoE) model architectures, we intend to introduce a routing layer to our various models. We believe this will radically simplify our model serving and scaling architecture, while maintaining the quality in various tasks that our users have come to expect from our personal AI assistant.
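The routing idea can be pictured with a small dispatcher like the one below. The expert names and keyword heuristic are hypothetical placeholders; a production router of the kind described would more likely use a learned classifier over the request.

```python
# Hypothetical routing layer: classify the request, then dispatch it to the
# fine-tuned model best suited to handle it.
EXPERTS = {
    "research": "deep-researcher-llm",   # placeholder model names
    "code": "deep-coder-llm",
    "general": "advisor-llm",
}

def classify_intent(prompt: str) -> str:
    lowered = prompt.lower()
    if any(kw in lowered for kw in ("cite", "research", "sources")):
        return "research"
    if any(kw in lowered for kw in ("function", "bug", "code", "python")):
        return "code"
    return "general"

def route(prompt: str) -> str:
    """Return the name of the expert model that should handle this prompt."""
    return EXPERTS[classify_intent(prompt)]

print(route("Write a Python function to parse ISO dates"))  # -> deep-coder-llm
```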

Building next-gen AI agents to make everyone more productive is NinjaTech AI's pathway to achieving its mission. To democratize access to this transformative technology, it is critical to have access to high-powered compute, open source models, and an ecosystem of tools that make training each new agent affordable and fast. AWS's purpose-built AI chips, access to the top open source models, and its training architecture make this possible.

To learn more about how we built NinjaTech AI's multi-agent personal AI, you can read our whitepaper. You can also try these AI agents for free at MyNinja.ai.

Arash Sadrieh is the Co-Founder and Chief Science Officer at Ninjatech.ai. Arash co-founded Ninjatech.ai with a vision to make everyone more productive by taking care of time-consuming tasks with AI agents. This vision was shaped during his tenure as a Senior Applied Scientist at AWS, where he drove key research initiatives that significantly improved infrastructure efficiency over six years, earning him multiple patents for optimizing core infrastructure. His academic background includes a PhD in computer modeling and simulation, with collaborations with esteemed institutions such as Oxford University, Sydney University, and CSIRO. Prior to his industry tenure, Arash had a postdoctoral research tenure marked by publications in high-impact journals, including Nature Communications.

Tahir Azim is a Staff Software Engineer at NinjaTech. Tahir focuses on NinjaTech's Inf2- and Trn1-based training and inference platforms, its unified gateway for accessing these platforms, and its RAG-based research skill. He previously worked at Amazon as a senior software engineer, building data-driven systems for optimal utilization of Amazon's global Internet edge infrastructure, driving down cost, congestion, and latency. Before moving to industry, Tahir earned an M.S. and Ph.D. in Computer Science from Stanford University, taught for three years as an assistant professor at NUST (Pakistan), and did a postdoc in fast data analytics systems at EPFL. Tahir has authored several publications presented at top-tier conferences such as VLDB, USENIX ATC, MobiCom, and MobiHoc.

Tengfei Xue is an Applied Scientist at NinjaTech AI. His current research interests include natural language processing and multimodal learning, particularly using large language models and large multimodal models. Tengfei completed his PhD studies at the School of Computer Science, University of Sydney, where he focused on deep learning for healthcare using various modalities. He was also a visiting PhD candidate at the Laboratory of Mathematics in Imaging (LMI) at Harvard University, where he worked on 3D computer vision for complex geometric data.

Read the original here:
The future of productivity agents with NinjaTech AI and AWS Trainium | Amazon Web Services - AWS Blog

Read More..

Generative AI vs. AI: Advantages, Limitations, Ethical Considerations – eWeek

Generative artificial intelligence (AI) is valued for its ability to create new content, including text, images, video, and music. It uses AI algorithms to analyze patterns in datasets and mimic their style or structure to replicate different types of content, and it can also be used to create deepfake videos and voice messages.

Generative AI is a subset of artificial intelligence, which also includes a broad range of technologies that enable machines to perform tasks that once required human intelligence and judgment. It's often used to build systems with the cognitive capacity to mine data, and it continuously boosts its performance over the course of repeated events. Here's what you need to know about the benefits and logistics of using AI and generative AI, as well as the ethical concerns of which to be aware.

Both generative AI and artificial intelligence, sometimes called traditional AI, use machine learning algorithms to obtain their results. However, they have different goals and purposes. Generative AI is intended to create new content, while AI goes much broader and deeper; in essence, to wherever the algorithm coder wants to take it. AI's possible deployments include better decision-making, removing the tedium from repetitive tasks, or spotting anomalies and issuing alerts for cybersecurity. The following summary spells out the common differences between generative AI and AI:

To fully understand the relationship between generative AI and AI, it's necessary to understand each of these technologies at a deeper level.

Generative AI is an open-ended and rapidly evolving form of artificial intelligence. Its major characteristics include the following:

With its ability to use source data for any number of creative tasks, generative AI's use cases range from product design to software development to fraud detection.

Generative AI helps in creating innovative designs that meet specific performance criteria, from prototyping to optimization of design, while minimizing not only material use but also waste. Additionally, generative AI succeeds at creating highly personalized product experiences by analyzing user data to create products that align with the preferences and needs of individual users. This personalization can help with creating marketing and sales campaigns.

For the creative industries, generative AI can mimic various artistic styles, compose original music and even generate complete pieces of artwork. This application is expanding the horizons of creative expression and is being used by artists, musicians, and other content creators to increase their output.

Generative AI provides the ability to automate code generation, bug fixes, and optimization. This results in more efficient development cycles and higher-quality software. AI tools can also generate synthetic data for training and testing purposes, which plays an important role in developing robust AI applications.

Generative AI-powered chatbots and virtual assistants provide 24/7 assistance, personalize interactions, and handle complex queries. These tools raise customer satisfaction and operational efficiency by automating routine support tasks and offering faster responses than human operators.

In finance and insurance, generative AI is used to detect fraud and manage risk. It analyzes transaction patterns and identifies anomalies, then helps in creating detailed reports and summaries that aid in decision-making, thereby enhancing the overall security and reliability of financial operations.
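A minimal sketch of the anomaly-detection side of this use case, assuming scikit-learn and toy transaction features; the feature set and contamination rate are illustrative choices, not a production fraud model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount, hour_of_day, merchant_risk_score].
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(1000, 3))
suspicious = rng.normal(loc=[5000, 3, 0.9], scale=[500, 1, 0.05], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Fit an isolation forest; 'contamination' is the assumed share of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```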

Based on the significant advancements that keep enhancing generative AI's capabilities, its future is incredibly promising. Expect to see models becoming larger and more powerful, like GPT-4 and PaLM 2, which are revolutionizing content creation and personalized customer communications. Such models enable businesses to generate high-quality, human-like outputs more efficiently, with impact seen across many market sectors.

We can also expect to see generative AI models run on a wider variety of hardware devices, which will open up an array of use cases. A notable trend is the rise of multimodal AI models that can understand and generate content across several forms of data, such as text, images, and audio. The result? Users will get more immersive and natural user experiences, especially in fields like virtual reality and augmented reality.

Additionally, generative AI is driving new levels of personalization by improving how it adapts products and services to individual preferences. It's therefore seen as a particularly aggressive driver of change across the retail, marketing, and ecommerce sectors.

Although artificial intelligence has enjoyed an enormously higher profile over the last few years, the history of AI stretches back to the 1940s. This traditional AI is the basis for generative AI, and while there are major differences, there is also major overlap between these two technologies. To fully understand the topic, here's a deeper look at artificial intelligence itself.

Overall, traditional AI is focused on explicit programming to execute tasks with precision. The following are its core characteristics:

Artificial intelligence can compute exponentially faster than the fastest team of human experts, even as it handles far greater complexity. This capability enables an array of use cases, ranging from business automation to research and development to cybersecurity.

AI-driven automation is streamlining repetitive and manual business operations. Robotic process automation (RPA) uses AI to automate routine administrative tasks, freeing up human workers for more complex activities. AI algorithms are used to optimize supply chain management by predicting demand, managing inventory, and optimizing logistics.

In research and development (R&D), traditional AI accelerates innovation by analyzing huge datasets to identify patterns, predict outcomes, and generate new insights. In pharmaceuticals, AI helps drug discovery by predicting the efficacy of compounds and optimizing clinical trials. In engineering, AI models can be used to optimize product designs, which helps to lower the time and cost associated with bringing new products to market.

AI is increasingly used for predictive maintenance, with use cases like analyzing data from machinery to predict failures before they occur. This proactive approach helps schedule maintenance activities at optimal times. The benefits include lower downtime and extended equipment lifespans. Industries such as manufacturing, energy, and transportation are the biggest beneficiaries of predictive maintenance.

AI's role in cybersecurity and fraud detection includes analyzing network traffic and identifying potential threats in real time. AI algorithms detect anomalies and patterns associated with cyber attacks, which leads to faster and more accurate responses. AI-driven systems can automate responses to a variety of threats, reducing the risk of breaches and enhancing overall security.

AI-enabled forecasting models help financial leaders predict future trends. AI systems incorporate variables like mixed economic forecasts and non-traditional data sources, which allows for more reliable and comprehensive financial scenario planning and more specific revenue projections.

The future of AI involves handling ever more complex and multifaceted real-world scenarios. Innovations will likely focus on enhancing the adaptability of rule-based systems, making them more flexible and capable of dealing with unforeseen situations. Expect to see enhanced flexibility and the rise of multimodal systems capable of processing many data types simultaneously. This will allow AI to tackle more complex enterprise challenges across multiple domains and significantly broaden its impact.

Self-improving AI systems are also emerging. They leverage reinforcement learning and dynamic analysis for autonomous optimization of performance over time. This will further enhance adaptability and efficiency without constant human intervention.

The integration of traditional AI with generative AI is expected to create hybrid systems that deliver an exponentially more powerful combination. Innovations in AI hardware and infrastructure, including specialized AI processors, will support these advanced systems. This will allow traditional AI to provide more sophisticated solutions across an expanding array of use cases.

Generative AI and traditional AI face largely similar challenges in terms of ethics, including biases built into systems, job displacement and potential environmental impact.

AI systems can inadvertently magnify biases that were built into their training data. These biases can lead to unfair outcomes, particularly for marginalized groups. To ensure fairness in AI, whether generative or traditional AI, there needs to be meticulous scrutiny of the training data, implementation of bias mitigation strategies, and continual monitoring of AI systems for biased behavior. Techniques like algorithmic fairness reviews and bias audits are a step toward promoting equity and inclusivity in AI applications.
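One simple building block of such a bias audit is comparing selection rates across groups. The sketch below, using made-up decisions and a single protected attribute, computes a demographic parity gap as an example of the kind of check a fairness review might run.

```python
import pandas as pd

# Toy audit data: model decisions (1 = approved) alongside a protected attribute.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group: the share of positive outcomes each group receives.
rates = df.groupby("group")["approved"].mean()

# Demographic parity difference: gap between the highest and lowest selection rates.
# Values near zero suggest parity on this metric; large gaps warrant investigation.
dp_gap = rates.max() - rates.min()
print(rates.to_dict(), "gap:", round(dp_gap, 3))
```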

The security and privacy concerns raised by the deployment of AI technologies are pervasive. AI systems often need vast amounts of data, including personal and sensitive information, to function effectively. Whether generative or traditional, ensuring robust data protection measures and maintaining privacy throughout the AI lifecycle are critical. This includes implementing strong encryption, data anonymization techniques, and complying with regulations such as GDPR. Transparency about data usage and incorporating user consent is also essential in building trust and safeguarding privacy.

For the ever-increasing reach and use cases of AI, we need to be able to trust AI and hold the technology accountable, and many users do not trust AI systems. This trust is enabled by transparency in AI systems. Explainable AI (XAI) practices allow users and stakeholders to understand how AI algorithms make decisions. By providing clear and understandable explanations of AI processes, organizations can enhance user trust and facilitate better decision-making. A transparent system makes it easy to identify and address any ethical issues and to ensure AI systems are used responsibly.
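As one concrete example of an explainability technique, the sketch below uses SHAP values to attribute a model's predictions to individual features. The dataset and model are stand-ins chosen only to keep the example self-contained.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple model, then explain its predictions so stakeholders can see
# which features drove each decision.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# shap_values quantifies each feature's contribution to each prediction;
# summary plots (e.g. shap.summary_plot) turn this into a reviewable report.
```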

One of the greatest concerns about the rise of AI has been job displacement as automated systems replace human roles. Alleviating this issue calls for strategies for transitioning workforces to new or evolved roles, such as reskilling and upskilling programs to prepare employees for roles created by AI advancements. Organizations need to consider the broader social implications of deploying AI solutions and work to implement practices that strike a balance between technological progress and socioeconomic stability.

The deployment and training of large AI models, especially generative AI, requires significant computational resources, which leads to substantial energy consumption and environmental impact. Organizations using AI need to develop and implement energy-efficient AI models. They also need to optimize computational resources to minimize carbon footprints. Encouraging sustainable practices in AI development and operation is a must for reducing the environmental impact and promoting green AI technologies.

Artificial intelligence in all its forms is advancing at a remarkable rate, so it's advantageous for tech professionals to be knowledgeable about AI skills and developments. Here are relevant courses to help you use these technologies effectively. Please be aware that while each title below refers to generative AI, these courses all teach fundamental concepts that also cover overall AI technology.

This course provides a solid foundation in generative AI, covering fundamental concepts, model types, and practical applications. Its suitable for those who are new to the field and want to explore the potential of generative AI using Google Cloud tools like Vertex AI.

Andrew Ng's course offers a comprehensive introduction to generative AI. It cuts across the workings, uses, and impact of generative AI in various industries. The course also includes hands-on exercises for applying the concepts you learn practically.

Based on a partnership between AWS and DeepLearning.AI, this intermediate-level course goes into using large language models (LLMs) like GPT-4 for generative AI. It covers the architecture, training processes, practical applications of LLMs, and more. The course is designed for data scientists, AI developers, and anyone interested in mastering LLMs and applying them effectively in their work.

No, conversational AI and generative AI are related but distinct subsets of artificial intelligence. Conversational AI is designed to interact with users through dialogue, often used in chatbots and virtual assistants like Siri, Alexa, or Google Assistant. It focuses on understanding and generating human-like responses to deliver meaningful interactions. Generative AI, on the other hand, refers to AI systems that create new content based on learned patterns from existing data. While conversational AI can use generative AI techniques to give responses, generative AI covers a broader range of creative applications beyond just conversation.

Predictive AI focuses on analyzing existing data to forecast future events or trends. It uses techniques like regression analysis, time series analysis, and machine learning models to predict outcomes such as stock prices, weather conditions, or customer behaviors. Generative AI, however, aims to create new data rather than predict future events. It uses models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to generate new content that is similar to the training data.

Generative AI has rapidly gained popularity due to several key factors. The development of sophisticated models like GPT-4, GANs, and VAEs has significantly improved the quality and realism of generated content. Increased access to high-performance computing resources such as GPUs and cloud computing has enabled the training of complex generative models. The vast amount of data available for training these models has allowed them to learn from diverse and extensive datasets, enhancing their capabilities. Plus, the wide range of applications, from creative industries like art and music to practical uses such as text generation and synthetic data creation, has driven interest and investment in generative AI.

Generative AI and traditional AI each bring unique strengths and challenges to the table. Generative AI is geared for creativity, generating new and innovative content, and is seeing more integration into fields like art, music, and content creation. In contrast, traditional AI focuses on analyzing existing data to improve efficiency, accuracy, and decision-making, making it invaluable in sectors that value consistency and predictability such as finance, healthcare, and manufacturing.

As both these technologies continue to evolve rapidly, the differences between them will likely lessen, with generative AI's creativity and AI's data-crunching strength found side by side in many advanced applications.

Read our guide to the Top 20 Generative AI Tools and Apps 2024 to learn more about what platforms organizations are using to deploy these dynamic technologies across their businesses.

Follow this link:
Generative AI vs. AI: Advantages, Limitations, Ethical Considerations - eWeek

Read More..

Microsoft is a Leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms – Microsoft

Microsoft is a Leader in this year's Gartner Magic Quadrant for Data Science and Machine Learning Platforms. Azure AI provides a powerful, flexible end-to-end platform for accelerating data science and machine learning innovation while providing the enterprise governance that every organization needs in the era of AI.

In May 2024, Microsoft was also named a Leader for the fifth year in a row in the Gartner Magic Quadrant for Cloud AI Developer Services, where we placed furthest for our Completeness of Vision. We're pleased by these recognitions from Gartner as we continue helping customers, from large enterprises to agile startups, bring their AI and machine learning models and applications into production securely and at scale.

Azure AI is at the forefront of purpose-built AI infrastructure, responsible AI tooling, and helping cross-functional teams collaborate effectively using Machine Learning Operations (MLOps) for generative AI and traditional machine learning projects. Azure Machine Learning provides access to a broad selection of foundation models in the Azure AI model catalog, including the recent releases of Phi-3, JAIS, and GPT-4o, and tools to fine-tune or build your own machine learning models. Additionally, the platform supports a rich library of open-source frameworks, tools, and algorithms so that data science and machine learning teams can innovate in their own way, all on a trusted foundation.

Microsoft is named a Leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms

"We're now able to get a functioning model with relevant insights up and running in just a couple of weeks thanks to Azure Machine Learning. We've even managed to produce verified models in just four to six weeks."

Azure Machine Learning helps organizations build, deploy, and manage high-quality AI solutions quickly and efficiently, whether building large models from scratch, running inference on pre-trained models, consuming models as a service, or fine-tuning models for specific domains. Azure Machine Learning runs on the same powerful AI infrastructure that powers some of the world's most popular AI services, such as ChatGPT, Bing, and Azure OpenAI Service. Additionally, Azure Machine Learning's compatibility with ONNX Runtime and DeepSpeed can help customers further optimize training and inference time for performance, scalability, and power efficiency.

Whether your organization is training a deep learning model from scratch using open source frameworks or bringing an existing model into the cloud, Azure Machine Learning enables data science teams to scale out training jobs using elastic cloud compute resources and seamlessly transition from training to deployment. With managed online endpoints, customers can deploy models across powerful CPU and graphics processing unit (GPU) machines without needing to manage the underlying infrastructure, saving time and effort. Similarly, customers do not need to provision or manage infrastructure when deploying foundation models as a service from the Azure AI model catalog. This means customers can easily deploy and manage thousands of models across production environments, from on-premises to the edge, for batch and real-time predictions.
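For a sense of what this looks like in practice, the sketch below creates a managed online endpoint and a deployment with the Azure ML Python SDK v2. The workspace identifiers, endpoint name, and instance size are placeholders, and an MLflow-format model is assumed so that no scoring script is needed; treat it as an outline rather than a verbatim recipe.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create the endpoint, then attach a deployment on managed compute;
# Azure ML provisions and scales the underlying instances.
endpoint = ManagedOnlineEndpoint(name="demand-forecast-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=Model(path="./model", type="mlflow_model"),  # assumes an MLflow-format model
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```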

"Prompt flow helped streamline our development and testing cycles, which established the groundedness we required for making sure the customer and the solution were interacting in a realistic way."

Machine learning operations (MLOps) and large language model operations (LLMOps) sit at the intersection of people, processes, and platforms. As data science projects scale and applications become more complex, effective automation and collaboration tools become essential for achieving high-quality, repeatable outcomes.

Azure Machine Learning is a flexible MLOps platform, built to support data science teams of any size. The platform makes it easy for teams to share and govern machine learning assets, build repeatable pipelines using built-in interoperability with Azure DevOps and GitHub Actions, and continuously monitor model performance in production. Data connectors with Microsoft sources such as Microsoft Fabric and external sources such as Snowflake and Amazon S3 further simplify MLOps. Interoperability with MLflow also makes it seamless for data scientists to scale existing workloads from local execution to the cloud and edge, while storing all MLflow experiments, run metrics, parameters, and model artifacts in a centralized workspace.
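As a rough illustration of that MLflow interoperability, the snippet below points standard MLflow tracking calls at an Azure ML workspace. The tracking URI and experiment name are placeholders (the real URI is retrieved from the workspace), and the logged values are made up.

```python
import mlflow

# Placeholder tracking URI; in practice this is retrieved from the Azure ML
# workspace (for example via the studio UI or the MLClient).
mlflow.set_tracking_uri("azureml://<region>.api.azureml.ms/mlflow/v1.0/<workspace-path>")
mlflow.set_experiment("churn-model")

# Standard MLflow calls; runs, parameters, and metrics land in the central workspace.
with mlflow.start_run():
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("val_auc", 0.91)
    # Model artifacts can be logged too, e.g. mlflow.sklearn.log_model(model, "model").
```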

Azure Machine Learning prompt flow helps streamline the entire development cycle for generative AI applications with its LLMOps capabilities, orchestrating executable flows comprised of models, prompts, APIs, Python code, and tools for vector database lookup and content filtering. Azure AI prompt flow can be used together with popular open-source frameworks like LangChain and Semantic Kernel, enabling developers to bring experimental flows into prompt flow to scale those experiments and run comprehensive evaluations. Developers can debug, share, and iterate on applications collaboratively, integrating built-in testing, tracing, and evaluation tools into their CI/CD system to continually reassess the quality and safety of their application. Then, developers can deploy applications when ready with one click and monitor flows for key metrics such as latency, token usage, and generation quality in production. The result is end-to-end observability and continuous improvement.

"The responsible AI dashboard provides valuable insights into the performance and behavior of computer vision models, providing a better level of understanding into why some models perform differently than others, and insights into how various underlying algorithms or parameters influence performance. The benefit is better-performing models, enabled and optimized with less time and effort."

AI principles such as fairness, safety, and transparency are not self-executing. That's why Azure Machine Learning provides data scientists and developers with practical tools to operationalize responsible AI right in their flow of work, whether they need to assess and debug a traditional machine learning model for bias, protect a foundation model from prompt injection attacks, or monitor model accuracy, quality, and safety in production.

The Responsible AI dashboard helps data scientists assess and debug traditional machine learning models for fairness, accuracy, and explainability throughout the machine learning lifecycle. Users can also generate a Responsible AI scorecard to document and share model performance details with business stakeholders, for more informed decision-making. Similarly, developers in Azure Machine Learning can review model cards and benchmarks and perform their own evaluations to select the best foundation model for their use case from the Azure AI model catalog. Then they can apply a defense-in-depth approach to mitigating AI risks using built-in capabilities for content filtering, grounding on fresh data, and prompt engineering with safety system messages. Evaluation tools in prompt flow enable developers to iteratively measure, improve, and document the impact of their mitigations at scale, using built-in metrics and custom metrics. That way, data science teams can deploy solutions with confidence while providing transparency for business stakeholders.

Read more on Responsible AI with Azure.

"We needed to choose a platform that provided best-in-class security and compliance due to the sensitive data we require, and one that also offered best-in-class services, as we didn't want to be an infrastructure hosting company. We chose Azure because of its scalability, security, and the immense support it offers in terms of infrastructure management."

In today's data-driven world, effective data security, governance, and privacy require every organization to have a comprehensive understanding of their data and AI and machine learning systems. AI governance also requires effective collaboration between diverse stakeholders, such as IT administrators, AI and machine learning engineers, data scientists, and risk and compliance roles. In addition to enabling enterprise observability through MLOps and LLMOps, Azure Machine Learning helps organizations ensure that data and models are protected and compliant with the highest standards of security and privacy.

With Azure Machine Learning, IT administrators can restrict access to resources and operations by user account or groups, control incoming and outgoing network communications, encrypt data both in transit and at rest, scan for vulnerabilities, and centrally manage and audit configuration policies through Azure Policy. Data governance teams can also connect Azure Machine Learning to Microsoft Purview, so that metadata on AI assets, including models, datasets, and jobs, is automatically published to the Microsoft Purview Data Map. This enables data scientists and data engineers to observe how components are shared and reused and examine the lineage and transformations of training data to understand the impact of any issues in dependencies. Likewise, risk and compliance professionals can track what data is used to train models, how base models are fine-tuned or extended, and where models are employed across different production applications, and use this as evidence in compliance reports and audits.

Lastly, with the Azure Machine Learning Kubernetes extension enabled by Azure Arc, organizations can run machine learning workloads on any Kubernetes clusters, ensuring data residency, security, and privacy compliance across hybrid public clouds and on-premises environments. This allows organizations to process data where it resides, meeting stringent regulatory requirements while maintaining flexibility and control over their MLOps. Customers using federated learning techniques along with Azure Machine Learning and Azure confidential computing can also train powerful models on disparate data sources, all without copying or moving data from secure locations.

Machine learning continues to transform the way businesses operate and compete in the digital era, whether you want to optimize your business operations, enhance customer experiences, or innovate. Azure Machine Learning provides a powerful, flexible machine learning and data science platform to operationalize AI innovation responsibly.

*Gartner, Magic Quadrant for Data Science and Machine Learning Platforms, By Afraz Jaffri, Aura Popa, Peter Krensky, Jim Hare, Raghvender Bhati, Maryam Hassanlou, Tong Zhang, 17 June 2024.

Gartner, Magic Quadrant for Cloud AI Developer Services, Jim Scheibmeir, Arun Batchu, Mike Fang, Published 29 April 2024.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from this link.

Here is the original post:
Microsoft is a Leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms - Microsoft

Read More..

The 10 Hottest Data Science And Machine Learning Tools Of 2024 (So Far) – CRN

Here's a look at 10 data science and machine learning tools that solution and service providers should be aware of.

Deep Thoughts

Data science and machine learning technologies have long been important for data analytics tasks and predictive analytical software. But with the wave of artificial intelligence and generative AI development in 2023, the importance of data science and machine learning tools has risen to new heights.

One absolute truth about AI systems is that they need huge amounts of data to be effective.

Data science combines math and statistics, advanced analytics, specialized programming and other skills and tools to help uncover actionable insights within an organization's data. The global data science tool market reached $8.73 billion last year and will nearly double to $16.85 billion by 2030, according to 24MarketReports.

Machine learning systems make business-outcome decisions and predictions based on algorithms and statistical models that analyze and draw inferences from huge amounts of data. The worldwide machine learning market is expected to reach $79.29 billion this year, according to Statista, and grow at a 36 percent CAGR to $503.40 billion by 2030.

Here's a look at some of the hottest data science and machine learning tools in use today. Some of the following tools are relatively new to the market while others have been around for a while and recently updated. The list also includes both commercial products and open-source software.

Amazon SageMaker

Amazon SageMaker is one of Amazon Web Services' (AWS) flagship AI and machine learning software tools and is one of the most prominent machine learning products in the industry.

In November, at the AWS re:Invent extravaganza, AWS expanded SageMaker's functionalities with five new capabilities that the company said help accelerate the building, training, and deployment of large language models and other foundation machine learning models that power generative AI.

One new capability enhances SageMaker's ability to scale models by accelerating model training time, while another optimizes managed ML infrastructure operations by reducing deployment costs and model latency.

The new SageMaker Clarify tool makes it easier to select the right model based on quality parameters that support responsible use of AI. A new no-code feature in SageMaker Canvas makes it possible to prepare data using natural language instructions. And Canvas continues to democratize model building and customization, AWS said, by making it easier to use models to extract insights, make predictions and generate content using an organizations proprietary data.

AWS also offers Amazon Machine Learning, a more highly automated tool for building machine learning models.

Anaconda Distribution for Python

Python has become the most popular programming language overall, but it has long been used by data scientists for development in data analytics, AI and machine learning. Anaconda's distribution of the open-source Python system is one of the most widely used data science and AI platforms.

In addition to its distribution of Python, Anaconda offers its Data Science and AI Workbench platform that data science and machine learning teams use for expediting model development and deployment while adhering to security and governance requirements.

Over the last year Anaconda has established alliances with major IT vendors to expand the use of its platform. In April Anaconda announced a partnership to integrate its Anaconda Python Repository with Teradata's VantageCloud and ClearScape Analytics. A collaboration with IBM announced in February provides watsonx.ai users with access to the Anaconda software repository. And in August 2023 the company unveiled the Anaconda Distribution for Python in Microsoft Excel.

ClearML

ClearML's platform, designed for data scientists and data engineers, automates and simplifies the development and management of machine learning solutions. The system provides a comprehensive lineup of capabilities spanning data science, data management, MLOps, and model orchestration and deployment.

In March startup ClearML added new orchestration capabilities to its platform to expand control over AI infrastructure management and compute costs while maximizing the use of compute resources and improving model serving visibility.

Also in March, the company introduced an open-source fractional GPU tool that helps businesses maximize their GPU utilization by enabling multi-tenancy for all Nvidia GPUs.

Databricks Mosaic AI

At Databricks' recent Data + AI Summit the company unveiled a number of new capabilities for its Mosaic AI software for building and deploying production-quality ML and GenAI applications.

Databricks acquired MosaicML in June 2023 in a blockbuster $1.3-billion deal and has been integrating the startup's technology with its data lakehouse platform. (Databricks has since rebranded the product as Mosaic AI.)

The latest capabilities in Mosaic AI include support for building compound AI systems, new functionality to improve model quality, and AI governance tools. Databricks said the innovations give users the confidence to build and measure production-quality applications, delivering on the promises of generative AI for their business.

Dataiku

The Dataiku platform offers a comprehensive lineup of data science, machine learning and AI capabilities including machine learning development, MLOps, data preparation, DataOps, visualization, analytical applications and generative AI.

In September 2023, Dataiku launched LLM Mesh, a new tool for integrating large language models within the enterprise that the company called "the common backbone for Gen AI applications." LLM Mesh capabilities include universal AI service routing, secure access and auditing for AI services, performance and cost tracking, and safety provisions for private data screening and response moderation.

In April, Dataiku debuted LLM Cost Guard, a new capability within LLM Mesh that creates standards for tracking and optimizing generative AI use cases.

dotData Feature Factory 1.1

dotData's Feature Factory is an automated feature discovery and engineering platform that helps data scientists discover and engineer features within large-scale datasets for AI and machine learning projects.

In Feature Factory version 1.1, introduced in May, the company provided significant enhancements including new data quality assessment capabilities, support for user-defined features and interactive feature selection, improved support for AutoML through the Python-based PyCaret AutoML library, and preview support for generative AI feature discovery.
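
Feature Factory's own API is proprietary, but the PyCaret integration mentioned above points at the kind of AutoML workflow involved. A minimal, generic PyCaret sketch (the DataFrame and column names are hypothetical) might look like this:

```python
import pandas as pd
from pycaret.classification import setup, compare_models, finalize_model

# Hypothetical dataset with engineered features and a binary target column.
df = pd.read_csv("customer_features.csv")

exp = setup(data=df, target="churned", session_id=42)
best = compare_models()        # trains and ranks a library of candidate models
model = finalize_model(best)   # refits the winning model on the full dataset
```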

Hopsworks MLOps Platform

The Hopsworks platform is used to develop, deploy and monitor AI/ML models at scale.

The core of the serverless system is its machine learning feature store for storing data for ML models running on AWS, Azure and Google Cloud platforms and in on-premises systems. The Hopsworks platform also provides machine learning pipelines and a comprehensive development toolkit.
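
To make the feature-store idea concrete, here is a minimal sketch using the Hopsworks Python client; the project, feature group and column names are invented for illustration, and an API key is assumed to be configured.

```python
import hopsworks
import pandas as pd

# Log in to a Hopsworks project (API key taken from environment/config).
project = hopsworks.login()
fs = project.get_feature_store()

# A toy batch of engineered features keyed by customer id.
features = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "avg_basket_value": [42.0, 17.5, 88.2],
    "orders_last_30d": [3, 1, 7],
})

fg = fs.get_or_create_feature_group(
    name="customer_activity",
    version=1,
    primary_key=["customer_id"],
    description="Per-customer activity features for churn models",
)
fg.insert(features)
```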

Hopsworks 3.7, which the company called "the GenAI release," became generally available in March with new capabilities to support GenAI and large language model use cases. It also introduced feature monitoring, a new notification service to track changes to specific features, and support for the Delta Lake data storage format.

Founded in Sweden in 2016, Hopsworks has offices in Stockholm, London and Palo Alto, Calif.

Obviously AI

A problem faced by many businesses is the shortage of people with data science and machine learning expertise. Obviously AI looks to close that gap with its no-code AI/ML platform that allows people without technical backgrounds to build and train machine learning models.

The platform helps users quickly build models that run predictions on historical data, covering everything from sales and revenue forecasting to energy consumption and population growth.

"Because data science shouldn't feel like rocket science," the company's website says.

PyTorch

PyTorch is a powerful open-source framework and deep learning library for data scientists who are building and training deep learning models.

PyTorch is popular for such applications as computer vision, natural language processing, image classification and text generation. It can be used for a variety of algorithms including convolutional neural networks, recurrent neural networks and generative adversarial networks, according to a LinkedIn posting by data scientist and analysis expert Vitor Mesquita.
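
For readers new to the framework, a minimal toy training loop illustrates the core PyTorch workflow of defining a model, computing a loss and backpropagating; the data here is random and purely illustrative.

```python
import torch
import torch.nn as nn

# Tiny classifier trained on random data, just to show the define/forward/backward loop.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```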

PyTorch 2.3 was released on April 24.

PyTorch grew out of the Lua-based Torch framework and was released by Facebook's AI research lab in 2017. Today PyTorch is part of the Linux Foundation and is available through the pytorch.org website.

PyTorch and TensorFlow are generally seen as the top, and often competing, open-source data science and machine learning systems, according to a Projectpro.com comparison. PyTorch is often considered better for smaller-scale research projects, while TensorFlow is more widely used for production-scale projects.

TensorFlow

TensorFlow is a popular open-source, end-to-end machine learning platform and library for building ML models that can run in any environment. The system handles data preprocessing, model building and model training tasks.
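
A comparably small sketch shows TensorFlow's bundled Keras API handling the build, compile and fit steps described above; the data is random and only meant to illustrate the flow.

```python
import numpy as np
import tensorflow as tf

# Random stand-in data for a binary classification task.
X = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32)
```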

TensorFlow, generally seen as an alternative to PyTorch, was originally developed by the Google Brain team for internal research and production tasks, particularly around machine learning and deep learning neural networks. It was originally released as open-source software under the Apache License 2.0 in November 2015.

Google continues to own and maintain TensorFlow, which is available through the tensorflow.org community website. A major update, TensorFlow 2.0, was released in September 2019.

Go here to read the rest:
The 10 Hottest Data Science And Machine Learning Tools Of 2024 (So Far) - CRN

Read More..

AI goes nuclear: INL expo showcases machine learning and artificial intelligence – East Idaho News

IDAHO FALLS – Artificial intelligence is transforming the way the nuclear industry works, and Idaho National Laboratory is leading the way, developing applications that streamline processes while improving safety at nuclear power plants.

INL scientists showcased 15 projects on Artificial Intelligence (AI) and Machine Learning at an expo at the Energy Innovation Laboratory in Idaho Falls on Tuesday.

"We're here to learn about some of the incredible science happening related to artificial intelligence and machine learning," said Katya Le Blanc, human factors scientist at Idaho National Laboratory. "We're also developing technologies that can eventually be deployed by the nuclear industry and be used by nuclear utilities."

According to a lab news release, computers that mimic cognitive functions and apply advanced algorithms can help researchers analyze and solve a variety of complex technical challenges. This new approach helps with everything from improving materials design for advanced reactors to making nuclear power plant control rooms more effective and efficient.

Technologies on display at the conference included RAVEN, the Risk Analysis Virtual ENvironment, which provides an open-source, multi-purpose framework for machine learning, artificial intelligence and digital twinning.

One machine learning technology, called Inspection Portal, is part of the light water reactor sustainability program; it analyzes and aggregates data from human-submitted reports to identify trends and help optimize the operation of nuclear power plants.

The program's machine learning models are trained on millions of records from across the industry.
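
The article doesn't describe Inspection Portal's internals, but a toy sketch of the general approach, clustering free-text condition reports to surface recurring issues, could look like the following (the report text and cluster count are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for operator-submitted reports; real systems train on millions of records.
reports = [
    "valve V-102 found partially closed during rounds",
    "valve V-102 position indicator mismatch with control room display",
    "minor corrosion observed on feedwater pump mounting bolts",
    "surface corrosion noted on auxiliary pump housing",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, report in zip(labels, reports):
    print(label, report)
```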

"We can do things here at the INL that no one else can do," said Brian Wilcken, nuclear science and technology data scientist. "Utility companies try to do things like this. They can't touch it. We have so much data we can train extremely powerful models."

Other AI systems provide image detection to read gauges, pinpoint anomalies and determine if a valve has been turned, if a screw is corroded or if a fire breaks out in a nuclear plant. These advancements could reduce the need for personnel to perform menial checks at a nuclear power plant and free up manpower for higher-level work and applications.

Additional tools evaluate "the economics of different energy mixes and how to analyze the best cost-benefit and other factors (such as) the reliability associated with energy systems," Le Blanc said.

These systems can determine the proper output needed from a nuclear power plant, a hydro plant and a solar facility to meet people's demand for electricity when they need it, while also optimizing economic benefit, she said.

Some of the applications utilize existing AI programs, while others were created in-house at Idaho National Laboratory.

"Sometimes, it requires that you develop it. There's not a model that can do what you need it to do, but sometimes there's something that already exists that you can adapt," Le Blanc said. "It varies depending on (the situation), but there's no reason to start from scratch."

The Artificial Intelligence and Machine Learning Expo is in its second year.

In the future, organizers hope to expand and collaborate with other experts in the AI space to further share the research occurring at Idaho National Laboratory.

"I read a lot of papers inside scientific journals related to AI," Le Blanc said. "Seeing how this stuff actually works, being able to mess around with it, play with it, talk to the researchers, see what they're doing and get direct access and ask them questions, that's just exciting!"

Read the original here:
AI goes nuclear: INL expo showcases machine learning and artificial intelligence - East Idaho News

Read More..

How Has Machine Learning Optimized Lending Decisions? – Block Telegraph

In the evolving landscape of financial services, machine learning is revolutionizing how institutions make lending decisions. From enhancing loan propensity and risk scoring to modernizing credit scoring, we've gathered insights from a Staff Machine Learning Engineer and a Chief AI Officer, among others, to share how this technology has optimized lending decisions. Here are the top five expert perspectives on the transformative impact of machine learning in the sector.

Statistical analysis has always been used in the financial lending space. We are now seeing machine learning supplement plain old statistics. The ML models deployed today serve two main purposes: loan propensity scoring and risk scoring.

The former estimates a user's propensity to take out a loan, and the latter estimates the probability of that loan being paid off. Together, these two models determine which users the marketing and sales teams contact, optimizing both the size and the quality of the outreach group.
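
As a hedged illustration of that two-model setup (not any particular lender's system), one could train separate propensity and repayment classifiers on historical outcomes and combine their scores to prioritize outreach; the data below is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for customer features and historical outcomes.
rng = np.random.default_rng(0)
X = rng.random((1000, 15))
took_loan = rng.integers(0, 2, 1000)   # did the customer take a loan when offered?
repaid = rng.integers(0, 2, 1000)      # was a funded loan paid off?

propensity_model = GradientBoostingClassifier().fit(X, took_loan)
risk_model = GradientBoostingClassifier().fit(X, repaid)

# Score prospects and rank outreach by the combined likelihood of take-up and repayment.
propensity = propensity_model.predict_proba(X)[:, 1]
repayment_prob = risk_model.predict_proba(X)[:, 1]
outreach_priority = propensity * repayment_prob
```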

With the rise in digital payment platforms, credit card companies now have access to high-quality spending data of their potential customers.

Although companies have always used traditional machine-learning models for computing credit scores and identifying target customers, they now implement reinforcement learning as the data is more readily available.

They create self-improving models which use, in addition to customer metrics, their own systems' feedback to correctly identify target customers.

Vertigo Bank is at the forefront of utilizing machine-learning technology to revolutionize lending decisions in a real-world setting. By leveraging machine-learning algorithms, the bank is able to optimize risk assessment, tailor offers to individual customers, detect fraudulent activities, and streamline the lending process for increased efficiency.

One of the key examples is the case study of Ryan Baldwin, a graphic designer seeking a personal loan from Vertigo Bank. Through the application of machine learning, the bank is able to analyze various data points related to Ryan's credit history, income, spending habits, and other relevant information to make an informed lending decision. This not only streamlines the loan approval process but also ensures that the offer presented to Ryan is personalized to his specific financial situation and needs.

Furthermore, the integration of machine-learning algorithms into Vertigo Bank's lending system allows for improved customer segmentation, fraud detection, process automation, decision-making, and regulatory compliance. By accurately segmenting customers based on their financial profiles, the bank can tailor offers and services to meet the unique needs of each segment. Additionally, the advanced fraud detection capabilities of machine-learning technology help in identifying and preventing potential fraudulent activities, safeguarding both the bank and its customers.

Moreover, the automation of various processes through machine-learning algorithms results in a more efficient and streamlined lending system. From loan application processing to approval decisions, machine learning helps in reducing manual intervention, minimizing errors, and speeding up the overall process. This not only enhances operational efficiency but also leads to a more seamless and convenient experience for customers like Ryan.

Overall, the implementation of machine-learning technology at Vertigo Bank leads to swift, personalized, and efficient loan approval experiences for customers. This, in turn, improves customer satisfaction, risk management, operational efficiency, and regulatory compliance within the lending system. By embracing the power of machine learning, Vertigo Bank is able to stay ahead of the curve in the competitive financial industry and provide its customers with cutting-edge lending solutions.

In my experience, one of the most transformative aspects of machine learning in financial institutions has been the use of predictive analytics to evaluate a borrower's creditworthiness. Previously, loan officers relied on credit scores and a handful of factors, sometimes excluding creditworthy borrowers who didn't fit the model. Now, machine learning algorithms can analyze vast datasets, including alternative data sources like cash-flow management or utility bill payments.

A couple of years ago, I helped a private lending institution in South Dakota develop an ML model that evaluated a business's cash-flow patterns and utility payments to assess its creditworthiness. For individuals, the ML model evaluated non-traditional indicators of reliability such as mobile phone usage, data usage, income analysis, alternative sources of income, etc. This helped them approve microloans to a new segment of the population who previously wouldn't have qualified, boosting financial inclusion.

As I have witnessed lending institutions struggle to acquire new customers and an entire demographic that remained untapped, I was quickly able to understand the need to focus on margin maximization and not just risk minimization. So, my advice is: Don't just rely on traditional credit scores. Look for data that reflects a borrower's financial responsibility. This way, both the lender and the borrower will benefit.

If you ask me, it's not a big change that will uproot a lender's established business but rather an intuitive one that molds itself to the unique requirements of each lending institution, whether banks, CDFIs, or private lenders. After all, the technology is rightly named: machine learning means the machine will keep on learning and modifying its functions to give lending institutions the power to make informed decisions, better serve their customers, and foster a more resilient and sustainable lending ecosystem, all while seamlessly integrating with their current operations.

Traditional scorecards are costly and time-consuming, requiring dedicated teams to manually adjust data for each client or product. They adapt slowly to economic changes and can introduce biases that affect lending fairness. In contrast, ML offers a much smarter solution. By analyzing historical data like demographics, transaction histories, and credit records, ML models evaluate a wide range of borrower traits. Advanced models like LightGBM and XGBoost handle complex data with high precision, processing over 600 data points to enhance credit score accuracy and provide a deeper understanding of credit risk.
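
As a rough, hypothetical sketch of this kind of model (synthetic data, illustrative hyperparameters, not any vendor's actual system), a gradient-boosted credit-risk classifier in LightGBM over a wide feature matrix could look like this:

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for the hundreds of borrower attributes mentioned above.
rng = np.random.default_rng(0)
X = rng.random((5000, 600))
y = rng.integers(0, 2, 5000)   # 1 = defaulted, 0 = repaid

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

clf = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05, num_leaves=63)
clf.fit(X_train, y_train, eval_set=[(X_val, y_val)])

# Probability of default, which would feed into a credit score or approval rule.
default_prob = clf.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, default_prob))
```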

In practice, the results are impressive. For example, fintech company Nextbank, which supplies banking software to leading Asian banks, asked us to help build one of the first ML-powered credit scoring systems. Using LightGBM and XGBoost, the system achieved a 97% accuracy rate, processing over 500 million loan applications and significantly reducing default risks.

One major advantage of ML in lending decisions is its ability to continuously improve by learning from new information. This ensures lending decisions are based on the most current and comprehensive data, leading to better risk management. Moreover, ML reduces bias in lending. By relying on actual repayment data instead of human judgment, ML models ensure fair and objective decision-making, meeting regulatory standards and promoting fair financial practices.

Traditional financial institutions often rely on manual processes for loan underwriting, resulting in slow decision-making. On average, closing a home loan takes 35 to 40 days. ML credit scoring can speed up this process by up to 30% through a smart combination of automation and predictive analytics for risk assessments.

As the financial sector continues to digitize, ML's role in lending will only grow. Its ability to analyze vast amounts of data, predict outcomes accurately, and adapt to new information not only optimizes lending decisions but also modernizes the financial services industry.

Original post:
How Has Machine Learning Optimized Lending Decisions? - Block Telegraph

Read More..

CZI Sci-Tech Convening Discusses AI Advances & Biology – Blog – Chan Zuckerberg Initiative

Seventy-five years ago, the mathematician and computer scientist Alan Turing posed a simple but powerful question that changed the course of technology: Are machines capable of thought?

Since then, artificial intelligence has advanced at an extraordinary pace, and today, it's opening the door to the digital age of biology.

From leveraging machine learning to help visualize the location and interactions of proteins within live cells, to training a deep-learning model that can predict the impact of perturbations in cell types or genes, the application of AI methodologies to make sense of and draw insights from massive amounts of scientific data is ushering in a new level of insight into human health and disease.

This theme was the main focus of CZI's recent science technology convening, which brought together computational biologists, engineers, data scientists, product designers, and leaders from across the organization and our family of scientific institutes to explore the frontiers of AI for biomedical research. CZIers and our collaborators led sessions on topics ranging from how machine learning is expediting the annotation of tomograms from cryo-electron tomography experiments to building customized ultraviolet microscopes to detect and diagnose malaria in low-resourced settings.

Industry AI experts, including Boris Power, head of applied research at OpenAI, and Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, also led talks about the promise of training AI models to expand the scientific community's foundational understanding of human biology.

Three main themes surfaced after two days of enriching discussions.

Take a closer look at these takeaways below.

"Biologists are going to have very strong simulations enabled by virtual cell models in a way that's not possible today," said Steve Quake, CZI's head of science, during the opening remarks. His point emphasized how AI will fundamentally change and accelerate the way scientists do research in the coming years.

For example, the virtual cell models CZI is building will be able to predict the response of immune cells to different genetic mutations faster and in more robust combinations than current methods, without the need to collect costly and invasive physical samples from patients. It's like having a combination lock for human biology: once you have the code, it will open up a host of new information about what happens when cells become diseased and what it takes for them to become healthy again.

Marinka Zitnik, assistant professor of biomedical informatics at Harvard Medical School and associate faculty at the Kempner Institute for the Study of Natural and Artificial Intelligence, led a session that further highlighted AI's role in transforming scientific research in the context of her day-to-day work. Zitnik, a CZI collaborator and Science in Society grant partner, highlighted how machine learning algorithms are being used to augment research and provide new insights at different time and spatial scales.

One example is SHEPHERD, a deep learning approach built by Zitnik's team that can provide individualized diagnoses of rare genetic diseases. Given the limited data on rare diseases, the model is pre-trained on known associations between variants, genes and phenotypes from patient-agnostic data. The model is then trained on simulated patient data before being fine-tuned in the real world, potentially speeding up diagnoses and improving patient outcomes.

When evaluated across 12 sites throughout the United States, SHEPHERD was able to nominate disease-causing genes for 75% of patients from a cohort affiliated with the Undiagnosed Diseases Network. The model also narrowed the tens of thousands of genes it prioritized down to the top five possible genes responsible for those diseases. By providing a broad characterization of novel diseases, helping researchers identify genes harboring mutations that can lead to disease, and connecting patients with similar genetic and phenotypic features for potential clinical follow-ups, SHEPHERD is fundamentally changing the way researchers like Zitnik study and develop potential therapeutic targets for rare diseases. This can shorten the time to diagnosis and improve outcomes for patients.

Over the last decade, scientists, academic research labs and philanthropic organizations like CZI have been collecting, aggregating and curating enormous amounts of detailed, high-resolution biological information about the trillions of cells within the human body. These datasets are sequence- or image-based, two complementary modalities that are fundamental to advancing biomedical research.

Manu Leonetti, director of systems biology at the Chan Zuckerberg Biohub San Francisco (CZ Biohub SF), and James Zou, associate professor of biomedical data science at Stanford University, led discussions about the opportunities in training AI on multimodal datasets. Leonetti, a cell biologist, described imaging as one of the foundational modalities for biology, allowing scientists to explore advanced techniques like transcriptomics under a microscope.

"Imaging has the power of being able to give us extremely dense multimodal profiles of cells," said Leonetti. "We can ask questions across scales while following cells in the context of their native environment, whether looking at cells in a dish, or tissues, or even at the scale of an entire organism."

New developments in deep learning are fueling the power of imaging. At the CZ Biohub SF, for example, Leonetti and his colleague Loic Royer are developing new algorithms to extract functional information from biological images. Royer's imaging AI team has also trained a de-noising algorithm called Aydin that dramatically increases the usability of microscopy images, along with tools that can recognize and quantify biological objects from complex images to accelerate analysis.
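
Aydin's own interface isn't described here, so as a generic stand-in, the scikit-image snippet below shows the kind of de-noising operation such tools automate for microscopy images; it is not Aydin's API, and the noise level is arbitrary.

```python
import numpy as np
from skimage import data, util
from skimage.restoration import denoise_nl_means, estimate_sigma

# Generic illustration only: add synthetic noise to a sample image, then denoise it.
noisy = util.random_noise(data.camera(), var=0.01)
sigma = float(np.mean(estimate_sigma(noisy)))
clean = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                         patch_size=5, patch_distance=6, fast_mode=True)
```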

Zou also shared examples of how generative AI is transforming biomedicine, including a case study showing how models can help identify and synthesize molecules to guide the development of antibiotics.

"Generative AI can really help us expand the search space," said Zou. "If we can use the help of AI to explore small molecules that we have not seen before in nature and are likely good drug candidates, that can likely be transformative for drug discovery."

The deep learning method Zou cited pinpointed a candidate molecule that could fight against various pathogens, including antibiotic-resistant bacteria. This breakthrough comes at an especially critical time, given the rise in antibiotic-resistant pathogens globally.

On the topic of modalities, Zou shared his perspective on why language can be a unifying framework for integrating vast amounts of biological information.

"The reason why I'm particularly excited about language is the knowledge that's summarized in written text," he said. "There's a lot more information captured in language beyond what's shared in numerical data."

To illustrate this point, Zou zeroed in on recent advances in protein modeling like ESMFold and AlphaFold, which draw correlation patterns from sequences. While these models are powerful tools for making predictions about protein structure, they aren't trained on existing literature about the role and function of different proteins.

However, Zou also said that fine-tuning these protein language models with information from existing literature, decades of knowledge summarized in papers, leads to a notable boost in the capability of these models.

Today, most of the fields AI models are designed for applications in specific research areas, whether in the context of identifying genetic mutations that can lead to rare diseases or identifying new molecules that can overpower antibiotic-resistant pathogens.

But in the future, CZI's goal is to build and train a general-purpose model of virtual cells that can transfer information across datasets and conditions, serve multiple queries concurrently, and unify data from different modalities.

Explore more: How AI Can Uncover the Laws of Biology

Theofanis Karaletsos, CZI's head of AI for science, provided attendees with a closer look at our vision for building a general-purpose model that can serve as a foundational resource for biomedical research. Karaletsos started his talk by highlighting the extraordinary amount of biological information generated over the last decade, which is breaking Moore's Law.

"By bridging the gap between these datasets and advances in AI, we get to the heart of where we want to be as machine learners," said Karaletsos. "We want to simulate a generative process such that at some coarse-grained level of causality, even if it doesn't get things exactly right at a fine level, we'll have useful models that will allow us to ask questions about the data and query them in interesting ways for counterfactuals."

To bring these virtual cell models online, the early days of CZI's AI strategy will focus on training models and making these models and the datasets used for training and validation available to the community, which will require deep cross-functional collaboration with our teams, AI/machine learning experts, and biologists using these models.

Ultimately, this approach will pave the way for an open, accessible digital platform for biology, which will house next-generation models and systems trained on expansive multimodal datasets. Scientists will be able to access these models via APIs and visualizations to pose complex questions and test theories about the fundamental mechanisms of human biology faster and more accurately than traditional experimentation methods and existing, more specialized generative AI models.

"Over time, we want this to handle basic biology tasks," Karaletsos concluded. "We hope it'll be useful for disease and ultimately for cellular engineering because we want to understand cells in a generative way."

Learn more about CZI's AI strategy for science and our vision to build predictive models of cells and cell systems.

Read more:
CZI Sci-Tech Convening Discusses AI Advances & Biology - Blog - Chan Zuckerberg Initiative

Read More..

Idaho National Laboratory hosts second annual AI and machine learning expo – Post Register

Read the original:
Idaho National Laboratory hosts second annual AI and machine learning expo - Post Register

Read More..