Sea-surface pCO2 maps for the Bay of Bengal based on advanced machine learning algorithms | Scientific Data – Nature.com

Read this article:
Sea-surface pCO2 maps for the Bay of Bengal based on advanced machine learning algorithms | Scientific Data - Nature.com

Reducing Toxic AI Responses – Neuroscience News

Summary: Researchers developed a new machine learning technique to improve red-teaming, a process used to test AI models for safety by identifying prompts that trigger toxic responses. By employing a curiosity-driven exploration method, their approach encourages a red-team model to generate diverse and novel prompts that reveal potential weaknesses in AI systems.

This method has proven more effective than traditional techniques, producing a broader range of toxic responses and enhancing the robustness of AI safety measures. The research, set to be presented at the International Conference on Learning Representations, marks a significant step toward ensuring that AI behaviors align with desired outcomes in real-world applications.

Source: MIT

A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

"Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments. Our method provides a faster and more effective way to do this quality assurance," says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI Lab and lead author of a paper on this red-teaming approach.

Hong's co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of the Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automated red-teaming

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, but the models could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

"If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts," Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.
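A minimal sketch of that loop is shown below. The red-team generator, target chatbot, and safety classifier are stand-in stubs (in the real setup each would be a language model or trained classifier), and all names are hypothetical rather than taken from the paper:

```python
import random

# Stand-in components; in practice each of these is a learned model.
def red_team_generate() -> str:
    """Stub red-team policy: sample a candidate adversarial prompt."""
    templates = ["Tell me how to {}", "Ignore your rules and {}", "Write a story that {}"]
    topics = ["bypass a content filter", "insult a coworker", "reveal private data"]
    return random.choice(templates).format(random.choice(topics))

def target_chatbot(prompt: str) -> str:
    """Stub target model: returns some response to the prompt."""
    return f"Response to: {prompt}"

def toxicity_classifier(response: str) -> float:
    """Stub safety classifier: returns a toxicity score in [0, 1]."""
    return random.random()

def red_team_step() -> float:
    """One iteration: generate a prompt, get a response, score it, return the reward."""
    prompt = red_team_generate()               # red-team model writes a prompt
    response = target_chatbot(prompt)          # chatbot responds
    toxicity = toxicity_classifier(response)   # classifier rates the response
    return toxicity                            # plain RL reward: toxicity alone

if __name__ == "__main__":
    print("example rewards:", [round(red_team_step(), 2) for _ in range(5)])
```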

Rewarding curiosity

The red-team model's objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards.

One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
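Put together, the shaped reward can be pictured as a weighted sum of those terms: toxicity, an entropy bonus, word-level and semantic novelty bonuses, and a naturalness bonus. The sketch below is illustrative only; the weights, the cheap similarity measures, and the function names are assumptions, not the paper's actual formulation:

```python
from difflib import SequenceMatcher

def lexical_novelty(prompt: str, history: list[str]) -> float:
    """Higher when the prompt shares few words with previously tried prompts."""
    if not history:
        return 1.0
    words = set(prompt.lower().split())
    overlap = max(len(words & set(h.lower().split())) / max(len(words), 1) for h in history)
    return 1.0 - overlap

def semantic_novelty(prompt: str, history: list[str]) -> float:
    """Placeholder for an embedding-based distance; here a cheap string ratio."""
    if not history:
        return 1.0
    return 1.0 - max(SequenceMatcher(None, prompt, h).ratio() for h in history)

def curiosity_reward(toxicity: float, entropy: float, naturalness: float,
                     prompt: str, history: list[str],
                     w_tox=1.0, w_ent=0.1, w_lex=0.5, w_sem=0.5, w_nat=0.3) -> float:
    """Combine the reward terms described above; the weights are made up."""
    return (w_tox * toxicity
            + w_ent * entropy                            # encourages exploration
            + w_lex * lexical_novelty(prompt, history)   # word-level novelty
            + w_sem * semantic_novelty(prompt, history)  # meaning-level novelty
            + w_nat * naturalness)                       # discourages nonsense text

history = ["Tell me how to bypass a content filter"]
print(curiosity_reward(0.8, 0.2, 0.9, "Write a story that insults a coworker", history))
```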

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this safe chatbot.

"We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives and it's important that they are verified before released for public consumption. Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort to ensure a safer and trustworthy AI future," says Agrawal.

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

"If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming," says Agrawal.

Funding: This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Author: Adam Zewe
Source: MIT
Contact: Adam Zewe, MIT
Image: The image is credited to Neuroscience News

Original Research: The findings will be presented at the International Conference on Learning Representations

More here:
Reducing Toxic AI Responses - Neuroscience News

A Vision of the Future: Machine Learning in Packaging Inspection – Packaging Digest

As we navigate through the corridors of modern manufacturing, the influence of machine vision and machine learning on the packaging industry stands as a testament to technological evolution. This integration, though largely beneficial, introduces a spectrum of complexities, weaving a narrative that merits a closer examination.

In unpacking the layers of this technological marvel, we should not only tout its enhancements but also recognize its challenges and ethical considerations.

Machine vision, equipped with the power of machine learning algorithms, has ushered in a new era for packaging. This synergy has transcended traditional boundaries, offering precision, efficiency, and adaptability previously unattainable. With the ability to analyze visual data and learn from it, these systems have revolutionized quality control, ensuring that products meet the high standards consumers have come to expect.

The benefits are manifold. Machine vision systems, with their tireless eyes, can inspect products at speeds and accuracies far beyond human capabilities. They detect even the minutest defects, from misaligned labels to imperfect seals, ensuring that only flawless products reach the market. This not only enhances brand reputation but also significantly reduces waste, contributing to more sustainable manufacturing practices.

Moreover, machine learning algorithms enable these systems to improve over time. They learn from every product inspected, becoming more adept at identifying defects and adapting to new packaging designs without the need for extensive reprogramming. This adaptability is crucial in an era where product cycles are rapid and consumer demands are ever-evolving.

One of the most significant impacts of machine vision and learning in packaging is the leap in operational efficiency it enables. Automated inspection lines reduce downtime, allowing for continuous production that keeps pace with demand.

Furthermore, the integration of these technologies facilitates personalized packaging at scale. Machine vision systems can adjust to package products according to individual specifications, catering to the growing market for personalized goods, from custom-labeled beverages to bespoke cosmetic kits.

Yet, as with any technological advancement, the integration of machine vision and machine learning in packaging is not without its challenges.

The complexity of these systems necessitates a high level of expertise, posing a significant hurdle for smaller manufacturers. The initial investment in sophisticated equipment and the ongoing need for skilled personnel to manage and interpret data can widen the technological divide, potentially pushing smaller players out of the competition.

Data privacy and security emerge as paramount concerns. Machine learning algorithms thrive on data, raising questions about the ownership and protection of the data collected during the packaging process. As these systems become more integrated into manufacturing operations, ensuring the security of sensitive information against breaches becomes a critical issue that manufacturers must address.

Moreover, the reliance on machine vision and learning systems introduces the risk of over-automation. While these technologies can enhance efficiency, there is a fine line between leveraging them to support human workers and replacing them altogether. The potential for job displacement raises ethical questions about the responsibility of manufacturers to their workforce and the broader societal implications of widespread automation.

The path forward requires a careful balancing act. Manufacturers must embrace the benefits of machine vision and learning while remaining cognizant of the potential pitfalls.

Investing in training and development programs can help mitigate the risk of job displacement, ensuring that workers are equipped with the skills needed to thrive in a technologically advanced workplace.

Transparency in data collection and processing, coupled with robust cybersecurity measures, can address privacy concerns, building trust among consumers and stakeholders. Moreover, manufacturers can adopt a phased approach to the integration of these technologies, allowing for gradual adaptation and minimizing disruption.

The impact of machine vision and machine learning on the packaging industry is undeniable, offering unparalleled enhancements in quality control, efficiency, and customization. Yet, as we chart this course of technological integration, we must navigate the complexities it introduces with foresight and responsibility.

By addressing the challenges head-on and adhering to ethical standards, the packaging industry can harness the full potential of these advancements, propelling itself towards a future that is not only more efficient and adaptable but also equitable and secure.

In this journey, the clear sight of progress must be guided by the wisdom to recognize its potential shadows, ensuring that the path we tread is illuminated by both innovation and integrity.

View post:
A Vision of the Future: Machine Learning in Packaging Inspection - Packaging Digest

From data to decision-making: the role of machine learning and digital twins in Alzheimer's Disease – UCI MIND

For patients experiencing cognitive decline due to Alzheimer's Disease (AD), choosing the most appropriate treatment course at the right time is of great importance. A key element to these decisions is the careful consideration of the available scientific evidence, particularly from randomized clinical trials (RCTs) such as the recent lecanemab trial. Translating RCT results into patient-level decisions, however, can be challenging. This is because trial results tell us about the outcomes of groups rather than individuals. A doctor must judge how similar their patient is to the groups studied in trials. For AD, where patients vary widely in clinical presentations and rates of cognitive decline, this may be a difficult task.

As a step towards more personalized decision-making, prescribing physicians may focus on specific patient characteristics that would affect the disease course and response to treatment, like demographics (e.g., sex, age, education) or genetic factors. In fact, subgroup analyses from some RCTs suggest that at least some drugs could differ in safety or efficacy based on these factors. Nevertheless, the main limitations of these types of results are that the group sizes are often small, increasing the risk of spurious findings. Furthermore, they do not consider the overall impact of many different factors simultaneously. This is where machine learning (ML) may close the gap between data and decision-making.

ML uses patterns found in large datasets to predict health outcomes and treatment response by considering many patient characteristics at once and, further, how they may interact. This underlying model can subsequently be used to form a digital twin for a patient, or the best possible copy of their characteristics and health status. We can use this twin to ask "what if" questions. For example, "If we prescribed this patient this drug at this time, what would be their most likely outcome six months from now?" Under the hood, an ML algorithm would utilize previously collected data, such as from RCTs, to locate potential twins and use their outcomes to formulate a response. This could give us a more pinpointed prediction of patient outcomes compared to subgroup analyses. Ideally, this targeted view on patients would help facilitate better care for AD patients.
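One simple way to picture the "locate potential twins" step is nearest-neighbor matching on patient characteristics. The sketch below is purely illustrative: the features, the tiny made-up dataset, and the use of scikit-learn's NearestNeighbors are assumptions for demonstration, not the method of any particular AD study.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical treated-arm records: [age, years of education, APOE4 copies, baseline score]
trial_X = np.array([
    [72, 12, 1, 24.0],
    [68, 16, 0, 27.0],
    [80, 10, 2, 21.0],
    [75, 14, 1, 25.0],
])
# Made-up six-month cognitive outcomes observed for those treated participants.
trial_outcomes = np.array([23.0, 26.5, 18.0, 24.0])

# New patient whose "digital twin" outcome under treatment we want to estimate.
patient = np.array([[74, 13, 1, 24.5]])

# Find the most similar treated participants and average their outcomes.
nn = NearestNeighbors(n_neighbors=2).fit(trial_X)
_, idx = nn.kneighbors(patient)
print(f"Estimated outcome if treated: {trial_outcomes[idx[0]].mean():.1f}")
```

In practice the features would be standardized, treatment and control arms modeled separately, and far richer models used; the snippet only illustrates the matching intuition.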

The stage is set for digital twins to play a bigger role in clinical research and practice in AD: we have the methodology, the data, and, most importantly, a large unmet clinical need for new and more effective treatments. Digital twins can be integrated in a wide variety of contexts that can potentially save clinical trial costs, quicken the time until approval, and better utilize the treatments we already have for the patients that need them the most. For these reasons, biotech companies, academic researchers, and healthcare systems alike should be investigating how digital twins can help assist their particular goals.

To learn more about real-world opportunities and considerations surrounding digital twins, please check out my latest post on my Substack.

Roy S. Zawadzki, graduate trainee with Professor Daniel Gillen and supported by the TITAN T32 training grant

Read this article:
From data to decision-making: the role of machine learning and digital twins in Alzheimer's Disease - UCI MIND

AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services – AWS Blog

AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019.

AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA's latest technologies.

Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, chatted with AWS CEO Adam Selipsky on stage, discussing how NVIDIA and AWS are working together to enable millions of developers to access powerful technologies needed to rapidly innovate with generative AI. NVIDIA is known for its cutting-edge accelerators and full-stack solutions that contribute to advancements in AI. The company is combining this expertise with the highly scalable, reliable, and secure AWS Cloud infrastructure to help customers run advanced graphics, machine learning, and generative AI workloads at an accelerated pace.

The collaboration between AWS and NVIDIA further expanded at GTC 2024, with the CEOs from both companies sharing their perspectives on the collaboration and state of AI in a press release:

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers," says Adam Selipsky, CEO of AWS. "NVIDIA's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique AWS Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud."

"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," says Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."

On the first day of the NVIDIA GTC, AWS and NVIDIA made a joint announcement focused on their strategic collaboration to advance generative AI. Huang included the AWS and NVIDIA collaboration on a slide during his keynote, highlighting the following announcements. The GTC keynote had over 21 million views within the first 72 hours.

By March 22, AWS's announcement with NVIDIA had generated 104 articles mentioning AWS and Amazon. The vast majority of coverage mentioned AWS's plans to offer Blackwell-based instances. Adam Selipsky appeared on CNBC's Mad Money to discuss the long-standing collaboration between AWS and NVIDIA, among the many other ways AWS is innovating in generative AI, stating that AWS has been the first to bring many of its GPUs to the cloud to drive efficiency and scalability for customers.

Project Ceiba has also been a focus in media coverage. Forbes referred to Project Ceiba as the most exciting project by AWS and NVIDIA, stating that it should accelerate the pace of innovation in AI, making it possible to tackle more complex problems, develop more sophisticated models, and achieve previously unattainable breakthroughs. The Next Platform ran an in-depth piece on Ceiba, stating that the size and the aggregate compute of the Ceiba cluster are both being radically expanded, which will give AWS a very large supercomputer in one of its data centers, and NVIDIA will use it to do AI research, among other things.

"Live from GTC" was an on-site studio at GTC for invited speakers to have a fireside chat with tech influencers like VentureBeat. Chetan Kapoor, Director of Product Management for Amazon EC2 at AWS, was interviewed by VentureBeat at the Live from GTC studio, where he discussed AWS's presence and highlighted key announcements at GTC.

The AWS booth showcased generative AI services, like the LLMs with Anthropic and Cohere on Amazon Bedrock, PartyRock, Amazon Q, Amazon SageMaker JumpStart, and more. Highlights included:

During GTC, AWS invited 23 partner and customer solution demos to join its booth with either a dedicated demo kiosk or a 30-minute in-booth session. Such partners and customers included Ansys, Anthropic, Articul8, Bria.ai, Cohere, Deci, Deepbrain.AI, Denali Advanced Integration, Ganit, Hugging Face, Lilt, Linker Vision, Mavenir, MCE, Media.Monks, Modular, NVIDIA, Perplexity, Quantiphi, Run.ai, Salesforce, Second Spectrum, and Slalom.

Among them, high-potential early-stage startups in generative AI across the globe were showcased with a dedicated kiosk at the AWS booth. The AWS Startups team works closely with these companies by investing and supporting their growth, offering resources through programs like AWS Activate.

NVIDIA was one of the 45 launch partners for the new AWS Generative AI Competency program. The Generative AI Center of Excellence for AWS Partners team members were on site at the AWS booth, presenting this program for both existing and potential AWS partners. The program offers valuable resources along with best practices for all AWS partners to build, market, and sell generative AI solutions jointly with AWS.

Watch a video recap of the AWS presence at NVIDIA GTC 2024. For additional resources about the AWS and NVIDIA collaboration, refer to the AWS at NVIDIA GTC 2024 resource hub.

Julie Tang is the Senior Global Partner Marketing Manager for Generative AI at Amazon Web Services (AWS), where she collaborates closely with NVIDIA to plan and execute partner marketing initiatives focused on generative AI. Throughout her tenure at AWS, she has held various partner marketing roles, including Global IoT Solutions, AWS Partner Solution Factory, and Sr. Campaign Manager in Americas Field Marketing. Prior to AWS, Julie served as the Marketing Director at Segway. She holds a Master's degree in Communications Management with a focus on marketing and entertainment management from the University of Southern California, and dual Bachelor's degrees in Law and Broadcast Journalism from Fudan University.

Read the original post:
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services - AWS Blog

High-resolution meteorology with climate change impacts from global climate model data using generative machine … – Nature.com

Read the original post:
High-resolution meteorology with climate change impacts from global climate model data using generative machine ... - Nature.com

Google Cloud Next 2024: Pushing the Next Frontier of AI – Technology Magazine

The company's updated AI offering is now available on Vertex AI, Google's platform to customise and manage a wide range of leading Gen AI models. The company says that more than one million developers are currently using Google's Gen AI via its AI Studio and Vertex AI tools.

Likewise, its AI Hypercomputer is now being used by leading AI companies such as Anthropic, AI21 Labs, Contextual AI, Essential AI and Mistral AI. The Hypercomputer aims to employ a system of performance-optimised hardware, open software and machine learning frameworks to enable companies to better advance their digital transformation strategies.

Multiple companies are already harnessing the power of Google Cloud AI, including forward-thinking organisations like Mercedes-Benz, Uber and Palo Alto Networks to bolster their existing services and improve customer experience.

Mercedes-Benz, for example, is harnessing Google AI to improve customer service in call centres and to further optimise their website experience.

As AI continues to drive transformative progress in the business world, Google Cloud is aiming to help organisations around the world to discover what's next.

Google Cloud is also introducing new features that aim to offer AI assistance so that its customers can work and code more efficiently, allowing them to better identify and resolve cybersecurity threats by taking direct action against attacks.

Google Cloud's product, Gemini in Threat Intelligence, utilises natural language to deliver insights about how threat actors behave. With Gemini's larger context window, users can analyse much larger samples of potentially malicious code and gain more accurate results.

These AI-driven tools will help businesses take more detailed action, preventing more catastrophic data breaches.

Currently, there is incredible customer innovation across a broad range of industries, including retail, transportation and more. Harnessing Gen AI to fast-forward innovation requires a secure business AI platform that offers end-to-end capabilities that are easy to integrate with existing systems within a business.

The rest is here:
Google Cloud Next 2024: Pushing the Next Frontier of AI - Technology Magazine

Why MLBOMs Are Useful for Securing the AI/ML Supply Chain – Dark Reading

COMMENTARY

The days of large, monolithic apps are withering. Today's applications rely on microservices and code reuse, which makes development easier but creates complexity when it comes to tracking and managing the components they use.

This is why the software bill of materials (SBOM) has emerged as an indispensable tool for identifying what's in a software app, including the components, versions, and dependencies that reside within systems. SBOMs also deliver deep insights into dependencies, vulnerabilities, and risks that factor into cybersecurity.

An SBOM allows CISOs and other enterprise leaders to focus on what really matters by providing an up-to-date inventory of software components. This makes it easier to establish and enforce strong governance and spot potential problems before they spiral out of control.

Yet in the age of artificial intelligence (AI), the classic SBOM has some limitations. Emerging machine learning (ML) frameworks introduce remarkable opportunities, but they also push the envelope on risk and introduce a new asset to organizations: the machine learning model. Without strong oversight and controls over these models, an array of practical, technical, and legal problems can arise.

That's where machine learning bills of materials (MLBOMs) enter the picture. The framework tracks names, locations, versions, and licensing for assets that comprise an ML model. It also includes overarching information about the nature of the model, training configurations embedded in metadata, who owns it, various feature sets, hardware requirements, and more.
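As a rough illustration of what such a record might hold, the snippet below assembles a minimal MLBOM-like document as plain JSON. All field names and values are illustrative placeholders, not taken from the CycloneDX specification or any particular tool.

```python
import json

mlbom = {
    "model": {
        "name": "churn-classifier",                    # hypothetical model name
        "version": "2.3.1",
        "owner": "data-science-team@example.com",
        "license": "Apache-2.0",
        "location": "s3://models/churn/2.3.1/model.pkl",
    },
    "training": {
        "framework": "scikit-learn==1.4.2",
        "hardware": "1x NVIDIA A10G, 16 vCPU, 64 GB RAM",
        "hyperparameters": {"n_estimators": 300, "max_depth": 8},
    },
    "data": [
        {
            "name": "customer-events",
            "version": "2024-03-01",
            "provenance": "warehouse.events_cleaned",   # where it came from, how it was cleaned
            "labeling": "manual review by support team",
        }
    ],
    "features": ["tenure_days", "avg_monthly_spend", "support_tickets_90d"],
}

print(json.dumps(mlbom, indent=2))
```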

CISOs are realizing that AI and ML require a different security model and the underlying training data and models that run them are frequently not tracked or governed. An MLBOM can help an organization avoid security risks and failures. It addresses critical factors like model and data provenance, safety ratings, and dynamic changes that extend beyond the scope of SBOM.

Because ML environments are in a constant state of flux and changes can take place with little or no human interaction, issues related to data consistency (including where it originated, how it was cleaned, and how it was labeled) are a constant concern.

For example, if a business analyst or data scientist determines that a data set is poisoned, the MLBOM simplifies the task of finding all the various touch points and models that were trained with that data.

Transparency, auditability, control, and forensic insight are all hallmarks of an MLBOM. With a comprehensive view of the "ingredients" that go into an ML model, an organization is equipped to manage its ML models safely.

Here are some ways to build a best practice framework around an MLBOM:

Recognize the need for an MLBOM: It's no secret that ML fuels business innovation and even disruption. Yet it also introduces significant risks that can extend to reputation, regulatory compliance, and legal issues. Having visibility into ML models is critically important.

Conduct essential due diligence: An MLBOM should integrate with the CI/CD pipeline and deliver a high level of clarity. Support for standard formats like JSON or OWASP's CycloneDX can unify SBOM and MLBOM processes.

Analyze policies, processes, and governance: It's essential to sync an MLBOM with an organization's workflows and business processes. This increases the odds that ML pipelines will work as intended, while minimizing risks related to cybersecurity, data privacy, compliance, and other risk-associated areas.

Use an MLBOM with machine learning gates: Rigorous controls and gateways lead to essential AI and ML guardrails. In this way, the business and the CSO can build on successes and harness ML to unlock greater cost savings, performance gains, and business value.

Machine learning is radically changing the business and IT landscape. By extending proven SBOM methodologies to ML through MLBOMs, it's possible to take a giant step toward boosting machine learning performance and protecting data and assets.

Read more:
Why MLBOMs Are Useful for Securing the AI/ML Supply Chain - Dark Reading

Read More..

The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps – DataDrivenInvestor

Image by author

Personalized medicine through data analytics and Machine Learning has revolutionized the healthcare industry by tailoring medical treatments to individual patients based on their unique characteristics. In recent years, data analytics and ML apps have become powerful tools to facilitate patient engagement and self-monitoring. This article explores the role of data analytics and Machine Learning in personalized medicine through healthcare apps, highlighting their importance, benefits, challenges, ethical considerations, and prospects.

Personalized medicine is like having a tailor-made healthcare plan. It considers your unique genetic makeup, lifestyle, and environment to provide more precise and effective treatments. This approach aims to deliver targeted therapies based on individual characteristics.

Healthcare apps have revolutionized the way we access and manage our health information. From tracking our daily steps to monitoring our heart rate, these apps have become essential tools in our quest for better health. What used to be basic fitness trackers have now evolved into comprehensive platforms that can analyze a vast amount of data to offer personalized insights and recommendations.

Data is the fuel that powers personalized medicine. It provides the necessary information to understand patterns, risks, and potential treatments for individuals. By analyzing vast amounts of data, such as genomic information, medical histories, and lifestyle factors, healthcare professionals can identify personalized treatment options and interventions.

Data analytics opens a whole new world of possibilities in personalized medicine. It allows healthcare providers to identify trends and correlations that may go unnoticed. This means faster and more accurate diagnoses, more effective treatment plans, and ultimately better health outcomes for patients. Data analytics also enables continuous learning and improvement by constantly refining treatment strategies based on real-world evidence.

Machine Learning is like having a computer that can learn and make decisions on its own. It's a branch of Artificial Intelligence (AI) that allows systems to analyze and interpret complex data patterns, discover insights, and make predictions or recommendations. In healthcare apps, Machine Learning algorithms can process large datasets and extract valuable information to enhance decision-making and improve patient outcomes.

Machine Learning algorithms can be embedded into healthcare apps, allowing them to continuously learn from user data and adapt their recommendations accordingly. For example, a fitness app can use Machine Learning to analyze a user's exercise habits, heart rate, and sleep patterns to provide personalized exercise routines and sleep recommendations. By leveraging Machine Learning, healthcare apps can become intelligent and proactive health companions.
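To make the idea concrete, here is a minimal, hypothetical sketch of the kind of learning loop described above, using scikit-learn and invented activity data; the features, labels, and recommendation logic are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical user history: [daily steps, resting heart rate (bpm), sleep hours]
X = np.array([
    [4000, 72, 5.5],
    [9500, 61, 7.8],
    [12000, 58, 8.1],
    [3000, 75, 5.0],
    [8000, 64, 7.2],
    [11000, 60, 7.9],
])
# Invented label: 1 = user reported feeling rested, 0 = not rested
y = np.array([0, 1, 1, 0, 1, 1])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new day of data and adapt the recommendation accordingly.
today = np.array([[5000, 70, 6.0]])
if model.predict(today)[0] == 0:
    print("Suggest an earlier bedtime and a light activity goal for tomorrow.")
else:
    print("Current routine looks on track; keep the existing plan.")
```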

The combination of data analytics and Machine Learning offers significant advantages in personalized medicine.

By combining the power of data analytics tools with Machine Learning algorithms, businesses can create visually engaging and interactive dashboards that present data in a way that is easy to digest and interpret.

With data analytics and ML, businesses can uncover valuable insights that drive informed decision-making and improve overall business performance.

By integrating data analytics and Machine Learning, businesses can automate data processing and analysis, saving valuable time and improving accuracy.

ML algorithms analyze data objectively and make decisions based on patterns and statistical models, minimizing the impact of human subjectivity.

While data analytics and Machine Learning offer immense potential in healthcare, there are ethical considerations that need to be addressed. One concern is the potential bias in algorithms. If the data used to train Machine Learning models is biased, it can lead to biased treatment recommendations or diagnoses, disproportionately impacting certain groups of patients.
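A simplified way to surface this kind of bias is to compare a model's error rates across patient groups. The sketch below does exactly that with invented predictions and group labels; it is a toy check, not a substitute for a full fairness audit.

```python
import numpy as np

# Invented example: true outcomes, model predictions, and a demographic group label per patient.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Compare recall (sensitivity) per group; a large gap is a red flag worth investigating.
for g in np.unique(group):
    mask = (group == g) & (y_true == 1)
    recall = (y_pred[mask] == 1).mean()
    print(f"group {g}: recall on positive cases = {recall:.2f}")
```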

Another ethical concern is the transparency of algorithms. Patients and healthcare providers need to understand how algorithms arrive at their recommendations or diagnoses. Lack of transparency can undermine trust in the healthcare system and raise concerns about the accountability of algorithms.

The use of data analytics and Machine Learning in healthcare apps necessitates the collection and analysis of personal health information. Privacy concerns arise as this sensitive data needs to be handled with the utmost care. Healthcare apps must employ robust security measures to safeguard patient data and comply with relevant privacy regulations.

Transparency in data usage and informed consent from patients are crucial. Patients should have control over how their data is used and be fully aware of the potential risks and benefits.

As data analytics continues to evolve, several emerging trends hold promise for personalized medicine. One such trend is the integration of data from wearables and Internet of Things (IoT) devices. This real-time data collection allows for more accurate monitoring of patient health and enables timely interventions.

Another trend is the use of Natural Language Processing in analyzing unstructured medical data, such as doctors' notes or research papers. NLP algorithms can extract valuable insights from these vast amounts of text, aiding in personalized medicine research and decision-making.
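As a toy illustration of extracting structure from free text, the sketch below pulls medication and dosage mentions out of an invented clinical note with a regular expression; production medical NLP relies on far more sophisticated models, so treat this purely as an assumed example.

```python
import re

# Invented free-text note; real clinical notes are longer and messier.
note = ("Patient reports improved sleep. Continue metformin 500 mg twice daily; "
        "start lisinopril 10 mg once daily.")

# Naive pattern: a word followed by a numeric dose and a unit.
pattern = re.compile(r"([A-Za-z]+)\s+(\d+)\s*(mg|mcg|g)", re.IGNORECASE)

for drug, dose, unit in pattern.findall(note):
    print(f"medication: {drug:<12} dose: {dose} {unit}")
```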

Machine Learning advancements are opening doors to exciting possibilities in healthcare apps. One breakthrough area is the use of deep learning algorithms. These sophisticated neural networks can process complex medical images, such as MRI scans or histopathology slides, with remarkable accuracy, assisting doctors in diagnosis and treatment planning.
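For a sense of what such a model looks like in code, here is a deliberately tiny convolutional network in PyTorch; the architecture, input size, and two-class output are invented for illustration and bear no resemblance to clinically deployed systems.

```python
import torch
import torch.nn as nn

# Tiny illustrative CNN; real diagnostic models are far deeper and trained on large labeled image sets.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel input, e.g. a grayscale scan
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                            # two hypothetical classes: finding / no finding
)

# One forward pass on a fake 128x128 image to confirm the shapes line up.
scores = model(torch.randn(1, 1, 128, 128))
print(scores.shape)  # torch.Size([1, 2])
```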

Additionally, federated learning is gaining attention in healthcare. This approach allows Machine Learning models to be trained on decentralized data sources without sharing the raw data, preserving patient privacy while still benefiting from the collective knowledge present in diverse datasets.
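The core idea of federated averaging can be sketched in a few lines: each site trains on its own data, and only the resulting model weights are shared and averaged. The example below uses invented linear-regression data for three hypothetical hospitals; it is a minimal sketch, not a production federated learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data_x, data_y, lr=0.1, steps=20):
    """One client's local training: a few gradient steps of linear regression on its own data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

# Three hypothetical hospitals, each with private data that never leaves the site.
true_w = np.array([1.0, -2.0, 0.5])
features = [rng.normal(size=(50, 3)) for _ in range(3)]
clients = [(x, x @ true_w + rng.normal(scale=0.1, size=50)) for x in features]

global_w = np.zeros(3)
for _ in range(10):
    # Each client trains locally; only the model weights are shared and averaged.
    local_ws = [local_update(global_w, x, y) for x, y in clients]
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2))  # should approach [1.0, -2.0, 0.5]
```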

Data analytics and Machine Learning have the potential to revolutionize personalized medicine through healthcare apps. From improving diagnostic accuracy to delivering personalized treatment recommendations, these technologies offer valuable insights and benefits in healthcare.

Go here to read the rest:
The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps - DataDrivenInvestor


Yann LeCun Emphasizes the Promise of AI – NYAS – The New York Academy of Sciences

Blog Article

Published April 8, 2024

By Nick Fetty

Yann LeCun, a Turing Award winning computer scientist, had a wide-ranging discussion about artificial intelligence (AI) with Nicholas Dirks, President and CEO of The New York Academy of Sciences, as part of the first installment of the Tata Series on AI & Society on March 14, 2024.

LeCun is the Vice President and Chief AI Scientist at Meta, as well as the Silver Professor at the Courant Institute of Mathematical Sciences at New York University. A leading researcher in machine learning, computer vision, mobile robotics, and computational neuroscience, LeCun has long been associated with the Academy, serving as a featured speaker at past machine learning conferences and as a juror for the Blavatnik Awards for Young Scientists.

As a postdoc at the University of Toronto, LeCun worked alongside Geoffrey Hinton, who's been dubbed the godfather of AI, conducting early research in neural networks. Some of this early work would later be applied to the field of generative AI. At the time, many of the field's foremost experts cautioned against pursuing such endeavors. He shared with the audience what drove him to pursue this work, despite the reservations some had.

"Everything that lives can adapt but everything that has a brain can learn," said LeCun. "The idea was that learning was going to be critical to make machines more intelligent, which I think was completely obvious, but I noticed that nobody was really working on this at the time."

LeCun joked that because of the field's relative infancy, he struggled at first to find a doctoral advisor, but he eventually pursued a PhD in computer science at the Université Pierre et Marie Curie, where he studied under Maurice Milgram. He recalled some of the limitations, such as the lack of large-scale training data and limited processing power in computers, during those early years in the late 1980s and 1990s. By the early 2000s, he and his colleagues began developing a research community to revive and advance work in neural networks and machine learning.

Work in the field really started taking off in the late 2000s, LeCun said. Advances in speech and image recognition software were just a couple of the instances LeCun cited that used neural networks in deep learning applications. LeCun said he had no doubt about the potential of neural networks once the data sets and computing power were sufficient.

Large language models (LLMs), such as ChatGPT or autocomplete, use machine learning to predict and generate plausible language. While some have expressed concerns about machines surpassing human intelligence, LeCun admits to holding the unpopular opinion that LLMs are not as intelligent as they may seem.

LLMs are developed using a finite number of words, or, more specifically, tokens, which are roughly three-quarters of a word on average, according to LeCun. He said that many LLMs are developed using as many as 10 trillion tokens.
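Taking those figures at face value, a quick back-of-the-envelope calculation shows what a 10-trillion-token corpus amounts to in words:

```python
tokens = 10e12            # 10 trillion training tokens, per the figure cited in the talk
words_per_token = 0.75    # a token is roughly three-quarters of a word, per the same estimate
words = tokens * words_per_token
print(f"~{words:.1e} words")  # roughly 7.5e12, i.e. about 7.5 trillion words
```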

While much consideration goes into deciding what tunable parameters will be used to develop these systems, LeCun points out that they're not trained for any particular task; they're basically trained to fill in the blanks. He said that more than just language needs to be considered to develop an intelligent system.

"That's pretty much why those LLMs are subject to hallucinations, which really you should call confabulations. They can't really reason. They can't really plan. They basically just produce one word after the other, without really thinking in advance about what they're going to say," LeCun said, adding that "we have a lot of work to do to get machines to the level of human intelligence, we're nowhere near that."

LeCun argued that to have a smarter AI, these technologies should be informed by sensory input (observations and interactions) instead of language inputs. He pointed to orangutans, which are highly intelligent creatures that survive without using language.

Part of LeCun's argument for why sensory inputs would lead to better AI systems is that the brain processes these inputs much faster. While reading text or digesting language, the human brain processes information at about 12 bytes per second, compared to sensory inputs from observations and interactions, which the brain processes at about 20 megabytes per second.
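Using the same quoted figures, the gap between the two channels is straightforward to quantify:

```python
language_rate = 12          # bytes per second for reading or digesting language (figure cited in the talk)
sensory_rate = 20_000_000   # bytes per second for sensory input from observation and interaction (same source)
print(f"sensory input is roughly {sensory_rate / language_rate:,.0f}x faster")  # about 1,666,667x
```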

Truly intelligent systems would need to understand the physical world and be able to reason, plan, remember, and retrieve. "The architecture of future systems that will be capable of doing this will be very different from current large language models," he said.

As part of his work with Meta, LeCun uses and develops AI tools to detect content that violates the terms of service on social media platforms like Facebook and Instagram, though he is not directly involved with content moderation itself. Roughly 88 percent of removed content is initially flagged by AI, which helps his team take down roughly 10 million items every three months. Despite these efforts, misinformation, disinformation, deep fakes, and other manipulated content continue to be problematic, though the means for detecting this content automatically have vastly improved.

LeCun referenced statistics stating that in late 2017, roughly 20 to 25 percent of hate speech content was flagged by AI tools. This number climbed to 96 percent just five years later. LeCun attributed the difference to two things: first, the emergence of self-supervised, language-based AI systems (which predated ChatGPT); and second, the transformer architecture present in LLMs and other systems. He added that these systems can detect not only hate speech but also violent speech, terrorist propaganda, bullying, fake news, and deep fakes.

"The best countermeasure against these [concerns] is AI. AI is not really the problem here, it's actually the solution," said LeCun.

He said this will require a combination of better technological systems ("The AI of the good guys have to stay ahead of the AI of the bad guys") and non-technological, societal input to easily detect content produced or adapted by AI. He added that an ideal standard would involve a watermark-like tool that verifies legitimate content, as opposed to a technology tasked with flagging inauthentic material.

LeCun pointed to a study by researchers at New York University which found that audiences over the age of 65 are most likely to be tricked by false or manipulated content. Younger audiences, particularly those who grew up with the internet, are less likely to be fooled, according to the research.

One element that separates Meta from its contemporaries is its ability to control the AI algorithms that oversee much of its platforms' content. Part of this is attributed to LeCun's insistence on open-sourcing the company's AI code, a sentiment shared by the company and part of the reason he ended up at Meta.

"I told [Meta executives] that if we create a research lab we'll have to publish everything we do, and open source our code, because we don't have a monopoly on good ideas," said LeCun. "The best way I know, which I learned from working at Bell Labs and in academia, of making progress as quickly as possible is to get as many people as possible contributing to a particular problem."

LeCun added that part of the reason AI has made the advances it has in recent years is that many in the industry have embraced the importance of open publication, open sourcing, and collaboration.

"It's an ecosystem and we build on each other's ideas," LeCun said.

Another advantage is that open sourcing lessens the likelihood of a single company developing a monopoly over a particular technology. LeCun said a single company simply does not have the ability to fine-tune an AI system that will adequately serve the entire population of the world.

Many of the early systems have been developed using English, where data is abundant, but different inputs will need to be considered in, for example, a country such as India, where 22 different official languages are spoken. These inputs can be utilized in a way that a contributor doesn't need to be literate: simply having the ability to speak a language would be enough to create a baseline for AI systems that serve diverse audiences. He said that freedom and diversity in AI are important in the same way that freedom and diversity are vital to having an independent press.

"The risk of slowing AI is much greater than the risk of disseminating it," LeCun said.

Following a brief question and answer session, LeCun was presented with an Honorary Life Membership by the Academy's President and CEO, Nick Dirks.

"This means that you'll be coming back often to speak with us and we can all get our questions answered," Dirks said with a smile to wrap up the event. "Thank you so much."

Read the original here:
Yann LeCun Emphasizes the Promise of AI - NYAS - The New York Academy of Sciences
