Category Archives: Machine Learning
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services – AWS Blog
AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019.
AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA's latest technologies.
At AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, joined AWS CEO Adam Selipsky on stage to discuss how NVIDIA and AWS are working together to enable millions of developers to access powerful technologies needed to rapidly innovate with generative AI. NVIDIA is known for its cutting-edge accelerators and full-stack solutions that contribute to advancements in AI. The company is combining this expertise with the highly scalable, reliable, and secure AWS Cloud infrastructure to help customers run advanced graphics, machine learning, and generative AI workloads at an accelerated pace.
The collaboration between AWS and NVIDIA further expanded at GTC 2024, with the CEOs from both companies sharing their perspectives on the collaboration and state of AI in a press release:
"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers," says Adam Selipsky, CEO of AWS. "NVIDIA's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique AWS Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud."
"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," says Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."
On the first day of NVIDIA GTC, AWS and NVIDIA made a joint announcement focused on their strategic collaboration to advance generative AI. Huang included the AWS and NVIDIA collaboration on a slide during his keynote, highlighting the following announcements. The GTC keynote had over 21 million views within the first 72 hours.
By March 22, AWS's announcement with NVIDIA had generated 104 articles mentioning AWS and Amazon. The vast majority of coverage mentioned AWS's plans to offer Blackwell-based instances. Adam Selipsky appeared on CNBC's Mad Money to discuss the long-standing collaboration between AWS and NVIDIA, among the many other ways AWS is innovating in generative AI, stating that AWS has been the first to bring many of NVIDIA's GPUs to the cloud to drive efficiency and scalability for customers.
Project Ceiba has also been a focus of media coverage. Forbes referred to Project Ceiba as "the most exciting project" by AWS and NVIDIA, stating that it should "accelerate the pace of innovation in AI, making it possible to tackle more complex problems, develop more sophisticated models, and achieve previously unattainable breakthroughs." The Next Platform ran an in-depth piece on Ceiba, stating that "the size and the aggregate compute of the Ceiba cluster are both being radically expanded, which will give AWS a very large supercomputer in one of its data centers, and NVIDIA will use it to do AI research, among other things."
"Live from GTC" was an on-site studio at GTC where invited speakers held fireside chats with tech influencers such as VentureBeat. Chetan Kapoor, Director of Product Management for Amazon EC2 at AWS, was interviewed by VentureBeat at the Live from GTC studio, where he discussed AWS's presence and highlighted key announcements at GTC.
The AWS booth showcased generative AI services, like LLMs from Anthropic and Cohere on Amazon Bedrock, PartyRock, Amazon Q, Amazon SageMaker JumpStart, and more. Highlights included the following:
During GTC, AWS invited 23 partner and customer solution demos to join its booth with either a dedicated demo kiosk or a 30-minute in-booth session. Such partners and customers included Ansys, Anthropic, Articul8, Bria.ai, Cohere, Deci, Deepbrain.AI, Denali Advanced Integration, Ganit, Hugging Face, Lilt, Linker Vision, Mavenir, MCE, Media.Monks, Modular, NVIDIA, Perplexity, Quantiphi, Run.ai, Salesforce, Second Spectrum, and Slalom.
Among them, high-potential early-stage generative AI startups from across the globe were showcased at a dedicated kiosk at the AWS booth. The AWS Startups team works closely with these companies, investing in and supporting their growth and offering resources through programs like AWS Activate.
NVIDIA was one of the 45 launch partners for the new AWS Generative AI Competency program. Members of the Generative AI Center of Excellence for AWS Partners team were on site at the AWS booth, presenting the program to both existing and potential AWS partners. The program offers valuable resources and best practices to help AWS partners build, market, and sell generative AI solutions jointly with AWS.
Watch a video recap of the AWS presence at NVIDIA GTC 2024. For additional resources about the AWS and NVIDIA collaboration, refer to the AWS at NVIDIA GTC 2024 resource hub.
Julie Tang is the Senior Global Partner Marketing Manager for Generative AI at Amazon Web Services (AWS), where she collaborates closely with NVIDIA to plan and execute partner marketing initiatives focused on generative AI. Throughout her tenure at AWS, she has held various partner marketing roles, including Global IoT Solutions, AWS Partner Solution Factory, and Sr. Campaign Manager in Americas Field Marketing. Prior to AWS, Julie served as Marketing Director at Segway. She holds a Master's degree in Communications Management with a focus on marketing and entertainment management from the University of Southern California, and dual Bachelor's degrees in Law and Broadcast Journalism from Fudan University.
Read the original post:
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services - AWS Blog
High-resolution meteorology with climate change impacts from global climate model data using generative machine … – Nature.com
Read the original post:
High-resolution meteorology with climate change impacts from global climate model data using generative machine ... - Nature.com
Google Cloud Next 2024: Pushing the Next Frontier of AI – Technology Magazine
The company's updated AI offering is now available on Vertex AI, Google's platform for customising and managing a wide range of leading Gen AI models. The company says that more than one million developers are currently using Google's Gen AI via its AI Studio and Vertex AI tools.
Likewise, its AI Hypercomputer is now being used by leading AI companies such as Anthropic, AI21 Labs, Contextual AI, Essential AI and Mistral AI. The Hypercomputer aims to employ a system of performance-optimised hardware, open software and machine learning frameworks to enable companies to better advance their digital transformation strategies.
Multiple companies are already harnessing the power of Google Cloud AI, including forward-thinking organisations like Mercedes-Benz, Uber and Palo Alto Networks, which use it to bolster their existing services and improve customer experience.
Mercedes-Benz, for example, is harnessing Google AI to improve customer service in its call centres and to further optimise its website experience.
As AI continues to drive transformative progress in the business world, Google Cloud is aiming to help organisations around the world discover what's next.
Google Cloud is also introducing new features that offer AI assistance so its customers can work and code more efficiently, and that allow them to better identify and resolve cybersecurity threats by taking direct action against attacks.
Google Cloud's product, Gemini in Threat Intelligence, utilises natural language to deliver insights about how threat actors behave. With Gemini's larger context window, users can analyse much larger samples of potentially malicious code and gain more accurate results.
These AI-driven tools will help businesses take more detailed action, preventing more catastrophic data breaches.
Currently, there is incredible customer innovation across a broad range of industries, including retail, transportation and more. Harnessing Gen AI to fast-forward innovation requires a secure business AI platform that offers end-to-end capabilities and is easy to integrate with a business's existing systems.
The rest is here:
Google Cloud Next 2024: Pushing the Next Frontier of AI - Technology Magazine
Why MLBOMs Are Useful for Securing the AI/ML Supply Chain – Dark Reading
COMMENTARY
The days of large, monolithic apps are withering. Today's applications rely on microservices and code reuse, which makes development easier but creates complexity when it comes to tracking and managing the components they use.
This is why the software bill of materials (SBOM) has emerged as an indispensable tool for identifying what's in a software app, including the components, versions, and dependencies that reside within systems. SBOMs also deliver deep insights into dependencies, vulnerabilities, and risks that factor into cybersecurity.
An SBOM allows CISOs and other enterprise leaders to focus on what really matters by providing an up-to-date inventory of software components. This makes it easier to establish and enforce strong governance and spot potential problems before they spiral out of control.
Yet in the age of artificial intelligence (AI), the classic SBOM has some limitations. Emerging machine learning (ML) frameworks introduce remarkable opportunities, but they also push the envelope on risk and introduce a new asset to organizations: the machine learning model. Without strong oversight and controls over these models, an array of practical, technical, and legal problems can arise.
That's where machine learning bills of materials (MLBOMs) enter the picture. The framework tracks names, locations, versions, and licensing for assets that comprise an ML model. It also includes overarching information about the nature of the model, training configurations embedded in metadata, who owns it, various feature sets, hardware requirements, and more.
CISOs are realizing that AI and ML require a different security model, and that the underlying training data and models are frequently not tracked or governed. An MLBOM can help an organization avoid security risks and failures. It addresses critical factors like model and data provenance, safety ratings, and dynamic changes that extend beyond the scope of an SBOM.
Because ML environments are in a constant state of flux and changes can take place with little or no human interaction, issues related to data consistency (where it originated, how it was cleaned, and how it was labeled) are a constant concern.
For example, if a business analyst or data scientist determines that a data set is poisoned, the MLBOM simplifies the task of finding all the various touch points and models that were trained with that data.
Transparency, auditability, control, and forensic insight are all hallmarks of an MLBOM. With a comprehensive view of the "ingredients" that go into an ML model, an organization is equipped to manage its ML models safely.
Here are some ways to build a best practice framework around an MLBOM:
Recognize the need for an MLBOM: It's no secret that ML fuels business innovation and even disruption. Yet it also introduces significant risks that can extend to reputation, regulatory compliance, and legal issues. Having visibility into ML models is critically important.
Conduct essential due diligence: An MLBOM should integrate with the CI/CD pipeline and deliver a high level of clarity. Support for standard formats like JSON or OWASP's CycloneDX can unify SBOM and MLBOM processes (see the sketch after this list).
Analyze policies, processes, and governance: It's essential to sync an MLBOM with an organization's workflows and business processes. This increases the odds that ML pipelines will work as intended, while minimizing risks related to cybersecurity, data privacy, compliance, and other risk-associated areas.
Use an MLBOM with machine learning gates: Rigorous controls and gateways lead to essential AI and ML guardrails. In this way, the business and the CSO can build on successes and harness ML to unlock greater cost savings, performance gains, and business value.
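As a concrete illustration of the CycloneDX/JSON support mentioned above, here is a minimal sketch of what an MLBOM entry might look like. It borrows CycloneDX 1.5's machine-learning-model component type, but the specific property names, dataset paths, and values are hypothetical examples, not a normative schema.

```python
import json

# A minimal, hypothetical MLBOM entry loosely modeled on the CycloneDX
# "machine-learning-model" component type. Field names and values below
# are illustrative assumptions, not a normative schema.
mlbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "churn-classifier",
            "version": "2.3.0",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
            "properties": [
                # Provenance: who owns the model and what data trained it
                {"name": "model.owner", "value": "risk-analytics-team"},
                {"name": "training.dataset", "value": "s3://datalake/churn/v7"},
                {"name": "training.dataset.sha256", "value": "<digest>"},
                # Training configuration captured as metadata
                {"name": "training.framework", "value": "scikit-learn 1.4"},
                {"name": "training.hardware", "value": "8x A100"},
            ],
        }
    ],
}

# Emit the MLBOM so a CI/CD pipeline step can publish it alongside the model.
print(json.dumps(mlbom, indent=2))
```

Because the dataset digest travels with the model, tracing every model trained on a poisoned data set reduces to a query over published MLBOMs.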
Machine learning is radically changing the business and IT landscape. By extending proven SBOM methodologies to ML through MLBOMs, it's possible to take a giant step toward boosting machine learning performance and protecting data and assets.
Read more:
Why MLBOMs Are Useful for Securing the AI/ML Supply Chain - Dark Reading
The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps – DataDrivenInvestor
Image by author
Personalized medicine through data analytics and Machine Learning has revolutionized the healthcare industry by tailoring medical treatments to individual patients based on their unique characteristics. In recent years, data analytics and ML apps have become powerful tools to facilitate patient engagement and self-monitoring. This article explores the role of data analytics and Machine Learning in personalized medicine through healthcare apps, highlighting their importance, benefits, challenges, ethical considerations, and future prospects.
Personalized medicine is like having a tailor-made healthcare plan. It considers your unique genetic makeup, lifestyle, and environment to provide more precise and effective treatments. This approach aims to deliver targeted therapies based on individual characteristics.
Healthcare apps have revolutionized the way we access and manage our health information. From tracking our daily steps to monitoring our heart rate, these apps have become essential tools in our quest for better health. What used to be basic fitness trackers have now evolved into comprehensive platforms that can analyze a vast amount of data to offer personalized insights and recommendations.
Data is the fuel that powers personalized medicine. It provides the necessary information to understand patterns, risks, and potential treatments for individuals. By analyzing vast amounts of data, such as genomic information, medical histories, and lifestyle factors, healthcare professionals can identify personalized treatment options and interventions.
Data analytics opens a whole new world of possibilities in personalized medicine. It allows healthcare providers to identify trends and correlations that may go unnoticed. This means faster and more accurate diagnoses, more effective treatment plans, and ultimately better health outcomes for patients. Data analytics also enables continuous learning and improvement by constantly refining treatment strategies based on real-world evidence.
Machine Learning is like having a computer that can learn and make decisions on its own. It's a branch of Artificial Intelligence (AI) that allows systems to analyze and interpret complex data patterns, discover insights, and make predictions or recommendations. In healthcare apps, Machine Learning algorithms can process large datasets and extract valuable information to enhance decision-making and improve patient outcomes.
Machine Learning algorithms can be embedded into healthcare apps, allowing them to continuously learn from user data and adapt their recommendations accordingly. For example, a fitness app can use Machine Learning to analyze a user's exercise habits, heart rate, and sleep patterns to provide personalized exercise routines and sleep recommendations. By leveraging Machine Learning, healthcare apps can become intelligent and proactive health companions.
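As a toy illustration of that idea, the sketch below trains a tiny model on hypothetical per-user logs and produces a personalized sleep recommendation; the feature set, data, and model choice are all invented for the example, not taken from any particular app.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-user logs: [daily_steps, avg_heart_rate, hours_slept].
history = np.array([
    [4_000, 78, 6.0],
    [9_500, 71, 7.5],
    [12_000, 69, 8.0],
    [6_500, 75, 6.5],
])
# Target: hours of sleep after which the user reported feeling rested.
rested_sleep = np.array([8.0, 7.5, 7.0, 7.8])

# Fit a small model to the user's own history, then recommend for today.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(history, rested_sleep)

today = np.array([[8_200, 73, 6.2]])
print(f"Recommended sleep tonight: {model.predict(today)[0]:.1f} h")
```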
The combination of data analytics and Machine Learning offers significant advantages in personalized medicine.
By combining the power of data analytics tools with Machine Learning algorithms, businesses can create visually engaging and interactive dashboards that present data in a way that is easy to digest and interpret.
With data analytics and ML, businesses can uncover valuable insights that drive informed decision-making and improve overall business performance.
Businesses can automate data processing and analysis by integrating data analytics and Machine Learning, saving valuable time and improving accuracy.
ML algorithms analyze data objectively and make decisions based on patterns and statistical models, minimizing the impact of human subjectivity.
While data analytics and Machine Learning offer immense potential in healthcare, there are ethical considerations that need to be addressed. One concern is the potential bias in algorithms. If the data used to train Machine Learning models is biased, it can lead to biased treatment recommendations or diagnoses, disproportionately impacting certain groups of patients.
Another ethical concern is the transparency of algorithms. Patients and healthcare providers need to understand how algorithms arrive at their recommendations or diagnoses. Lack of transparency can undermine trust in the healthcare system and raise concerns about the accountability of algorithms.
The use of data analytics and Machine Learning in healthcare apps necessitates the collection and analysis of personal health information. Privacy concerns arise as this sensitive data needs to be handled with the utmost care. Healthcare apps must employ robust security measures to safeguard patient data and comply with relevant privacy regulations.
Transparency in data usage and obtaining informed consent from patients is crucial. Patients should have control over how their data is used and be fully aware of the potential risks and benefits.
As data analytics continues to evolve, several emerging trends hold promise for personalized medicine. One such trend is the integration of data from wearables and Internet of Things (IoT) devices. This real-time data collection allows for more accurate monitoring of patient health and enables timely interventions.
Another trend is the use of Natural Language Processing (NLP) in analyzing unstructured medical data, such as doctors' notes or research papers. NLP algorithms can extract valuable insights from these vast amounts of text, aiding in personalized medicine research and decision-making.
Machine Learning advancements are opening doors to exciting possibilities in healthcare apps. One breakthrough area is the use of deep learning algorithms. These sophisticated neural networks can process complex medical images, such as MRI scans or histopathology slides, with remarkable accuracy, assisting doctors in diagnosis and treatment planning.
Additionally, federated learning is gaining attention in healthcare. This approach allows Machine Learning models to be trained on decentralized data sources without sharing the raw data, preserving patient privacy while still benefiting from the collective knowledge present in diverse datasets.
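The sketch below illustrates the core federated-averaging loop under simplifying assumptions (a toy logistic-regression model, synthetic "hospital" datasets, unweighted averaging): each site trains locally, and only model weights ever leave the site.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)   # gradient of the log-loss
    return w

rng = np.random.default_rng(0)
# Three "hospitals", each holding private data that never leaves the site.
clients = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    # Each client refines the current global model on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and only the weight vectors are pooled (here, an unweighted mean).
    global_w = np.mean(local_ws, axis=0)

print("Global weights after 10 rounds:", global_w)
```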
Data analytics and Machine Learning have the potential to revolutionize personalized medicine through healthcare apps. From improving diagnosis accuracy to personalized treatment recommendations, these technologies offer valuable insights and benefits in healthcare.
Go here to read the rest:
The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps - DataDrivenInvestor
Yann LeCun Emphasizes the Promise of AI – NYAS – The New York Academy of Sciences
Blog Article
Published April 8, 2024
By Nick Fetty
Yann LeCun, a Turing Award-winning computer scientist, had a wide-ranging discussion about artificial intelligence (AI) with Nicholas Dirks, President and CEO of The New York Academy of Sciences, as part of the first installment of the Tata Series on AI & Society on March 14, 2024.
LeCun is the Vice President and Chief AI Scientist at Meta, as well as the Silver Professor for the Courant Institute of Mathematical Sciences at New York University. A leading researcher in machine learning, computer vision, mobile robotics, and computational neuroscience, LeCun has long been associated with the Academy, serving as a featured speaker during past machine learning conferences and also as a juror for the Blavatnik Awards for Young Scientists.
As a postdoc at the University of Toronto, LeCun worked alongside Geoffrey Hinton, who's been dubbed the "godfather of AI," conducting early research in neural networks. Some of this early work would later be applied to the field of generative AI. At the time, many of the field's foremost experts cautioned against pursuing such endeavors. He shared with the audience what drove him to pursue this work despite the reservations some had.
"Everything that lives can adapt, but everything that has a brain can learn," said LeCun. "The idea was that learning was going to be critical to make machines more intelligent, which I think was completely obvious, but I noticed that nobody was really working on this at the time."
LeCun joked that because of the field's relative infancy, he struggled at first to find a doctoral advisor, but he eventually pursued a PhD in computer science at the Université Pierre et Marie Curie, where he studied under Maurice Milgram. He recalled some of the limitations of those early years in the late 1980s and 1990s, such as the lack of large-scale training data and the limited processing power of computers. By the early 2000s, he and his colleagues began developing a research community to revive and advance work in neural networks and machine learning.
Work in the field really started taking off in the late 2000s, LeCun said. Advances in speech and image recognition software were just a couple of the examples LeCun cited that used neural networks in deep learning applications. LeCun said he had no doubt about the potential of neural networks once the data sets and computing power were sufficient.
Large language models (LLMs), such as ChatGPT or autocomplete, use machine learning to predict and generate plausible language. While some have expressed concerns about machines surpassing human intelligence, LeCun admits to holding an unpopular opinion: he doesn't think LLMs are as intelligent as they may seem.
LLMs are developed using a finite number of words, or more precisely tokens, which are roughly three-quarters of a word on average, according to LeCun. He said that many LLMs are developed using as many as 10 trillion tokens.
While much consideration goes into deciding what tunable parameters will be used to develop these systems, LeCun points out that they're "not trained for any particular task, they're basically trained to fill in the blanks." He said that more than just language needs to be considered to develop an intelligent system.
"That's pretty much why those LLMs are subject to hallucinations, which really you should call confabulations. They can't really reason. They can't really plan. They basically just produce one word after the other, without really thinking in advance about what they're going to say," LeCun said, adding that "we have a lot of work to do to get machines to the level of human intelligence, we're nowhere near that."
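A toy sketch makes the "one word after the other" point concrete. The stand-in below samples one token at a time from an invented next-token distribution; a real LLM would condition that distribution on the context, but the loop structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    """Stand-in for an LLM forward pass: P(next token | context).
    A real model would condition on `context`; this toy one is random."""
    logits = rng.normal(size=len(vocab))
    exp = np.exp(logits - logits.max())  # softmax over the vocabulary
    return exp / exp.sum()

tokens = ["the"]
for _ in range(5):
    probs = next_token_probs(tokens)
    # One token is sampled at a time; nothing is planned beyond this step.
    tokens.append(str(rng.choice(vocab, p=probs)))

print(" ".join(tokens))
```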
LeCun argued that to have a smarter AI, these technologies should be informed by sensory input (observations and interactions) instead of language inputs. He pointed to orangutans, which are highly intelligent creatures that survive without using language.
Part of LeCun's argument for why sensory inputs would lead to better AI systems is that the brain processes these inputs much faster. While reading text or digesting language, the human brain processes information at about 12 bytes per second, compared with sensory inputs from observations and interactions, which the brain processes at about 20 megabytes per second, a difference of roughly six orders of magnitude.
To build truly intelligent systems, they'd need to understand the physical world, and be able to reason, plan, remember, and retrieve. "The architecture of future systems that will be capable of doing this will be very different from current large language models," he said.
As part of his work with Meta, LeCun uses and develops AI tools to detect content that violates the terms of service on social media platforms like Facebook and Instagram, though he is not directly involved in the moderation of content itself. Roughly 88 percent of removed content is initially flagged by AI, which helps his team take down roughly 10 million items every three months. Despite these efforts, misinformation, disinformation, deep fakes, and other manipulated content continue to be problematic, though the means of detecting this content automatically have vastly improved.
LeCun referenced statistics showing that in late 2017, roughly 20 to 25 percent of hate speech content was flagged by AI tools; this number climbed to 96 percent just five years later. LeCun attributed the difference to two things: first, the emergence of self-supervised, language-based AI systems (which predated the existence of ChatGPT); and second, the transformer architecture present in LLMs and other systems. He added that these systems can detect not only hate speech but also violent speech, terrorist propaganda, bullying, fake news, and deep fakes.
"The best countermeasure against these [concerns] is AI. AI is not really the problem here, it's actually the solution," said LeCun.
He said this will require a combination of better technological systems ("The AI of the good guys have to stay ahead of the AI of the bad guys") and non-technological, societal input to easily detect content produced or adapted by AI. He added that an ideal standard would involve a watermark-like tool that verifies legitimate content, as opposed to a technology tasked with flagging inauthentic material.
LeCun pointed to a study by researchers at New York University which found that audiences over the age of 65 are most likely to be tricked by false or manipulated content. Younger audiences, particularly those who grew up with the internet, are less likely to be fooled, according to the research.
One element that separates Meta from its contemporaries is its control over the AI algorithms that oversee much of its platforms' content. Part of this is attributed to LeCun's insistence on open sourcing the company's AI code, a sentiment shared by Meta and part of the reason he ended up there.
"I told [Meta executives] that if we create a research lab we'll have to publish everything we do, and open source our code, because we don't have a monopoly on good ideas," said LeCun. "The best way I know, which I learned from working at Bell Labs and in academia, of making progress as quickly as possible is to get as many people as possible contributing to a particular problem."
LeCun added that part of the reason AI has made the advances it has in recent years is because many in the industry have embraced the importance of open publication, open sourcing and collaboration.
"It's an ecosystem and we build on each other's ideas," LeCun said.
Another advantage is that open sourcing lessens the likelihood of a single company developing a monopoly over a particular technology. LeCun said a single company simply does not have the ability to fine-tune an AI system that will adequately serve the entire population of the world.
Many of the early systems have been developed using English, where data is abundant, but different inputs will need to be considered in a country such as India, where 22 official languages are spoken. These inputs can be utilized in a way that a contributor doesn't need to be literate; simply having the ability to speak a language would be enough to create a baseline for AI systems that serve diverse audiences. He said that freedom and diversity in AI are important in the same way that freedom and diversity are vital to an independent press.
"The risk of slowing AI is much greater than the risk of disseminating it," LeCun said.
Following a brief question and answer session, LeCun was presented with an Honorary Life Membership by the Academy's President and CEO, Nick Dirks.
"This means that you'll be coming back often to speak with us and we can all get our questions answered," Dirks said with a smile to wrap up the event. "Thank you so much."
Read the original here:
Yann LeCun Emphasizes the Promise of AI - NYAS - The New York Academy of Sciences
Spider conversations decoded with the help of machine learning and contact microphones – Popular Science
Arachnids are born dancers. After millions of years of evolution, many species rely on fancy footwork to communicate everything from courtship rituals, to territorial disputes, to hunting strategies. Researchers usually observe these movements in lab settings using what are known as laser vibrometers. After aiming the tool's light beam at a target, the vibrometer measures minuscule vibration frequencies and amplitudes via the Doppler shift effect. Unfortunately, such systems' cost and sensitivity often limit their field deployment.
To find a solution for this long-standing problem, a University of Nebraska-Lincoln PhD student recently combined an array of tiny, cheap contact microphones with a sound-processing machine learning program. Then, once packed up, he headed into the forests of north Mississippi to test out his new system.
Noori Choi's results, recently published in Communications Biology, highlight a never-before-seen approach to collecting spiders' extremely hard-to-detect movements across woodland substrates. Choi spent two sweltering summer months placing 25 microphones and pitfall traps across 1,000-square-foot sections of forest floor, then waited for the local wildlife to make its vibratory moves. In the end, Choi left the Magnolia State with 39,000 hours of data, including over 17,000 series of vibrations.
[Related: Meet the first electric blue tarantula known to science.]
Not all those murmurings were the wolf spiders Choi wanted, of course. Forests are loud places filled with active insects, chatty birds, rustling tree branches, and the invasive sounds of human life like overhead plane engines. These sound waves are also absorbed into the ground as vibrations and need to be sifted out from the arachnid signals scientists target.
"The vibroscape is a busier signaling space than we expected, because it includes both airborne and substrate-borne vibrations," Choi said in a recent university profile.
In the past, this analysis was a frustratingly tedious, manual endeavor that could severely limit research and dataset scopes. But instead of poring over roughly 1,625 days' worth of recordings, Choi designed a machine learning program capable of filtering out unwanted sounds while isolating the vibrations of three separate wolf spider species: Schizocosa stridulans, S. uetzi, and S. duplex.
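The article does not spell out Choi's model, but a minimal sketch of this kind of pipeline, turning vibration recordings into spectral features and classifying their source, might look like the following. The sample rate, band features, synthetic signals, and classifier choice are all assumptions for illustration.

```python
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier

SR = 8_000  # sample rate in Hz (an assumption for this sketch)

def band_features(waveform):
    """Summarize a recording as average spectral power in 8 coarse bands."""
    _, _, Sxx = signal.spectrogram(waveform, fs=SR, nperseg=256)
    bands = np.array_split(Sxx.mean(axis=1), 8)
    return np.array([b.mean() for b in bands])

rng = np.random.default_rng(0)
labels = ["S. stridulans", "S. uetzi", "S. duplex", "background noise"]
X, y = [], []
for i, label in enumerate(labels):
    for _ in range(20):
        # Synthetic stand-in: each class gets a distinct dominant frequency.
        tone = np.sin(2 * np.pi * (100 + 150 * i) * np.arange(SR) / SR)
        X.append(band_features(tone + 0.5 * rng.normal(size=SR)))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
query = band_features(np.sin(2 * np.pi * 100 * np.arange(SR) / SR))
print(clf.predict([query])[0])  # expected: "S. stridulans" (the 100 Hz class)
```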
Further analysis yielded fascinating new insights into arachnid behaviors, particularly an overlap of acoustic frequency, time, and signaling space between the S. stridulans and S. uetzi sibling species. Choi determined that both wolf spider species usually restricted their signaling to times when they were atop leaf litter rather than pine debris. According to Choi, this implies that real estate is at a premium for the spiders.
"[They] may have limited options to choose from, because if they choose to signal in different places, on different substrates, they may just disrupt the whole communication and not achieve their goal, like attracting mates," Choi, now a postdoctoral researcher at Germany's Max Planck Institute of Animal Behavior, said on Monday.
What's more, S. stridulans and S. uetzi appear to adapt their communication methods depending on how crowded they are at any given time, and who is crowding them. S. stridulans, for example, tended to lengthen their vibration-intense courtship dances when they detected nearby same-species males. When they sensed nearby S. uetzi, however, they often varied their movements slightly to differentiate themselves from the other species, thus reducing potential courtship confusion.
In addition to opening up entirely new methods of observing arachnid behavior, Choi's combination of contact microphones and machine learning analysis could also help others one day monitor an ecosystem's overall health by keeping an ear on spider populations.
"Even though everyone agrees that arthropods are very important for ecosystem functioning, if they collapse, the whole community can collapse," Choi said. "Nobody knows how to monitor changes in arthropods."
Now, however, Choi's new methodology could offer a non-invasive, accurate, and highly effective way to stay atop spiders' daily movements.
See the article here:
Spider conversations decoded with the help of machine learning and contact microphones - Popular Science
Starting this Summer, Students Can Minor in Applications of Artificial Intelligence and Machine Learning – Georgia Tech College of Engineering
The minor was initially suggested by leaders and external advisory board members in Georgia Tech's biomedical engineering department to provide more AI and ML curriculum for their students.
"AI is so rampant in so many disciplines, and biomedical engineering students need to have that background before and after graduation," said Jaydev Desai, associate chair for undergraduate studies in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "Once we talked to the rest of the College about the minor, it quickly became clear that other schools had the same goals. This minor will truly be a transformative experience for our undergraduate students and make them competitive after graduation, be it in industry or pursuit of further education."
Desai quickly found a partner in IAC when he spoke with Shatakshee Dhongde, the College's associate dean for academic affairs. Together they built a program that teaches both the technical and policy aspects of AI.
"This program is designed to address how AI and machine learning models are applied to solve some of the world's most pressing and complex problems," said Dhongde, who is also associate professor in the School of Economics. "Using AI/ML models with an understanding of the ethical issues around the technology is the real strength of this minor."
Students on both tracks are required to take three core courses, including a philosophy course in AI ethics and policy, plus two electives.
Engineering courses are offered by six of the College's eight schools: biomedical, chemical, electrical and computer, industrial and systems, materials, and mechanical engineering. Subjects range from robotics to biomedical AI to signal processing and more.
IAC courses cover topics that include machine learning for economics, language and computers, race and gender and digital media, and public policy.
Organizers developed several new courses to create the minor. Desai has already heard from other Georgia Tech colleges and schools about adding more classes, and he's excited to see many more undergraduate students at Georgia Tech benefit as the initiative expands.
"We wanted to create something that will improve the educational experience of our undergraduate students and make them more competitive in the marketplace," Desai said. "The current collection of courses also will make them stronger if they have a goal of starting their own businesses or creating devices. The minor really has a nice structure that welcomes other disciplines around the campus, and we look forward to them joining us in the future."
See the rest here:
Starting this Summer, Students Can Minor in Applications of Artificial Intelligence and Machine Learning - Georgia Tech College of Engineering
AI Jobs: Your Gateway to Careers in AI and Machine Learning – KillerStartups
AI Jobs emerges as a pioneering force in the realm of employment within the rapidly evolving fields of Artificial Intelligence (AI), Machine Learning (ML), and Data Science. Founded by Adam Krzywda, a seasoned entrepreneur, in 2023, AI Jobs has quickly established itself as a critical resource for professionals seeking to navigate the burgeoning job market created by AI advancements.
Company Overview
Progress and Current Status
AI Jobs is already live and making significant strides in the industry by rapidly expanding its job listings, enhancing content, and integrating new features. The platform is dedicated to connecting job seekers with a wealth of opportunities in AI-related fields, including positions for AI developers, prompt engineers, AI testers, and internships.
Inspiring Story
The journey of AI Jobs is a testament to the tenacity required to thrive in the competitive job board space. Faced with the challenge of establishing a foothold amidst established giants, the AI Jobs team has engaged in relentless marketing and outreach efforts. This commitment to growth, despite the odds, underscores the startup's dedication to serving as a comprehensive resource for individuals passionate about AI and ML.
Future Outlook
In the next four years, AI Jobs aspires to become the quintessential online hub for AI, Data Science, and ML career opportunities. Alongside its extensive job listings, the platform aims to broaden its content offerings, including blogs, videos, and podcasts, thereby establishing itself as a major resource for AI-related content. Through continuous content production and strategic partnerships, AI Jobs is poised to achieve its goal of being a leading authority in the AI job market.
AI Jobs stands at the forefront of addressing the workforce needs generated by AI advancements, serving as a vital bridge between innovative companies and talented individuals eager to shape the future of technology.
Excerpt from:
AI Jobs: Your Gateway to Careers in AI and Machine Learning - KillerStartups
AI in Finance: Machine Learning Models for Stock Price Prediction and Auto-Trading – Clark University
Please join the Data Science Seminar this Wednesday for a talk by Dr. Timothy Li. Tim currently serves as principal data scientist and vice president on the Central Modeling Team of Citizens Bank. He will present two of his earlier AI projects in finance, which used innovative machine learning models for stock price prediction and auto-trading and outperformed conventional approaches.
The first project used a deep-learning long short-term memory (LSTM) model to forecast the alpha returns of approximately 1,000 stocks on both the Chinese A-stock market and the U.S. stock market. Based on the predicted returns, the research team devised a new portfolio strategy that significantly outperformed a traditional model in both markets.
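As a rough sketch of this kind of setup (the talk's actual architecture and data are not described here), an LSTM mapping a window of recent daily returns to a next-day forecast might look like the following; the window length, layer sizes, and synthetic data are assumptions.

```python
import numpy as np
from tensorflow import keras

WINDOW = 30  # days of returns fed to the model (an assumed choice)

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2_000)  # synthetic daily return series

# Build (30-day window -> next-day return) training pairs.
X = np.stack([returns[i : i + WINDOW] for i in range(len(returns) - WINDOW)])
y = returns[WINDOW:]
X = X[..., None]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),   # the "long short-term memory" layer
    keras.layers.Dense(1),   # predicted next-day (alpha) return
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

print("Next-day forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```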
In the second project, Li and his fellow researchers developed an optimal portfolio execution system (OPEX), using a tree-based ensemble machine-learning model for automated stock trading to reduce trading costs. The study demonstrated that the OPEX system could effectively reduce trading costs, with estimated savings of approximately $35 million per year compared to a legacy linear model.
Dr. Timothy (Minghai) Li currently serves as a principal data scientist and vice president on the central modeling team of Citizens Bank. Prior to this role, he held positions as a senior data scientist at Fidelity Investments, Netbrain Tech Inc., and FIS. He earned his Ph.D. in physics from Boston University and conducted post-doctoral research in Professor Sharon Huo's group at Clark. He has authored or co-authored more than 20 publications covering topics in physics, material science, chemistry, biology, and computer modeling and simulation. Presently, his focus lies in the application of advanced machine learning algorithms and generative AI to finance.
Read more here:
AI in Finance: Machine Learning Models for Stock Price Prediction and Auto-Trading - Clark University