
Regenerative Medicine and Advanced Therapies: Information on … – Government Accountability Office

What GAO Found

The goal of regenerative medicine and advanced therapies is to repair or replace damaged human cells, tissues, or organs to supplement or restore function. The field is developing therapies that go beyond existing treatments to address underlying causes of disease or provide cures for previously untreatable diseases and conditions. The regenerative medicine and advanced therapies workforce is generally reflective of the larger life sciences workforce, with individuals occupying a wide range of jobs across research and development, biomanufacturing, clinical care, and regulatory affairs, as shown below.

Examples of Regenerative Medicine and Advanced Therapy Occupations

a. Translational scientists take discoveries made in the laboratory, clinic, or field, and transform them into new treatments and approaches that help improve the health of the population.

b. Biomanufacturing activities include the production of therapies using living cells.

Individuals working with regenerative medicine and advanced therapies would need postsecondary degrees appropriate to their area of work. For example, researchers would generally need science- and engineering-based degrees, and clinical occupations would generally need clinical and professional degrees. Further, stakeholders noted that many occupations would likely need additional specialized training, such as training in laboratory techniques, or medical fellowships in topics and practices specific to the field.

Many of the eight stakeholders GAO interviewed discussed shortages in the number of current and projected laboratory and biomanufacturing technicians to support the development of regenerative medicine and advanced therapies, as well as gaps in other positions, such as data scientists. Some stakeholders said that education for these technicians at the community and technical college level is insufficient to meet current and future workforce needs. In addition, many stakeholders noted that there is no nationally recognized education curriculum for the field. One of these stakeholders agreed that a core curriculum that reflects the diverse, interdisciplinary nature of regenerative medicine and advanced therapies would help support a competent, robust workforce.

The field of regenerative medicine and advanced therapies, including cell, gene, and tissue-based therapies, is evolving and interdisciplinary. Practitioners believe these therapies have the potential to revolutionize patient care and improve lives. The promise of such therapies to ameliorate, or cure, previously untreatable diseases and conditions depends, in part, on the existence of a robust, well-trained workforce.

The Timely ReAuthorization of Necessary Stem-cell Programs Lends Access to Needed Therapies (TRANSPLANT) Act of 2021 included a provision for GAO to study the regenerative medicine and advanced therapies workforce in the commercial and academic sectors. This report provides information on (1) the makeup of this workforce, (2) education and training for this workforce, and (3) current and future workforce and education and training needs.

GAO interviewed officials from the Department of Health and Human Services (HHS) and eight stakeholder organizations selected for representation across the occupational areas GAO identified for this work, as well as other criteria. GAO also reviewed related reports and job postings. Existing workforce and education data do not contain information specific to the regenerative medicine workforce. To quantify the number of stakeholders who made certain statements, "some" means two to four stakeholders and "many" means five to seven stakeholders.

For more information, contact Leslie V. Gordon at (202) 512-7114 or GordonLV@gao.gov.

Read this article:

Regenerative Medicine and Advanced Therapies: Information on ... - Government Accountability Office

Read More..

Vanti Introduces AI-Powered Predictive Quality Platform for Adaptive … – PR Web


ORLANDO, Fla. (PRWEB) March 22, 2023

Vanti, a leading provider of AI-powered Industrial Optimization solutions, is proud to introduce its Predictive Quality platform for adaptive industrial applications at the Gartner Data & Analytics Summit 2023. The platform leverages adaptive AI, identified by Gartner as a strategic trend for 2023, to help manufacturers and industrial companies identify and avoid quality issues before they appear, increasing their efficiency, productivity, and business outcomes.

Vanti's CEO, Smadar David, will discuss the company's data-driven approach to quality assurance, known as the Predictive Quality Platform, during her presentation at the Gartner Summit. Predictive quality examines industrial and manufacturing data in real time for outliers, trends, and patterns. By analyzing production data, including time-series data, images, videos, and more, manufacturers can identify potential quality issues before they occur, helping to improve first-pass yield, reduce waste, and ensure high-quality output.
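The article does not disclose how Vanti's platform works internally, but the general pattern it describes (watching production data for outliers before they become quality failures) can be illustrated with commodity tooling. The sketch below uses scikit-learn's IsolationForest on made-up sensor readings; the feature names, thresholds, and data are assumptions for illustration only, not details of Vanti's product.

```python
# Minimal sketch of predictive-quality-style outlier detection on production data.
# All data and feature names are hypothetical; this does not represent Vanti's platform.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical in-spec history: temperature (C), pressure (bar), vibration (g) per unit.
normal_units = rng.normal(loc=[180.0, 5.0, 0.02], scale=[2.0, 0.1, 0.005], size=(500, 3))

# Fit an unsupervised outlier detector on historical "good" production runs.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_units)

# Score new units as they come off the line; -1 flags a likely quality issue.
new_units = np.array([
    [180.5, 5.02, 0.021],  # looks nominal
    [195.0, 4.10, 0.060],  # drifted well outside the training envelope
])
for unit, label in zip(new_units, detector.predict(new_units)):
    status = "flag for inspection" if label == -1 else "pass"
    print(f"unit {unit} -> {status}")
```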

As Gartner notes, "AI offers substantial potential for manufacturing companies, both in operations and business process optimization. Industry 4.0, factory of the future, and smart manufacturing strategies point toward a self-adaptive and automatically reconfigurable production capability." (Applying AI in Industries, published 7 December 2022)

"Our platform empowers subject matter experts, like engineers and plant floor managers, to access and utilize the platform's insights without requiring data science expertise," said Smadar David, Vanti CEO and Co-Founder. "By democratizing access to predictive quality insights, we're enabling a new level of collaboration and productivity across manufacturing teams. Our goal is to help our customers achieve better outcomes with their existing resources and expertise, and we believe that our platform is a key step forward in achieving this vision."

Vanti's platform offers a wide range of applications targeted at improving manufacturing metrics across many industries, with electronics manufacturing, CPG and food & beverage, medical device manufacturing, and automotive being just a few. Our AI-powered predictive quality solution can be configured to meet the unique needs of different industrial environments, enabling our customers to address dynamic supply chain challenges and achieve their goals of increased efficiency, automation, and sustainability.

About Vanti

At Vanti, we're committed to helping manufacturers and industries improve business outcomes with our AI-powered predictive quality solution. Our platform is designed to be customizable and easy to deploy, enabling our customers to achieve their goals of increased efficiency, productivity, and profitability. We invite all attendees to visit us at booth #303 to learn more about Vanti's AI-powered Predictive Quality platform and explore how we can help your organization achieve its manufacturing goals. Alternatively, you can also visit us online at vanti.ai to learn more. We look forward to hearing from you and exploring how we can help you stay ahead of the curve in the fast-paced world of Industry 4.0.

About the Gartner Data & Analytics Summit

The Gartner Data & Analytics Summit provides insights for data and analytics (D&A) leaders to enable a D&A-centric culture within their organizations by tying strategy to business outcomes and promoting the adoption of technologies, such as artificial intelligence, while creating a resilient culture that accelerates change and where data literacy, digital trust, governance and data-driven critical thinking are pervasive.

Media Contact

Gil Levonai, CMO media@vanti.ai


More here:

Vanti Introduces AI-Powered Predictive Quality Platform for Adaptive ... - PR Web

Read More..

How Research Helped One Pre-med Discover a Love for Statistics … – Duke University

If you're a doe-eyed first-year at Duke who wants to eventually become a doctor, chances are you are currently, or will soon, take part in a pre-med rite of passage: finding a lab to research in.

Most pre-meds find themselves researching in the fields of biology, chemistry, or neuroscience, with many hoping to make research a part of their future careers as clinicians. Undergraduate student and San Diego native Eden Deng (T'23) also found herself treading a similar path in a neuroimaging lab her freshman year.

At the time, she was a prospective neuroscience major on the pre-med track. But as she soon realized, neuroimaging is done through fMRI. And to analyze fMRI data, you need to be able to conduct data analysis.

This initial research experience at Duke in the Martucci Lab, which looks at chronic pain and the role of the central nervous system, sparked a realization for Deng. "Ninety percent of my time was spent thinking about computational and statistical problems," she explained to me. Analysis was new to her, and as she found herself struggling with it, she thought to herself: why don't I spend more time getting better at that academically?

This desire to get better at research led Deng to pursue a major in Statistics with a secondary in Computer Science, while still on the pre-med track. Many people might instantly think about how hard it must be to fit in so much challenging coursework that has virtually no overlap. And as Deng confirmed, her academic path has not been without challenges.

For one, she's never really liked math, so she was wary of getting into computation. Additionally, considering that most Statistics and Computer Science students want to pursue jobs in the technology industry, it's been hard for her to connect with like-minded people who are equally familiar with computers and the human body.

"I never felt like I excelled in my classes," Deng said. "And that was never my intention." Deng had to quickly get used to facing what she didn't know head-on. But as she kept her head down, put in the work, and trusted that eventually she would figure things out, the merits of her unconventional academic path started to become more apparent.

Research at the intersection of data and health

Last summer, Deng landed a summer research experience at Mount Sinai, where she looked at patient-level cancer data. Utilizing her knowledge in both biology and data analytics, she worked on a computational screener that scientists and biologists could use to measure gene expression in diseased versus normal cells. This will ultimately aid efforts in narrowing down the best genes to target in drug development. Deng will be back at Mount Sinai full-time after graduation, to continue her research before applying to medical school.

But in her own words, Deng's favorite research experience has been her senior thesis through Duke's Department of Biostatistics and Bioinformatics. Last year, she reached out to Dr. Xiaofei Wang, who is part of a team conducting a randomized controlled trial to compare the merits of two different lung tumor treatments.

Generally, when faced with lung disease, the conservative approach is to remove the whole lobe. But that can pose challenges to the quality of life of people who are older, with more comorbidities. Recently, there has been a push to focus on removing smaller sections of lung tissue instead. Deng's thesis looks at patient surgical data over the past 15 years, showing that patient survival rates have improved as these segmentectomies (removals of smaller sections of tissue) have become more frequent in select groups of patients.
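The thesis itself is not reproduced here, but the kind of comparison described (survival over time for patients receiving segmentectomy versus lobectomy) is commonly explored with Kaplan-Meier curves and a log-rank test. The sketch below uses the lifelines library on a tiny, entirely fabricated table; the column names and numbers are placeholders, not Deng's data or methods.

```python
# Illustrative survival comparison between two surgical approaches.
# The records below are fabricated; this is not the thesis analysis.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":    [60, 48, 72, 24, 90, 36, 84, 30, 66, 54],   # follow-up time
    "died":      [0,  1,  0,  1,  0,  1,  0,  1,  0,  1],    # event observed
    "procedure": ["segmentectomy", "lobectomy"] * 5,
})

kmf = KaplanMeierFitter()
for name, group in df.groupby("procedure"):
    kmf.fit(group["months"], event_observed=group["died"], label=name)
    print(name, "median survival (months):", kmf.median_survival_time_)

# Log-rank test for a difference between the two survival curves.
seg = df[df["procedure"] == "segmentectomy"]
lob = df[df["procedure"] == "lobectomy"]
result = logrank_test(seg["months"], lob["months"],
                      event_observed_A=seg["died"], event_observed_B=lob["died"])
print("log-rank p-value:", result.p_value)
```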

"I really enjoy working on it every week," Deng says about her thesis, "which is not something I can usually say about most of the work I do!" According to Deng, a lot of research, hers included, is derived from researchers mulling over what they think would be interesting to look at in a silo, without considering what problems might be most useful for society at large. What's valuable for Deng about her thesis work is that she's gotten to work closely with not just statisticians but thoracic surgeons. "Originally my thesis was going to go in a different direction," she said, but upon consulting with surgeons, who directly impacted the data she was using and would be directly impacted by her results, she changed her research question.

The merits of an interdisciplinary academic path

Deng's unique path makes her the perfect person to ask: is pursuing seemingly disparate interests, like being a Statistics and Computer Science double-major on the pre-med track, worth it? And judging by Deng's insights, the answer is a resounding yes.

At Duke, she says, "I've been challenged by many things that I wouldn't have expected to be able to do myself," like dealing with the catch-up work of switching majors and pursuing independent research. But over time she's learned that even if something seems daunting in the moment, if you apply yourself, most, if not all, things can be accomplished. And she's grateful for the confidence that she's acquired through pursuing her unique path.

Moreover, as Deng reflects on where she sees herself and the field of healthcare a few years from now, she muses that for the first time in the history of healthcare, a third-party player is joining the mix: technology.

While her initial motivation to pursue statistics and computer science was to aid her in research, "I've now seen how it's beneficial for my long-term goals of going to med school and becoming a physician." As healthcare evolves and the introduction of algorithms, AI, and other technological advancements widens the gap between traditional and contemporary medicine, Deng hopes to deconstruct it all and make healthcare technology more accessible to patients and providers.

"At the end of the day, it's data that doctors are communicating to patients," Deng says. So she's grateful to have gained experience interpreting and modeling data at Duke through her academic coursework.

And as the Statistics major particularly has taught her, complexity is not always a good thing: sometimes, the simpler you can make something, the better. Some research doesn't follow this principle; she says she's encountered her fair share of research that feels performative, prioritizing complexity to appear more intellectual. But by continually asking herself whether her research is explainable and applicable, she hopes to let those two questions be the North Stars that guide her future research endeavors.


When asked what advice she has for first-years, Deng said that it's important not to let your inexperience or perceived lack of knowledge prevent you from diving into what interests you. Even as a first-year undergrad, know that you can contribute to academia and the world of research.

And for those who might be interested in pursuing an academic path like Deng's, there's some good news. After Deng talked to the Statistics department about the lack of pre-health representation that existed, the department now has a pre-health listserv that you can join for updates and opportunities pertaining specifically to pre-med Stats majors. And Deng emphasizes that the Stats-CS-pre-med group at Duke is growing. She's noticed quite a few underclassmen in the Statistics and Computer Science departments who vocalize an interest in medical school.

So if you also want to hone your ability to communicate research that you care about, whether you're pre-med or not, feel free to jump right into the world of data analysis. As Deng concludes, everyone has something to say that's important.

Post by Meghna Datta, Class of 2023

Read more from the original source:

How Research Helped One Pre-med Discover a Love for Statistics ... - Duke University

Read More..

NVIDIA Invites Dataiku to the DGX-Ready Software Program to … – Database Trends and Applications

Dataiku, the platform for Everyday AI, is joining NVIDIA's DGX-Ready Software program, simplifying the deployment and management of AI for customers.

Dataiku has been selected for the exclusive, invite-only program because of its tested and certified solutions that pair with NVIDIA DGX systems, allowing NVIDIA customers and partners to easily implement advanced analytics and AI.

"Enterprises are seeking integrated solutions to power successful AI deployments," said John Barco, senior director of DGX Product Management, NVIDIA. "Pairing NVIDIA DGX systems with Dataiku software can help customers seamlessly and securely access and manage their data to simplify the deployment of enterprise AI."

Already a member of the NVIDIA AI Accelerated program, Dataiku has also recently become a Premier member of NVIDIA Inception, a program that offers resources and support to cutting-edge startups transforming industries with advancements in AI and data science.

Through Dataiku's collaboration with NVIDIA, customers will be able to overcome challenges through the following benefits:

"This collaboration gives our customers a clear advantage with unrivaled access to market-leading NVIDIA technology," said Abhi Madhugiri, vice president, global technology alliances at Dataiku. "The combination of Dataiku's platform with NVIDIA accelerated computing solutions like DGX will accelerate and simplify data projects, bringing the power of AI to organizations regardless of size or industry. Joining the exclusive NVIDIA DGX-Ready Software program and becoming a Premier member of NVIDIA Inception is an exciting opportunity for us to better serve our customers and continue our mission to democratize AI."

For more information about this news, visit http://www.dataiku.com.

Link:

NVIDIA Invites Dataiku to the DGX-Ready Software Program to ... - Database Trends and Applications

Read More..

Ethical Use of AI in Insurance Modeling and Decision-Making – FTI Consulting

With increased availability of next-generation technology and data mining tools, insurance company use of external consumer data sets and artificial intelligence (AI) and machine learning (ML)-enabled analytical models is rapidly expanding and accelerating. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing, distribution and claims management to leverage technical innovations to realize enhanced risk management, revenue growth and improved profitability. At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges presented by these complex, highly innovative tools, specifically the potential for unintended bias against protected classes of people.

In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support the implementation of a 2021 law passed by the state's legislature.[1] This law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact of such use is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.[2] In pre-release public meetings with industry stakeholders, the Colorado Division of Insurance also offered guidance that similar rules should be expected in the not-too-distant future for property & casualty insurers. In the same vein, UK and EU regulators are now penning new policies and legal frameworks to prevent AI model-driven consumer bias, ensure transparency and explainability of model-based decisions for customers and other stakeholders, and impose accountability for insurers who leverage these capabilities.[3]

Clearly, regulators around the globe believe that well-defined guardrails are needed to ensure the ethical use of external data and AI-powered analytics in insurance decision-making. Moreover, in some jurisdictions, public oversight and enablement bodies such as the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) are also actively working to define cross-industry guidelines and rules for the acceptable use of external data to train AI/ML-powered decision support models without resulting in discrimination against protected classes of consumers.[4] Examples of potentially disfavored data may include:

Based on the Colorado draft regulation that was recently published, the expected breadth of pending new AI and external data set rules could mean potentially onerous execution challenges for insurers, who seek to balance the need for proactive risk management, market penetration and profitability objectives with principles of consumer fairness. For many insurers, internal data science and technology resources that are already swamped with their day jobs will be insufficient to meet expected reporting and model testing obligations across the multiple jurisdictions in which their companies do business. In other situations, insurers may lack appropriate test data and skill sets to assess potential model bias. In either instance or both, model testing and disclosure obligations will continue to mount and support will be needed to satisfy regulator demands and avoid the significant business ramifications of non-compliance.
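Neither the article nor the Colorado draft regulation prescribes a specific test here, but a common first pass at the bias assessment being described is to compare a model's favorable-outcome rates across demographic groups. The sketch below computes an adverse impact ratio on fabricated decisions; the group labels, data, and the informal four-fifths (0.8) threshold are assumptions for illustration, not a regulatory standard for insurers.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# Fabricated outputs; real compliance testing would be far more extensive.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Favorable-outcome rate per group, compared against the most-favored group.
rates = decisions.groupby("group")["approved"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# Informal four-fifths rule of thumb: ratios below 0.8 warrant a closer look.
flagged = impact_ratios[impact_ratios < 0.8]
if not flagged.empty:
    print("Groups needing further bias review:", list(flagged.index))
```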

So, how can insurance companies and their data science/technology teams best address the operational challenges that evolving data privacy and model ethics regulations will certainly present? Leading companies that want to get ahead of the curve may opt to partner with skilled experts, who understand the data and processing complexities of non-linear AI/ML-enabled models. The best of these external operators will also bring to the table deep insurance domain knowledge to assure context for testing and offer reliable, independent and market-proven test data and testing methodologies that can be easily demonstrated and explained to insurers and regulators alike.

The burden of regulatory compliance in the insurance industry cannot be diminished, and it can challenge a company's ability to attain target business benefits if these two seemingly opposing objectives, compliance and profitability, are not managed in a proactively strategic and supportive way. With appropriate guidance and execution, insurers who comply with new and emerging regulations for the use of AI-powered decision-support models and external data sets may actually realize a number of tangible benefits beyond compliance, including more stable analytic insight models, improved new business profitability and operational scalability, and a better customer experience that enhances brand loyalty and drives customer retention and enhanced lifetime value.

More here:

Ethical Use of AI in Insurance Modeling and Decision-Making - FTI Consulting

Read More..

Machine Learning Finds 140000 Future Star Forming Regions in the Milky Way – Universe Today

Our galaxy is still actively making stars. We've known that for a while, but sometimes it's hard to understand the true scale in astronomical terms. A team from Japan is trying to help with that by using a novel machine-learning technique to identify soon-to-be star-forming regions spread throughout the Milky Way. They found 140,000 of them.

The regions, known in astronomy as molecular clouds, are typically invisible to humans. However, they do emit radio waves, which can be picked up by the massive radio telescopes dotted around our planet. Unfortunately, the Milky Way is the only galaxy close enough for us to pick up those signals, and even in our home galaxy, the clouds are spread so far apart that it has been challenging to capture an overall picture of them.

So a team from Osaka Metropolitan University turned to machine learning. They took a data set from the Nobeyama radio telescope, located in Nagano prefecture, and looked for the prevalence of carbon monoxide molecules. That resulted in an astonishing 140,000 visible molecular clouds in just one quadrant of the Milky Way.

As a next step, the team looked deeper into the data and figured out how large the clouds were, as well as where they were located in the galactic plane. Given that there are three more quadrants to explore, there's a good chance there are significantly more to find.

But to access at least two of those quadrants, they need a different radio telescope. Nobeyama is located in Japan, in the northern hemisphere, and can't see the southern sky. Plenty of radio telescopes, such as ALMA, are already online in the southern hemisphere, and others, such as the Square Kilometre Array, are on the horizon and could provide an even deeper look at the southern hemisphere's galactic plane. The team just needs to pick which one they would like to use.

One of the great things about AI is that once you train it, which can take a significant amount of time, analyzing similar data sets is a breeze. Future work on more radio data should take advantage of that fact and allow Dr. Shinji Fujita and his team to quickly analyze even more star-forming regions. With some additional research, we'll be able to truly understand our galaxy's creation engine sometime in the not-too-distant future.

Learn More:
Osaka Metropolitan University: AI draws most accurate map of star birthplaces in the Galaxy
Fujita et al.: Distance determination of molecular clouds in the first quadrant of the Galactic plane using deep learning: I. Method and results
Universe Today: One of the Brightest Star-Forming Regions in the Milky Way, Seen in Infrared
Universe Today: Speedrunning Star Formation in the Cygnus X Region

Lead Image: Star-forming region Sharpless 2-106, about 2,000 light-years away from Earth. Credit: NASA, ESA, STScI/AURA


More:
Machine Learning Finds 140000 Future Star Forming Regions in the Milky Way - Universe Today

Read More..

Crypto AI Announces Its Launch, Using AI Machine Learning to … – GlobeNewswire

LONDON, UK, March 23, 2023 (GLOBE NEWSWIRE) -- Crypto AI ($CAI), an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets, has announced its official launch in March 2023. The project aims to revolutionize the NFT space by combining the power of artificial intelligence and machine learning.

Crypto AI ($CAI) is a software application that generates NFTs through a proprietary algorithm that creates unique digital assets. These assets can then be sold on various NFT marketplaces or used as part of a larger project.

Discover What Crypto AI Does

Crypto AI strives to disrupt the NFT and ChatGPT space using artificial intelligence and machine learning.

Martin Weiner, the CEO of Crypto AI, stated, "We are excited to announce the official launch of Crypto AI, an AI-powered NFT generator that uses machine learning algorithms to create unique digital assets. Our goal is to disrupt the NFT space by offering a product that can generate truly unique NFTs that stand out in the marketplace."

Weiner went on to explain the key features of Crypto AI that set it apart from other NFT generators. "What sets Crypto AI apart is the power of our proprietary algorithm. Our algorithm uses advanced machine learning techniques to create unique digital assets that are truly one-of-a-kind. Our AI-powered NFT generator is not only faster than traditional methods, but it is also more accurate and efficient."

Crypto AI aims to offer a new way for artists and creators to monetize their work through NFTs. The project believes that AI-powered NFTs will help increase the value of digital assets and make them more accessible to a broader audience.

Weiner added, "We believe that AI-powered NFTs have the potential to revolutionize the art world by making it more inclusive and accessible to a wider audience. Our platform offers a new way for artists and creators to monetize their work and showcase it to the world."

Crypto AI is also committed to sustainability and plans to use renewable energy sources for its operations. The project believes that it is essential to minimize the environmental impact of its operations and is actively exploring ways to reduce its carbon footprint.

"We understand the importance of sustainability, and we are committed to minimizing our environmental impact. We plan to use renewable energy sources for our operations and explore ways to reduce our carbon footprint," Weiner stated.

Crypto AI's launch is highly anticipated by the NFT community, and the project has already gained significant interest from artists and collectors worldwide. The project's innovative approach to NFT creation and its commitment to sustainability have made it stand out in a crowded marketplace.

About Crypto AI

Crypto AI ChatGPT Bot is an AI-powered bot that assists users in their conversations with automated and intelligent responses. We use natural language processing and machine learning algorithms to generate meaningful and relevant responses to user queries.

AI App on

https://cai.codes/artist

https://cai.codes/chat

Social Links

Twitter: https://twitter.com/CryptoAIbsc

Telegram: https://t.me/CryptoAI_eng

Medium: https://medium.com/@CryptoAI

Discord: https://github.com/crypto-ai-git

Media Contact

Brand: Crypto AI

E-mail: team@cai.codes

Website: https://cai.codes

SOURCE: Crypto AI

Read more here:
Crypto AI Announces Its Launch, Using AI Machine Learning to ... - GlobeNewswire

Read More..

Unlock the Next Wave of Machine Learning with the Hybrid Cloud – The New Stack

Machine learning is no longer about experiments. Most industry-leading enterprises have already seen dramatic successes from their investments in machine learning (ML), and there is near-universal agreement among business executives that building data science capabilities is vital to maintaining and extending their competitive advantage.

The bullish outlook is evident in the U.S. Bureau of Labor Statistics' predictions regarding growth of the data science career field: "Employment of data scientists is projected to grow 36% from 2021 to 2031, much faster than the average for all occupations."

The aim now is to grow these initial successes beyond the specific parts of the business where they had initially emerged. Companies are looking to scale their data science capabilities to support their entire suite of business goals and embed ML-based processes and solutions everywhere the company does business.

Vanguards within the most data-centric industries, including pharmaceuticals, finance, insurance, aerospace and others, are investing heavily. They are assembling formidable teams of data scientists with varied backgrounds and expertise to develop and place ML models at the core of as many business processes as possible.

More often than not, they are running headlong into the challenges of executing data science projects across the regional, organizational, and technological divisions that abound in every organization. Data is worthless without the tools and infrastructure to use it, and both are fragmented across regions and business units, as well as in cloud and on-premises environments.

Even when analysts and data scientists overcome the hurdle of getting access to data in other parts of the business, they quickly find that they lack effective tools and hardware to leverage the data. At best, this results in low productivity, weeks of delays, and significantly higher costs due to suboptimal hardware, expensive data storage, and unnecessary data transfers. At worst, it results in project failure, or not being able to initiate the project to begin with.

Successful enterprises are learning to overcome these challenges by embracing hybrid-cloud strategies. Hybrid cloud, the integrated use of on-premises and cloud environments, also encompasses multicloud, the use of cloud offerings from multiple cloud providers. A hybrid-cloud approach enables companies to leverage the best of all worlds.

They can take advantage of the flexibility of cloud environments, the cost benefits of on-premises infrastructure, and the ability to select best-of-breed tools and services from any cloud vendor and machine learning operations tooling. More importantly for data science, hybrid cloud enables teams to leverage the end-to-end set of tools and infrastructure necessary to unlock data-driven value everywhere their data resides.

It allows them to arbitrage the inherent advantages of different environments while preserving data sovereignty and providing the flexibility to evolve as business and organizational conditions change.

While many organizations try to cope with disconnected platforms spread across different on-premises and cloud environments, today the most successful organizations understand that their data science operations must be hybrid cloud by design. That is, they implement end-to-end ML platforms that support hybrid cloud natively and provide integrated capabilities that work seamlessly and consistently across environments.

In a recent Forrester survey of AI infrastructure decision-makers, 71% of IT decision-makers say hybrid cloud support by their AI platform is important for executing their AI strategy, and 29% say it's already critical. Further, 91% said they will be investing in hybrid cloud within two years, and 66% said they already had invested in hybrid support for AI workloads.

In addition to the overarching benefit of a hybrid-cloud strategy for data science (the ability to execute data science projects and implement ML solutions anywhere in your business), there are three key drivers that are accelerating the trend:

Data sovereignty: Regulatory requirements like GDPR are forcing companies to process data locally, under the threat of heavy fines, in more and more parts of the world. The EU Artificial Intelligence Act, which triages AI applications across three risk categories and calls for outright bans on applications deemed to be the riskiest, will go a step further than fines. Gartner predicts that 65% of the world's population will soon be covered by similar regulations.

Cost optimization: The size of ML workloads grows as companies scale data science because of the increasing number of use cases, larger volumes of data and the use of computationally intensive deep learning models. Hybrid-cloud platforms enable companies to direct workloads to the most cost-effective infrastructure, e.g., optimize utilization of an on-premises GPU cluster and mitigate rising cloud costs.

Flexibility: Taking a hybrid-cloud approach allows for future-proofing to address the inevitable changes in business operations and IT strategy, such as a merger or acquisition involving a company that has a different tech stack, expansion to a new geography where your default cloud vendor does not operate or even a cloud vendor becoming a significant competitor.

Implementing a hybrid-cloud strategy for ML is easier said than done. For example, no public cloud vendor offers more than token support for on-premises workloads, let alone support for a competitor's cloud, and the range of tools and infrastructure your data science teams need scales as you grow your data science rosters and undertake more ML projects. Here are the three essential capabilities for which every business must provide hybrid-cloud support in order to scale data science across the organization:

Full data science life cycle coverage: From model development to deployment to monitoring, enterprises need data science tooling and operations to manage every aspect of data science at scale.

Agnostic support for data science tooling: Given the variety of ML and AI projects and the differing skills and backgrounds of the data scientists across your distributed enterprise, your strategy needs to provide hybrid-cloud support for the major open-source data science languages and frameworks, and likely a few proprietary tools, not to mention the extensibility to support the host of new tools and methods that are constantly being developed.

Scalable compute infrastructure: More data, more use cases and more advanced methods require the ability to scale up and scale out with distributed compute and GPU support, but this also requires an ability to support multiple distributed compute frameworks, since no single framework is optimal for all workloads. Spark may work perfectly for data engineering, but you should expect that you'll need a data-science-focused framework like Ray or Dask (or even OpenMPI) for your ML model training at scale, as sketched below.
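As a concrete illustration of that last point, here is a minimal sketch of fanning training work out across a cluster with Ray. The data shards are synthetic and the scikit-learn model is a stand-in; a real workload would more likely use a purpose-built layer such as Ray Train or Dask-ML rather than this hand-rolled pattern.

```python
# Minimal sketch of scaling out ML training with Ray.
# Synthetic shards and a toy model; not a production training pattern.
import numpy as np
import ray
from sklearn.linear_model import SGDClassifier

ray.init(ignore_reinit_error=True)  # attaches to a cluster if configured, else runs locally

@ray.remote
def train_on_shard(seed: int):
    """Train one model on one synthetic data shard; runs on any node in the cluster."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(10_000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model = SGDClassifier(random_state=seed)
    model.fit(X, y)
    return model.coef_

# Fan training out across eight shards in parallel, then average the coefficients.
shard_coefs = ray.get([train_on_shard.remote(seed) for seed in range(8)])
averaged_coef = np.mean(shard_coefs, axis=0)
print("averaged coefficient shape:", averaged_coef.shape)
```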

Embedding ML models throughout your core business functions lies in the heart of AI-based digital transformation. Organizations must adopt a hybrid-cloud or equivalent multicloud strategy to expand beyond initial successes and deploy impactful ML solutions everywhere.

Data science teams need end-to-end, extensible and scalable hybrid-cloud ML platforms to access the tools, infrastructure and data they need to develop and deploy ML solutions across the business. Organizations need these platforms for the regulatory, cost and flexibility benefits they provide.

The Forrester survey notes that organizations that adopt hybrid cloud approaches to AI development are already seeing the benefits across the entire AI/ML life cycle, experiencing 48% fewer challenges in deploying and scaling their models than companies relying on a single cloud strategy. All evidence suggests that the vanguard of companies who have already invested in their data science teams and platforms are pulling even further ahead using hybrid cloud.

Excerpt from:
Unlock the Next Wave of Machine Learning with the Hybrid Cloud - The New Stack

Read More..

Scientists are using machine learning to forecast bird migration and … – Yahoo News

With chatbots like ChatGPT making a splash, machine learning is playing an increasingly prominent role in our lives. For many of us, it's been a mixed bag. We rejoice when our Spotify For You playlist finds us a new jam, but groan as we scroll through a slew of targeted ads on our Instagram feeds.

Machine learning is also changing many fields that may seem surprising. One example is my discipline, ornithology, the study of birds. It isn't just solving some of the biggest challenges associated with studying bird migration; more broadly, machine learning is expanding the ways in which people engage with birds. As spring migration picks up, here's a look at how machine learning is influencing ways to research birds and, ultimately, to protect them.

Most birds in the Western Hemisphere migrate twice a year, flying over entire continents between their breeding and nonbreeding grounds. While these journeys are awe-inspiring, they expose birds to many hazards en route, including extreme weather, food shortages and light pollution that can attract birds and cause them to collide with buildings.

Our ability to protect migratory birds is only as good as the science that tells us where they go. And that science has come a long way.

In 1920, the U.S. Geological Survey launched the Bird Banding Laboratory, spearheading an effort to put bands with unique markers on birds, then recapture the birds in new places to figure out where they traveled. Today researchers can deploy a variety of lightweight tracking tags on birds to discover their migration routes. These tools have uncovered the spatial patterns of where and when birds of many species migrate.

However, tracking birds has limitations. For one thing, over 4 billion birds migrate across the continent every year. Even with increasingly affordable equipment, the number of birds that we track is a drop in the bucket. And even within a species, migratory behavior may vary across sexes or populations.


Further, tracking data tells us where birds have been, but it doesn't necessarily tell us where they're going. Migration is dynamic, and the climates and landscapes that birds fly through are constantly changing. That means it's crucial to be able to predict their movements.

This is where machine learning comes in. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn tasks or associations without explicitly being programmed. We use it to train algorithms that tackle various tasks, from forecasting weather to predicting March Madness upsets.

But applying machine learning requires data, and the more data the better. Luckily, scientists have inadvertently compiled decades of data on migrating birds through the Next Generation Weather Radar system. This network, known as NEXRAD, is used to measure weather dynamics and help predict future weather events, but it also picks up signals from birds as they fly through the atmosphere.

BirdCast is a collaborative project of Colorado State University, the Cornell Lab of Ornithology and the University of Massachusetts that seeks to leverage that data to quantify bird migration. Machine learning is central to its operations. Researchers have known since the 1940s that birds show up on weather radar, but to make that data useful, we need to remove nonavian clutter and identify which scans contain bird movement.

This process would be painstaking by hand, but by training algorithms to identify bird activity, we have automated it and unlocked decades of migration data. And machine learning allows the BirdCast team to take things further: by training an algorithm to learn what atmospheric conditions are associated with migration, we can use predicted conditions to produce forecasts of migration across the continental U.S.
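BirdCast's production models are far more sophisticated than this, but the core idea of learning a mapping from atmospheric conditions to expected migration intensity can be sketched with a standard regressor. Everything below (the feature names, synthetic data, and choice of a random forest) is an assumption for illustration, not the BirdCast pipeline.

```python
# Minimal sketch of forecasting migration intensity from atmospheric conditions.
# Synthetic data and feature names are illustrative; this is not the BirdCast model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_nights = 2_000

# Hypothetical nightly conditions: tailwind (m/s), temperature (C), cloud cover (0-1).
X = np.column_stack([
    rng.normal(3, 4, n_nights),
    rng.normal(12, 6, n_nights),
    rng.uniform(0, 1, n_nights),
])
# Synthetic "radar-derived" intensity that favors tailwinds and clear, warm nights.
y = 1000 + 150 * X[:, 0] + 40 * X[:, 1] - 600 * X[:, 2] + rng.normal(0, 200, n_nights)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Forecast intensity for a predicted night: strong tailwind, mild, mostly clear.
tonight = np.array([[8.0, 15.0, 0.1]])
print("predicted migration intensity:", round(model.predict(tonight)[0]))
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```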

BirdCast began broadcasting these forecasts in 2018 and has become a popular tool in the birding community. Many users may recognize that radar data helps produce these forecasts, but fewer realize that it's a product of machine learning.

Currently these forecasts can't tell us what species are in the air, but that could be changing. Last year, researchers at the Cornell Lab of Ornithology published an automated system that uses machine learning to detect and identify nocturnal flight calls. These are species-specific calls that birds make while migrating. Integrating this approach with BirdCast could give us a more complete picture of migration.

These advancements exemplify how effective machine learning can be when guided by expertise in the field where it is being applied. As a doctoral student, I joined Colorado State University's Aeroecology Lab with a strong ornithology background but no machine learning experience. Conversely, Ali Khalighifar, a postdoctoral researcher in our lab, has a background in machine learning but has never taken an ornithology class.

Together, we are working to enhance the models that make BirdCast run, often leaning on each other's insights to move the project forward. Our collaboration typifies the convergence that allows us to use machine learning effectively.

Machine learning is also helping scientists engage the public in conservation. For example, forecasts produced by the BirdCast team are often used to inform Lights Out campaigns.

These initiatives seek to reduce artificial light from cities, which attracts migrating birds and increases their chances of colliding with human-built structures, such as buildings and communication towers. Lights Out campaigns can mobilize people to help protect birds at the flip of a switch.

As another example, the Merlin bird identification app seeks to create technology that makes birding easier for everyone. In 2021, the Merlin staff released a feature that automates song and call identification, allowing users to identify what they're hearing in real time, like an ornithological version of Shazam.

This feature has opened the door for millions of people to engage with their natural spaces in a new way. Machine learning is a big part of what made it possible.

"Sound ID is our biggest success in terms of replicating the magical experience of going birding with a skilled naturalist," Grant Van Horn, a staff researcher at the Cornell Lab of Ornithology who helped develop the algorithm behind this feature, told me.

Opportunities for applying machine learning in ornithology will only increase. As billions of birds migrate over North America to their breeding grounds this spring, people will engage with these flights in new ways, thanks to projects like BirdCast and Merlin. But that engagement is reciprocal: The data that birders collect will open new opportunities for applying machine learning.

"Computers can't do this work themselves. Any successful machine learning project has a huge human component to it. That is the reason these projects are succeeding," Van Horn said to me.

This article is republished from The Conversation, an independent nonprofit news site dedicated to sharing ideas from academic experts.

It was written by: Miguel Jimenez, Colorado State University.


Miguel Jimenez receives funding from the National Aeronautics and Space Administration.

Here is the original post:
Scientists are using machine learning to forecast bird migration and ... - Yahoo News

Read More..

Striveworks Partners With Carahsoft to Provide AI and Machine … – PR Newswire

AUSTIN, Texas, March 23, 2023 /PRNewswire/ -- Striveworks, a pioneer in responsible MLOps, today announced a partnership with Carahsoft Technology Corp., The Trusted Government IT Solutions Provider. Under the agreement, Carahsoft will serve as Striveworks' public sector distributor, making the company's Chariot platform and other software solutions available to government agencies through Carahsoft's reseller partners, NASA Solutions for Enterprise-Wide Procurement (SEWP) V, Information Technology Enterprise Solutions Software 2 (ITES-SW2), OMNIA Partners, and National Cooperative Purchasing Alliance (NCPA) contracts.

"We are excited to partner with Carahsoft and its reseller partners to leverage their public sector expertise and expand access to our products and solutions," said Quay Barnett, Executive Vice President at Striveworks. "Striveworks' inclusion on Carahsoft's contracts enables U.S. Federal, State, and Local Governments to make better models, faster."

Decision making in near-peer and contested environments requires end-to-end dynamic data capabilities that are rapidly deployed. Current solutions remain isolated, not scalable, and not integrated from enterprise to edge. The Striveworks and Carahsoft partnership helps simplify the procurement of Striveworks' AI and machine learning solutions.

Striveworks' Chariot provides a no-code/low-code solution that supports all phases of mission-relevant analytics, including developing, deploying, monitoring, and remediating models. Also available through the partnership is Ark, Striveworks' edge model deployment software for the rapid and custom integration of computer vision, sensors, and telemetry data collection.

"We are pleased to add Striveworks' solutions to our AI and machine learning portfolio," said Michael Adams, Director of Carahsoft's AI/ML Solutions Portfolio. "Striveworks' data science solutions and products allow government agencies to simplify their machine learning operations. We look forward to working with Striveworks and our reseller partners to help the public sector drive better outcomes in operationally relevant timelines."

Striveworks' offerings are available through Carahsoft's SEWP V contracts NNG15SC03B and NNG15SC27B, ITES-SW2 contract W52P1J-20-D-0042, NCPA contract NCPA001-86, and OMNIA Partners contract R191902. For more information contact Carahsoft at (888) 606-2770 or [emailprotected].

About Striveworks

Striveworks is a pioneer in responsible MLOps for national security and other highly regulated spaces. Striveworks' MLOps platform, Chariot, enables organizations to deploy AI/ML models at scale while maintaining full audit and remediation capabilities. Founded in 2018, Striveworks was highlighted as an exemplar in the National Security Commission on Artificial Intelligence's 2020 Final Report. For more information visit http://www.striveworks.com.

About Carahsoft

Carahsoft Technology Corp. is The Trusted Government IT Solutions Provider, supporting Public Sector organizations across Federal, State and Local Government agencies and Education and Healthcare markets. As the Master Government Aggregator for our vendor partners, we deliver solutions for Artificial Intelligence & Machine Learning, Cybersecurity, MultiCloud, DevSecOps, Big Data, Open Source, Customer Experience and more. Working with resellers, systems integrators and consultants, our sales and marketing teams provide industry leading IT products, services and training through hundreds of contract vehicles. Visit us at http://www.carahsoft.com.

Media Contact: Mary Lange, (703) 230-7434, [emailprotected]

SOURCE Striveworks, Inc.

View post:
Striveworks Partners With Carahsoft to Provide AI and Machine ... - PR Newswire

Read More..