
AI and science: what 1,600 researchers think – Nature.com


Artificial-intelligence (AI) tools are becoming increasingly common in science, and many scientists anticipate that they will soon be central to the practice of research, suggests a Nature survey of more than 1,600 researchers around the world.

Science and the new age of AI: a Nature special

When respondents were asked how useful they thought AI tools would become for their fields in the next decade, more than half expected the tools to be very important or essential. But scientists also expressed strong concerns about how AI is transforming the way that research is done.

The share of research papers that mention AI terms has risen in every field over the past decade, according to an analysis for this article by Nature.

Machine-learning statistical techniques are now well established, and the past few years have seen rapid advances in generative AI, including large language models (LLMs), that can produce fluent outputs such as text, images and code on the basis of the patterns in their training data. Scientists have been using these models to help summarize and write research papers, brainstorm ideas and write code, and some have been testing out generative AI to help produce new protein structures, improve weather forecasts and suggest medical diagnoses, among many other ideas.

See Supplementary information for full methodology.

With so much excitement about the expanding abilities of AI systems, Nature polled researchers about their views on the rise of AI in science, including both machine-learning and generative AI tools.

Focusing first on machine learning, researchers picked out many ways that AI tools help them in their work. From a list of possible advantages, two-thirds noted that AI provides faster ways to process data, 58% said that it speeds up computations that were not previously feasible, and 55% mentioned that it saves scientists time and money.

"AI has enabled me to make progress in answering biological questions where progress was previously infeasible," said Irene Kaplow, a computational biologist at Duke University in Durham, North Carolina.

The survey results also revealed widespread concerns about the impacts of AI on science. From a list of possible negative impacts, 69% of the researchers said that AI tools can lead to more reliance on pattern recognition without understanding, 58% said that results can entrench bias or discrimination in data, 55% thought that the tools could make fraud easier and 53% noted that ill-considered use can lead to irreproducible research.

"The main problem is that AI is challenging our existing standards for proof and truth," said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.

To assess the views of active researchers, Nature e-mailed more than 40,000 scientists who had published papers in the last 4 months of 2022, as well as inviting readers of the Nature Briefing to take the survey. Because researchers interested in AI were much more likely to respond to the invitation, the results aren't representative of all scientists. However, the respondents fell into three groups: 48% who directly developed or studied AI themselves, 30% who had used AI for their research, and the remaining 22% who did not use AI in their science. (These categories were more useful for probing different responses than were respondents' research fields, genders or geographical regions; see Supplementary information for full methodology.)

Among those who used AI in their research, more than one-quarter felt that AI tools would become essential to their field in the next decade, compared with 4% who thought the tools essential now, and another 47% felt AI would be very useful. (Those whose research field was already AI were not asked this question.) Researchers who don't use AI were, unsurprisingly, less excited. Even so, 9% felt these techniques would become essential in the next decade, and another 34% said they would be very useful.

The chatbot ChatGPT and its LLM cousins were the tools that researchers mentioned most often when asked to type in the most impressive or useful example of AI tools in science (closely followed by protein-folding AI tools, such as AlphaFold, that create 3D models of proteins from amino-acid sequences). But ChatGPT also topped researchers' choice of the most concerning uses of AI in science. When asked to select from a list of possible negative impacts of generative AI, 68% of researchers worried about proliferating misinformation, another 68% thought that it would make plagiarism easier and detection harder, and 66% were worried about bringing mistakes or inaccuracies into research papers.

Respondents added that they were worried about faked studies, false information and perpetuating bias if AI tools for medical diagnostics were trained on historically biased data. Scientists have seen evidence of this: a team in the United States reported, for instance, that when they asked the LLM GPT-4 to suggest diagnoses and treatments for a series of clinical case studies, the answers varied depending on the patient's race or gender (T. Zack et al. Preprint at medRxiv https://doi.org/ktdz; 2023), probably reflecting the text that the chatbot was trained on.

"There is clearly misuse of large language models, inaccuracy and hollow but professional-sounding results that lack creativity," said Isabella Degen, a software engineer and former entrepreneur who is now studying for a PhD in using AI in medicine at the University of Bristol, UK. "In my opinion, we don't understand well where the border between good use and misuse is."

The clearest benefit, researchers thought, was that LLMs aided researchers whose first language is not English, by helping to improve the grammar and style of their research papers, or to summarize or translate other work. "A small number of malicious players notwithstanding, the academic community can demonstrate how to use these tools for good," said Kedar Hippalgaonkar, a materials scientist at the National University of Singapore.

Researchers who regularly use LLMs at work are still in a minority, even among the interested group who took Nature's survey. Some 28% of those who studied AI said they used generative AI products such as LLMs every day or more than once a week; 13% of those who only use AI for research said the same; and just 1% of the others did, although many had at least tried the tools.

Moreover, the most popular use among all groups was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and to help write research papers.

Some scientists were unimpressed by the output of LLMs. "It feels ChatGPT has copied all the bad writing habits of humans: using a lot of words to say very little," one researcher who uses the LLM to help copy-edit papers wrote. Although some were excited by the potential of LLMs for summarizing data into narratives, others had a negative reaction. "If we use AI to read and write articles, science will soon move from 'for humans by humans' to 'for machines by machines'," wrote Johannes Niskanen, a physicist at the University of Turku in Finland.

Around half of the scientists in the survey said that there were barriers preventing them from developing or using AI as much as they would like, but the obstacles seem to be different for different groups. The researchers who directly studied AI were most concerned about a lack of computing resources, funding for their work and high-quality data to run AI on. Those who work in other fields but use AI in their research tended to be more worried by a lack of skilled scientists and training resources, and they also mentioned security and privacy considerations. Researchers who didn't use AI generally said that they didn't need it or find it useful, or that they lacked experience or time to investigate it.

Another theme that emerged from the survey was that commercial firms dominate computing resources for AI and ownership of AI tools, and this was a concern for some respondents. Of the scientists in the survey who studied AI, 23% said they collaborated with or worked at firms developing these tools (with Google and Microsoft the most often named), whereas 7% of those who used AI did so. Overall, slightly more than half of those surveyed felt it was very or somewhat important that researchers using AI collaborate with scientists at such firms.

"The principles of LLMs can be usefully applied to build similar models in bioinformatics and cheminformatics," says Garrett Morris, a chemist at the University of Oxford, UK, who works on software for drug discovery, but it's clear that the models must be extremely large. "Only a very small number of entities on the planet have the capabilities to train the very large models, which require large numbers of GPUs [graphics processing units], the ability to run them for months, and to pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries," he says.

Researchers have repeatedly warned that the naive use of AI tools in science can lead to mistakes, false positives and irreproducible findings, potentially wasting time and effort. And in the survey, some scientists said they were concerned about poor-quality research in papers that used AI. "Machine learning can sometimes be useful, but AI is causing more damage than it helps. It leads to false discoveries due to scientists using AI without knowing what they are doing," said Lior Shamir, a computer scientist at Kansas State University in Manhattan.

When asked if journal editors and peer reviewers could adequately review papers that used AI, respondents were split. Among the scientists who used AI for their work but didn't directly develop it, around half said they didn't know, one-quarter thought reviews were adequate, and one-quarter thought they were not. Those who developed AI directly tended to have a more positive opinion of the editorial and review processes.

"Reviewers seem to lack the required skills and I see many papers that make basic mistakes in methodology, or lack even basic information to be able to reproduce the results," says Duncan Watson-Parris, an atmospheric physicist who uses machine learning at the Scripps Institution of Oceanography in San Diego, California. The key, he says, is whether journal editors are able to find referees with enough expertise to review the studies.

That can be difficult to do, according to one Japanese respondent who worked in earth sciences but didn't want to be named. "As an editor, it's very hard to find reviewers who are familiar both with machine-learning (ML) methods and with the science that ML is applied to," he wrote.

Nature also asked respondents how concerned they were by seven potential impacts of AI on society that have been widely discussed in the news. The potential for AI to be used to spread misinformation was the most worrying prospect for the researchers, with two-thirds saying they were extremely or very concerned by it. Automated AI weapons and AI-assisted surveillance were also high up on the list. The least concerning impact was the idea that AI might be an existential threat to humanity, although almost one-fifth of respondents still said they were extremely or very concerned by this prospect.

Many researchers, however, said AI and LLMs were here to stay. "AI is transformative," wrote Yury Popov, a specialist in liver disease at the Beth Israel Deaconess Medical Center in Boston, Massachusetts. "We have to focus now on how to make sure it brings more benefit than issues."


Fish-ial recognition software aims to protect trout – EurekAlert

New research focused on brook trout is using artificial intelligence to identify individual fish, with the goal of building population models that track trout health and habitat changes.

This groundbreaking use of AI, a collaboration between data scientists at the University of Virginia and the U.S. Geological Survey, will create a more efficient and accurate way to track trout by using fish-ial recognition software.

Researchers are classifying fish in both controlled and natural environments in West Virginia and Massachusetts, building a unique database that has the potential to save the taxpayer millions of dollars and advance protective measures for trout and streams. They hope to engage anglers as boots-on-the-ground citizen scientists to assist with the project, creating an interactive application where fishermen can upload images of fish and participate in protecting the health of brook trout and preserving their natural environment.

Fish biologists have been studying climate change and conservation for decades, and tracking fish is not new. Previously, however, scientists have had to use markers or injections to identify individual fish, methods that are invasive, require minor surgery, and do not work on small fish. "The new frontier is individual recognition using AI technology," said Nathaniel Hitt, a research fish biologist with the U.S. Geological Survey.

The project originated during work at Shenandoah National Park by researchers from the U.S. Geological Survey's Ecological Science Center in West Virginia. "We were using video sampling in stream pools to estimate the abundance of brook trout. We would take underwater video and have human observers count fish," said Hitt. "We actually crowdsourced this to schools across the nation."

The success of the crowdsourcing got the fish biologists thinking about how they could automate the process. With the rise of AI and computer science applications like facial recognition software, they thought, why not apply it to fish? Brook trout have unique identifying markings, making them the perfect fish species to test this theory.

Brook trout are unique in that they are the only native trout of Appalachia and have been around for millions of years. Anglers for generations have come to love the fish and are invested in protecting its future. Brook trout have ecological importance as well, according to Hitt: "They're the canary in the coal mine for climate change."

Ben Letcher, a research ecologist at the Conte Research Laboratory in Turners Falls, Mass., who is partnering on the project, explains: "Each state in New England has cold-water criteria, and some states use the presence of a brook trout to identify a cold-water stream. Cold-water streams get special protections, so knowing where the trout are now and where they will be in the future is important for land protection and conservation."

To build a database of images large enough to be useful for prediction models, the researchers are capturing fish images in both controlled fisheries in West Virginia and in the wild streams of western Massachusetts, using different methods while working toward the same goal.

In Massachusetts, the team uses an electrofisher backpack to collect fish. They then place the caught fish in a bucket, anesthetize a few at a time, and then take measurements and photographs before releasing them back into the stream from which they came. In West Virginia, researchers have used GoPro cameras to collect images of fish while they swim in tanks. The team then uses anesthetics to capture measurements and take additional photographs.

All of those images are then shared with data scientists at the University of Virginia, who feed them into an image processing pipeline that identifies individual fish features. The team, led by Sheng Li, an assistant professor of data science at UVA, then trains the model to improve image recognition.

"It's quite challenging," said Li. "You see a large variation in fish appearance such as body size and other changes over time. We have had to develop multiple AI methods to improve the recognition of each individual fish."
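The team's actual pipeline is not public here, but the core idea of re-identifying individuals can be sketched as matching image embeddings by nearest neighbour. Everything below, including the random stand-in "embeddings" that a neural network would normally extract from fish photos, is illustrative:

```python
import numpy as np

def cosine_match(query, gallery):
    """Return the index of the gallery embedding most similar to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return int(np.argmax(g @ q))  # highest cosine similarity wins

rng = np.random.default_rng(0)
# Stand-ins for learned embeddings: 5 known individual fish,
# plus one new photo of individual 3 (same fish, slight variation).
gallery = rng.normal(size=(5, 128))
query = gallery[3] + rng.normal(scale=0.05, size=128)

print(cosine_match(query, gallery))  # matches individual 3
```

A real system would replace the random vectors with embeddings from a model trained on the spot patterns Li's team describes, but the matching step works the same way.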

The data scientists rely heavily on the images provided by the on-the-ground fish biologists and ecologists wading into streams, catching fish, and carefully and categorically photographing them.

Everyone involved credits the project's success to interdisciplinary collaboration. Fish experts like Hitt and Letcher work with computer and data scientists like Li and others, using new techniques to solve old, persistent problems.

Hitt believes the tools they are developing using AI could have applications toward research on any animal with spots. "We envision this transforming fish biology globally."

But the challenge of amassing a large enough and current database of images remains, which is why the team hopes to appeal to citizen scientists and anglers to be active participants.

By letting anglers capture and upload photos of caught fish with their phones, a future interactive database has the potential to identify a specific fish, trace its tracking history, and feed in up-to-date information in real time. The U.S. Geological Survey is working with fishing expedition companies to test out this new method of collecting data.

Letcher predicts a phone application could be created where an angler takes a photo of caught fish; uploads it to an open, shared database; and learns its exact identification and history. "This could be very valuable for collecting scientific information but also to engage anglers in new ways," he said.

"Using images, we can create individual fish ID and could monitor population trajectories," said Hitt, "but this also changes the relationship between anglers and these natural resources. It fosters a deeper sense of stewardship and connection to the streams and rivers."

When speaking of brook trout, Letcher and his colleagues become almost reverent. "You're taking something so ancient, so deeply rooted in the evolution of our planet, and developing a new appreciation and respect for it."



Giving students the computational chops to tackle 21st-century … – MIT News

Graduate student Nikasha Patel '22 is using artificial intelligence to build a computational model of how infants learn to walk, which could help robots acquire motor skills in a similar fashion.

Her research, which sits at the intersection of reinforcement learning and motor learning, uses tools and techniques from computer science to study the brain and human cognition.

It's an area of research she wasn't aware of before she arrived at MIT in the fall of 2018, and one Patel likely wouldn't have considered if she hadn't enrolled in a newly launched blended major, Course 6-9: Computation and Cognition, the following spring.

Patel was drawn to the flexibility offered by Course 6-9, which enabled her to take a variety of courses from the brain and cognitive sciences major (Course 9) and the computer science major (Course 6). For instance, she took a class on neural computation and a class on algorithms at the same time, which helped her better understand some of the computational approaches to brain science she is currently using in her research.

After earning her undergraduate degree last spring, Patel enrolled in the 6-9 master's program and is now pursuing a PhD in computation and cognition. While a PhD wasn't initially on her radar, the blended major opened her eyes to unique opportunities in cross-disciplinary research. In the future, she hopes to study motor control and the computational building blocks that our brains use for movement.

"Looking back on my experience at MIT, being in Course 6-9 really led me up to this moment. You can't just think of the world through one lens. You need to have both perspectives so you can tackle these complex problems together," she says.

Blending disciplines

The Department of Brain and Cognitive Sciences' Course 6-9 is one of four blended majors available through the MIT Schwarzman College of Computing. Each of the majors is offered jointly by the Department of Electrical Engineering and Computer Science and a different MIT department. Course 6-7, Computer Science and Molecular Biology, is offered with the Department of Biology; Course 6-14, Computer Science, Economics, and Data Science, is offered with the Department of Economics; and Course 11-6, Urban Science and Planning with Computer Science, is offered with the Department of Urban Studies and Planning.

Each major is designed to give students a solid grounding in computational fundamentals, such as coding, algorithms, and ethical AI, while equipping them to tackle hard problems in different fields like neurobiology, economics, or urban design, using tools and insights from the realm of computer science.

The four majors, all launched between 2017 and 2019, have grown rapidly and now encompass about 360 undergraduates, or roughly 8 percent of MITs total undergraduate enrollment.

With so much focus on generative AI and machine learning in many disciplines, even those not traditionally associated with computer science, it is no surprise to associate professor Mehrdad Jazayeri that blended majors, and Course 6-9 in particular, have grown so rapidly. Course 6-9 launched with 40 students and has since quadrupled its enrollment.

"Many students who come to MIT are enamored with machine-learning tools and techniques, so the opportunity to utilize those skills in a field like neurobiology is a great opportunity for students with varied interests," says Jazayeri, who is also director of education for the Department of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research.

"It is pretty clear that new developments and insights in industry and technology will be heavily dependent on computational power. Fields related to the human mind are no different from that, from the study of neurodegenerative diseases, to research into child development, to understanding how marketing affects the human psyche," he says.

Computation to improve medicine

Using the power of computer science to make an impact in biological research inspired senior Charvi Sharma to major in Course 6-7.

Though she was interested in medicine from a young age, it wasn't until she came to MIT that she began to explore the role computation could play in medical care.

Coming to college with interests in both computer science and biology, Sharma considered a double major; however, she soon realized that what really interested her was the intersection of the two disciplines, and Course 6-7 was a perfect fit.

Sharma, who is planning to attend medical school, sees computer science and medicine dovetail through her work as an undergraduate researcher at MIT's Koch Institute for Cancer Research. She and her fellow researchers seek to understand how signaling pathways contribute to a cell's ability to escape from cell cycle arrest, or the inability of a cell to continue dividing, after DNA damage. Their work could ultimately lead to improved cancer treatments.

The data science and analysis skills she has honed through computer science courses help her understand and interpret the results of her research. She expects those same skills will prove useful in her future career as a physician.

"A lot of the tools used in medicine do require some knowledge of technology. But more so than the technical skills that I've learned through my computer science foundation, I think the computational mindset, the problem solving and pattern recognition, will be incredibly helpful in treatment and diagnosis as a physician," she says.

AI for better cities

While biology and medicine are areas where machine learning is playing an increasing role, urban planning is another field that is rapidly becoming dependent on big data and the use of AI.

Interested in learning how computation could enhance urban planning, senior Kwesi Afrifa decided to apply to MIT after reading about the blended major Course 11-6, urban sciences and planning with computer science.

His experiences growing up in the Ghanaian capital of Accra, situated in the midst of a rapidly growing and sprawling metro area of about 5.5 million people, convinced Afrifa that data can be used to shape urban environments in a way that would make them more livable for residents.

The combination of fundamentals from Course 6, like software engineering and data science, with important concepts from urban planning, such as equity and environmental management, has helped him understand the importance of working with communities to create AI-driven software tools in an ethical manner for responsible development.

"We can't just be the smart engineers from MIT who come in and tell people what to do. Instead, we need to understand that communities have knowledge about the issues they face, and tools from tech and planning are a way to enhance their development in their own way," he says.

As an undergraduate researcher, Afrifa has been working on tools for pedestrian impact analysis, which has shown him how ideas from planning, such as spatial analysis and mapping, and software engineering techniques from computer science can build off one another.

Ultimately, he hopes the software tools he creates enable planners, policymakers, and community members to make faster progress at reshaping neighborhoods, towns, and cities so they meet the needs of the people who live and work there.


Deploying Your Machine Learning Model to Production in the Cloud – KDnuggets

AWS, or Amazon Web Services, is a cloud computing service used in many businesses for storage, analytics, applications, deployment services, and much more. It's a platform that utilizes several services to support businesses in a serverless way with pay-as-you-go pricing.

Machine-learning modeling is one of the activities that AWS supports: its services cover the whole workflow, from developing a model to putting it into production. AWS has shown versatility, which is essential for any business that needs scalability and speed.

This article will discuss deploying a machine learning model in the AWS cloud into production. How can we do that? Let's explore further.

Before you start this tutorial, you need to create an AWS account, as we will need it to access all the AWS services. I assume that the reader will use the free tier to follow this article. Additionally, I assume the reader already knows how to use the Python programming language and has basic knowledge of machine learning. Also, we will focus on the model deployment part and will not concentrate on other aspects of data science activity, such as data preprocessing and model evaluation.

With that in mind, we will start our journey of deploying your machine learning model in the AWS Cloud services.

In this tutorial, we will develop a machine-learning model to predict churn from the given data. The training dataset is acquired from Kaggle, which you can download here.

After we have acquired the dataset, we would create an S3 bucket to store the dataset. Search the S3 in the AWS services and make the bucket.

In this article, I named the bucket telecom-churn-dataset and located it in the Singapore region. You can change these if you want, but let's go with this setup for now.

After you have finished creating the bucket and uploading the data into it, we will go to the AWS SageMaker service. In this service, we will use the Studio as our working environment. If you have never used Studio, let's create a domain and user before proceeding further.

First, choose the Domains within the Amazon SageMaker Admin configurations.

In the Domains, you would see a lot of buttons to select. In this screen, select the Create domain button.

Choose the quick setup if you want to speed up the creation process. After it's finished, you should see a new domain created in the dashboard. Select the new domain you just created and then click the Add user button.

Next, you should name the user profile according to your preferences. For the execution role, you can leave it on default for now, as it's the one that was created during the Domain creation process.

Just click next until the canvas setting. In this section, I turn off several settings that we don't need, such as Time Series Forecasting.

After everything is set, go to the studio selection and select the Open studio button with the user name you just created.

Inside the Studio, navigate to the sidebar that looks like a folder icon and create a new notebook there. We can leave the settings at their defaults, like the image below.

With the new notebook, we would work to create a churn prediction model and deploy it as an API endpoint for inference that we can use in production.

First, let's import the necessary packages and read the churn data.

Next, we would split the data above into training data and testing data with the following code.

We set the test data to be 30% of the original data. With our data split, we would upload them back into the S3 bucket.
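The code snippets themselves did not survive in this copy of the article, but a minimal local sketch of the read-and-split steps might look like the following. The synthetic columns stand in for the real Kaggle churn schema, and the commented-out upload call (using the article's telecom-churn-dataset bucket) shows where the splits would go back to S3:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for the Kaggle churn CSV; the real file would be loaded with
# pd.read_csv("churn.csv"). Column names here are illustrative only.
df = pd.DataFrame({
    "tenure": range(100),
    "monthly_charges": [20 + i * 0.5 for i in range(100)],
    "churn": [i % 2 for i in range(100)],
})

# 70/30 train/test split, as described in the tutorial.
train, test = train_test_split(df, test_size=0.3, random_state=42)
train.to_csv("train.csv", index=False)
test.to_csv("test.csv", index=False)

# Inside SageMaker Studio, the splits would then be pushed to S3, e.g.:
# import sagemaker
# sagemaker.Session().upload_data("train.csv", bucket="telecom-churn-dataset")
print(len(train), len(test))
```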

You can see the data inside your S3 bucket, which currently consists of three different datasets.

With our dataset ready, we would now develop a churn prediction model and deploy it. In AWS, we often use a script training method for machine-learning training. That's why we would develop a script before starting the training.

For the next step, we need to create an additional Python file, which I called train.py, in the same folder.

Inside this file, we would set our model development process to create the churn model. For this tutorial, I would adopt some code from Ram Vegiraju.

First, we would import all the necessary packages for developing the model.

Next, we would use the parser method to control the variables that we can input into our training process. The overall code that we would put in our script to train our model is in the code below.
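The script is not reproduced in this copy; a typical SageMaker-style parser, which reads hyperparameters from the command line and data paths from the SM_* environment variables that SageMaker injects, might look like this (the --estimators flag name and its default are assumptions, not the article's exact values):

```python
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters passed in by the SKLearn estimator (name is illustrative).
    parser.add_argument("--estimators", type=int, default=15)
    # Directories that SageMaker populates via environment variables.
    parser.add_argument("--model-dir", type=str,
                        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument("--train", type=str,
                        default=os.environ.get("SM_CHANNEL_TRAIN",
                                               "/opt/ml/input/data/train"))
    return parser.parse_args(argv)

# Parsing an empty argument list falls back to the defaults.
args = parse_args([])
print(args.estimators, args.model_dir)
```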

Lastly, we need to put four different functions that SageMaker requires to make inferences: model_fn, input_fn, output_fn, and predict_fn.
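The four hooks are also missing from this copy. For a scikit-learn model saved with joblib, minimal versions could look like the following, with a local round-trip standing in for a real endpoint; the CSV-in/CSV-out convention and the model.joblib filename are assumptions:

```python
import os
import tempfile
from io import StringIO

import joblib
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def model_fn(model_dir):
    # SageMaker calls this once to load the artifact written by train.py.
    return joblib.load(os.path.join(model_dir, "model.joblib"))

def input_fn(request_body, request_content_type):
    # Deserialize the request payload; here we accept CSV rows of features.
    if request_content_type == "text/csv":
        return pd.read_csv(StringIO(request_body), header=None)
    raise ValueError(f"Unsupported content type: {request_content_type}")

def predict_fn(input_data, model):
    # Apply the loaded model to the deserialized input.
    return model.predict(input_data)

def output_fn(prediction, response_content_type):
    # Serialize predictions for the response body.
    return ",".join(str(p) for p in np.asarray(prediction).ravel())

# Local round-trip on a toy model, standing in for the real churn model:
model_dir = tempfile.mkdtemp()
toy = RandomForestClassifier(n_estimators=5, random_state=0)
toy.fit([[0, 0], [1, 1], [0, 1], [1, 0]], [0, 1, 0, 1])
joblib.dump(toy, os.path.join(model_dir, "model.joblib"))

model = model_fn(model_dir)
rows = input_fn("0,0\n1,1", "text/csv")
result = output_fn(predict_fn(rows, model), "text/csv")
print(result)  # two comma-separated class predictions
```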

With our script ready, we would run the training process. In the next step, we would pass the script we created above into the SKLearn estimator. This estimator is a SageMaker object that handles the entire training process; we would only need to pass all the parameters, similar to the code below.
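The estimator configuration is not shown in this copy; a sketch of what it might look like follows. The instance type, framework version, and hyperparameter names are assumptions, not the article's exact values, and running it requires an active SageMaker session:

```python
import sagemaker
from sagemaker.sklearn import SKLearn

sklearn_estimator = SKLearn(
    entry_point="train.py",             # the training script created above
    role=sagemaker.get_execution_role(),
    instance_type="ml.m5.large",
    framework_version="1.0-1",
    hyperparameters={"estimators": 15},
)

# Point the estimator at the CSVs uploaded to S3 and start training.
sklearn_estimator.fit({"train": "s3://telecom-churn-dataset/train.csv"})
```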

If the training is successful, you will end up with the following report.

If you want to check the Docker image for the SKLearn training and your model artifact location, you can access them using the following code.

With the model in place, we would then deploy the model into an API endpoint that we can use for prediction. To do that, we can use the following code.

If the deployment is successful, the model endpoint is created, and you can access it to create a prediction. You can also see the endpoint in the SageMaker dashboard.

You can now make predictions with this endpoint. To do that, you can test the endpoint with the following code.

Congratulations! You have now successfully deployed your model in the AWS Cloud. After you have finished the testing process, don't forget to clean up the endpoint. You can use the following code to do that.
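The deploy, predict, and clean-up calls referenced above are likewise missing from this copy. Assuming the trained estimator from the previous step is named sklearn_estimator, a sketch might be (endpoint instance type and the sample payload are illustrative, and this requires an AWS session):

```python
from sagemaker.serializers import CSVSerializer

# Deploy the trained estimator behind a real-time inference endpoint.
predictor = sklearn_estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    serializer=CSVSerializer(),
)

# Send one CSV-encoded row of features and read back the prediction.
print(predictor.predict("0,1,45.3,12"))

# Tear the endpoint down once testing is finished to stop the billing.
predictor.delete_endpoint()
```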

Don't forget to shut down the instance you used and clean up the S3 storage if you don't need it anymore.

For further reading, you can read more about the SKLearn estimator and about Batch Transform inference if you prefer not to maintain a persistent endpoint.

AWS Cloud is a multi-purpose platform that many companies use to support their business. One of its frequently used services is data analytics, especially putting models into production. In this article, we learned to use AWS SageMaker and how to deploy a model to an endpoint.

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media.

Visit link:

Deploying Your Machine Learning Model to Production in the Cloud - KDnuggets

Read More..

Duality Technologies Joins AWS Partner Network and Launches … – PR Newswire

Duality leverages modern PETs and privacy-preserving AI to deliver faster and more secure data collaboration for healthcare, financial services, government, and more.

HOBOKEN, N.J., Sept. 28, 2023 /PRNewswire/ -- Duality Technologies, the leader in secure data collaboration for enterprises and government agencies, today announced it has joined the Amazon Web Services (AWS) Partner Network (APN) and launched its secure data collaboration platform in AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.

Duality protects the intellectual property of AI/ML models and the security and privacy of the data used for training.

The APN is a global community of AWS Partners that leverage programs, expertise, and resources to build, market, and sell customer offerings in diverse global markets. Duality Technologies underwent the comprehensive AWS Foundational Technical Review (FTR) to certify the enterprise readiness of its platform. As an APN member, Duality allows AWS users to securely collaborate on data without requiring direct access to the raw data, supporting privacy regulations and unlocking additional data sources not previously permitted. Duality is also being used to train models while protecting the intellectual property (IP) of the artificial intelligence and machine learning (AI/ML) models and maintaining the security and privacy of the protected health or personally identifiable information (PII/PHI) used for training and model personalization.

"Duality's inclusion in the APN and its availability in AWS Marketplace means AWS customers can more easily collaborate on data science projects utilizing sensitive and regulated data across their business ecosystem from a single location within AWS," said Adi Hirschstein, VP of Product at Duality Technologies. "This adds privacy and security guardrails required by various regulated industries and organizations to leverage AWS services like AWS Nitro Enclaves and Amazon SageMaker. Not only that, but AWS customers will find that by making it easier to work with sensitive data, these integrations will accelerate data-driven innovations and growth strategies."

Duality's enterprise-ready secure data collaboration platform operationalizes Privacy-Enhancing Technologies (PETs) to empower users to unleash the full value of collaborative data science and AI while minimizing risk. Organizations can securely share, analyze, and enrich sensitive data to gain business value while raw data remains encrypted throughout the entire data science lifecycle, minimizing the risk of exposure and ensuring compliance with data protection and industry regulations. Duality's uniquely secure solution is made possible via leading-edge cryptographic and security technologies.

Duality has integrated with both Amazon SageMaker and AWS Nitro Enclaves to enable seamless integration with AWS services. The integration with AWS Nitro Enclaves expands the privacy-enhancing capabilities of Duality's platform, allowing organizations to collaborate on any data type with any type of model. The integration with Amazon SageMaker now allows companies to benefit from AWS model outputs using data that would otherwise be off-limits due to IP/PII/PHI in the data set.

"As an unequivocal global leader in making privacy technology real and practical, we're thrilled to bring the power of secure data collaboration to AWS. Combining Duality and AWS allows data-first organizations to securely apply data science and machine learning on sensitive data, further breaking down silos that exist within and between organizations," said Prof. Kurt Rohloff, chief technical officer and co-founder of Duality.

AWS customers can utilize Duality's secure collaboration solution today through AWS Marketplace.

As an APN member, Duality joins a global network of 100,000 AWS Partners from more than 150 countries working with AWS to provide innovative solutions, solve technical challenges, and deliver value to mutual customers.

About Duality Technologies

Duality is the leader in privacy-enhanced secure data collaboration, empowering organizations worldwide to maximize the value of their data without compromising on privacy or regulatory compliance. Founded and led by world-renowned cryptographers and data scientists, Duality operationalizes privacy enhancing technologies (PETs) to accelerate data insights by enabling analysis and AI on sensitive data while protecting data privacy, compliance, and valuable IP. A World Economic Forum (WEF) Tech Pioneer and a Gartner Cool Vendor, Duality is recognized by numerous industry awards, including the Fast Company 2023 World Changing Ideas award, the 2023 CyberTech 100 Most Innovative Companies list, 2022 CB Insights' AI 100, the 2022 RegTech 100 Awards, and the AIFinTech100 2022 Awards. Learn more.

CONTACT: Derek Wood, [emailprotected], +1 917-310-1175, [emailprotected]

SOURCE Duality Technologies, Inc.

More:

Duality Technologies Joins AWS Partner Network and Launches ... - PR Newswire

Read More..

With the summer of data in the rear-view mirror, here are the key … – SiliconANGLE News

There's no talking about the Summer of Data without including a significant addition about the year of artificial intelligence: the two are inextricably linked and will remain so in the months to come.

That's because the rise of AI has led to the need for incredible amounts of data, and projections indicate that data centers are set to become the world's largest energy consumers, rising from 3% of total electricity use in 2017 to 4.5% by 2025. Indeed, more companies are seeing their data needs grow on a yearly basis, leading to the characterization that every company is a data company.

With an eye on approaching this challenge, various technological advancements have rolled out in 2023, driven in part by the necessity for data storage innovation. Next-generation storage solutions are estimated to be valued at more than $150 billion by 2032, according to a recent study from Global Market Insights Inc.

It's also no surprise that every vendor offering data-related solutions is striving to secure a share of what's estimated to be a total addressable market in the tens of billions of dollars when it comes to data platforms, noted Rob Strechay, lead analyst for the Collective from theCUBE, in an analysis for SiliconANGLE.

"The opportunities for storage platform vendors and data platform vendors lie in integrating data platforms as-a-service into their storage offerings," Strechay wrote.

With all of these changes and demands in mind, some of the major players in data, including Snowflake Inc., MongoDB Inc., VAST Data Inc. and Databricks Inc., spent the Summer of Data unveiling their strategies as data becomes even more important in support of AI's evolution.

Though all of these companies and others like them are responding to the same challenges, their solutions differ. That's why, with the Summer of Data in the rearview, it's worth recapping what we've learned so far and where these companies could be heading next.

This feature is part of SiliconANGLE Media's ongoing series with theCUBE exploring the latest developments in the data storage and AI market.

Before this year's Snowflake Summit, the company's stated target of $10 billion in revenue for fiscal year 2028 left plenty of open questions about how it might get there. Over the course of this year, meanwhile, theCUBE has produced a number of in-depth analyses, laying out a mental model for the future of data platforms.

In his post-summit analysis, theCUBE analyst Dave Vellante discussed the vision outlined by Snowflake during this year's Snowflake Summit, from its keynote presentations to product announcements. The company's intention was clearly to be the number-one platform on which this new breed of data applications will be built, according to Vellante.

"This week's Snowflake Summit further confirmed our expectations with a strong top-line message of 'All Data/All Workloads,' and a technical foundation that supports an expanded number of ways to access data," Vellante wrote. "Squinting through the messaging and firehose of product announcements, we believe Snowflake's core differentiation is its emerging ability to be a complete platform for data applications. Just about all competitors either analyze data or manage data."

Other companies have also been weighing their strategies as the world of data storage evolves and as data and AI converge. For VAST, that looks like an evolution beyond being a storage company. In early August, VAST announced a new, global data infrastructure for AI called the VAST Data Platform, with an aim to unify data storage, database and virtualized compute engine services in a scalable system.

"By bringing together structured and unstructured data in a high-performance, globally distributed namespace with real-time analysis, VAST is not only tackling fundamental DBMS challenges of data access and latency but also offering genuinely disruptive data infrastructure that provides the foundation organizations need to solve the problems they haven't yet attempted," Market Strategy analyst Merv Adrian said at the time of the announcement.

Meanwhile, the realities of modern business, with challenges such as the skills shortage, mean developers must be kept happy. That has been good news for companies such as cloud database provider MongoDB.

The company recently saw its stock soar on blowout fiscal first-quarter earnings results, which posed an interesting question to watch ahead of MongoDB .local NYC in June: Was AI contributing to the surge in stock price?

DevOps democratization has surged over the past 20 years, but AI has posed a new wrinkle. AI isn't the only thing to consider as developers seek to go next-level with their data, according to Mark Porter, chief technology officer of MongoDB, during an interview with theCUBE during MongoDB .local NYC.

"It is currently the thing that's really exciting, and being able to build great apps that do great things with your core data is always going to be important," he said. "But what's happening is people are enhancing their apps with AI."

With hundreds of people using MongoDB as the foundation of their AI apps, Porter pointed to the company's developer data platform as key to this arrangement.

Meanwhile, Databricks recently acquired Okera Inc., a data governance platform with a focus on AI, with a stated goal of expanding its own governance and compliance capabilities for machine learning and large language model AIs. Customers used to control access to their data using simple data controls that only needed to address one plane, such as a database.

"The rise of AI, in particular machine learning models and LLMs, is making this approach insufficient," the Databricks team, including Chief Executive Officer Ali Ghodsi, explained in the announcement.

As an industry leader, Databricks is being watched closely by many, including Vellante. The big question for the company this summer was how it would execute its critical strategic decisions as hype and confusion continued to swirl around the world of AI.

"Emerging customer data requirements and market forces are conspiring in a way that we believe will cause modern data platform players generally and Databricks specifically to make some key directional decisions and perhaps even reinvent themselves," Vellante wrote in an edition of his Breaking Analysis series.

After the Data + AI Summit, those connections began to come into better view. In a new world where data is influenced by broader trends in AI, Databricks is back in its wheelhouse, according to Doug Henschen, vice president and principal analyst at Constellation Research Inc.

"I think generative AI, for the last three years, they've been building up the warehouse side of their Lakehouse and making a case," he said. "All this time data science has been their wheelhouse, and their strength and their customers are here, while others are making announcements of previews that'll help eventually down the road on AI. This is where it's really happening, and they're building generative models today."

The Summer of Data may be over, but it's clear that the evolution of AI will continue to play into the strategy of major players in data for many months to come. That will lead to solutions such as the adoption of next-generation storage and its projected valuation at over $150 billion by 2032.

Though the AI-powered hybrid-multi-super cloud comes with various demands on data, companies such as those mentioned above have laid out their plans during the Summer of Data, and the year ahead will be critical as those same companies are tasked to execute. So, too, will various data platforms continue to evolve.

"Most traditional applications are built on compute, networking and storage infrastructure, but the future will see applications program the real world," George Gilbert, a contributor to theCUBE, wrote in a recent analysis.

"In that world, data-driven digital twins representing real-world people, places, things and activities will be on the platform," Gilbert wrote, which explains the stakes at hand.

"On balance, we believe that the distributed nature of data originating from real-world things, combined with the desire for real-time actions, will further stress existing data platforms. We expect a variety of approaches will emerge to address future data challenges," he wrote. "These will come at the problem starting from a traditional data management perspective (such as Snowflake), a data science view of the world (such as Databricks) and core infrastructure prowess (cloud/infrastructure-as-a-service, compute and storage vendors)."

Clearly, the challenges around data remain the same as AI continues its meteoric rise. The upcoming months will be critical as the needs of companies in this new world continue to be of paramount importance.


See the original post here:

With the summer of data in the rear-view mirror, here are the key ... - SiliconANGLE News

Read More..

FS Insight Weekly Roadmap: Q4 Shows Promise After ‘Vortex of Pain’ – RealMoney

Ken Xuan, CFA, FRM

Head of Data Science Research

[Note: Fundstrat Head of Research Tom Lee was on a well-deserved vacation this week. In his absence, Head of Data Science Ken Xuan and the Fundstrat Team present their views on the week's developments.]

The most important data point over the past two days was Q2 GDP which came in at 2.1% vs 2.2% (expected). This lower than expected report partially caused a risk-on rally with the S&P 500 ending Thursday +0.59%. Still, markets are nervous about looming headline risks, notably a potential government shutdown over the next few weeks. Even though equities rallied the past two days, the equity put/call ratio surged to 1.41 Wednesday, underscoring the "nervousness" in markets. Forward 1M/3M returns are strong when such readings have occurred since 1995, so we see this as more constructive to markets than alarming. Still, the most important factor for markets in the coming months will arguably be the trajectory of inflation.

We believe the latest PCE data support our thesis that inflation remains on a glidepath lower. Although many cite the rise in headline (due to rise in gas prices) as a reason to be "wary" about the PCE release, Chair Powell himself has admitted that the short-term volatility of gasoline will not influence the Fed's actions.

----------

FS Insight Head of Technical Strategy

* Reversal likely could lead to a retest and minor break of lows next week.

* Treasury to Equity correlation remains quite strong.

* September's decline likely could lead to 4Q gains given seasonal tendencies.

----------

Head of Crypto Strategy

* On Thursday, equities and crypto markets rebounded, buoyed by anticipated SEC approval of ETH futures ETFs.

* While an ETH ETF should induce a short-term rally, its intermediate-term impact remains uncertain and will be monitored alongside other indicators (flows). These developments, in the context of a favorable Q4 macro-outlook, enhance the risk/reward profile for Ethereum-based assets like ETHE.

* Amid an upcoming government shutdown that has sped up ETH futures ETF approvals and delayed spot Bitcoin ETF decisions, the odds of a deferral until January for a BTC ETF have risen. However, a Q4 approval for spot Bitcoin ETFs remains our base case.

* Recent shifts in Bitcoin's correlation with macro variables and its outperformance compared to the Nasdaq 100 raise questions about whether crypto is signaling a local or longer-term bottom in asset prices.

* As the typically negative seasonal influences on crypto for August and September wane, we are approaching a period generally considered to be more bullish for crypto markets.

* Core Strategy - Despite soaring rates and volatile asset prices, we believe it's prudent to adopt a more constructive stance on crypto prices as we enter Q4. While we await confirmation from flows data, we think it's right to start increasing risk exposure, particularly in the majors and Grayscale trusts, which continue to trade at a discount to NAV. We are reintroducing ETH L2 tokens to the Core Strategy allocation as well as a small allocation to SOL.

----------

FS Insight Washington Policy Strategist

* Speaker McCarthy fails to get Republican Continuing Resolution to keep the government open through the House, with 21 Republicans defying him.

* In the Senate, a bipartisan discussion for a CR continues, but House Republicans will demand immigration and border-protection provisions in exchange for their support.

* House and Senate leaders have told members to stay in DC for the weekend as the Sunday midnight deadline approaches.

----------

* The S&P 500 slipped to 4,288.05 this week, down 0.74%. The Nasdaq was nearly flat, closing at 13,219.32. Bitcoin climbed 2.36% to about $26,868.40.

* Despite high volume of noise, inflation remains the primary macro driver for markets.

* Government shutdown could put the Fed on an automatic "structural pause."

"Sometimes we stare so long at a door that is closing that we see too late the one that is open." ~ Alexander Graham Bell

Markets remain in what Fundstrat Head of Data Science Ken Xuan terms a "vortex of pain." We opened on Monday with markets continuing to digest the previous week's central bank actions (from the Fed, the Bank of England, and the Bank of Japan).

Last week's FOMC meeting led to renewed fears among many market participants that the Fed would stay "higher for longer." A continued surge in yields followed, weighing on the stock market. Fundstrat's Head of Data Science, Ken Xuan, pointed out that the central bank's quantitative tightening might also have contributed to yields climbing: during the week of September 18, the Fed reduced its balance sheet by $75B -- the largest weekly decline since QT began. Part of this was achieved through a significant selloff in Treasuries, possibly depressing prices and boosting yields.

Regardless of the reason for surging yields, Head of Technical Strategy Mark Newton began his remarks at our weekly huddle by noting that, "The biggest thing you have to understand about the equity market right now is that rates and equities are very inversely correlated and with rates moving up equities do not respond well. We've pushed up to almost 4.75% in the 10-year and so that is certainly a source of concern for the equity market."

From a technical standpoint, however, Newton argued that "there are some reasons to think that these yields are just getting too stretched. So ideally, what's going to happen is that probably as of next week, we see yields start to break, markets bounce, but then yields sort of churn. Then eventually, we peak out between October or November with a larger pullback in rates happening in the next year."

He noted as an aside that, "There are some reasons to think [Treasuries] are pretty good value here. I like (TLT) and (IEF) -- even if you don't catch the exact low, they represent a great value here not only to make money on yield but also in price appreciation."

Returning to the stock market, Newton said that "Since September 1, momentum has turned negative. We are nearly oversold based on a lot of metrics, but we are nearing pretty good support which is right near the 4200 level. I don't think we're going to get down under that. To me, the risk-reward is increasingly good." He summed it up a bit more colorfully: "For those who like to buy and hold your nose, I think you'll be rewarded between now and year end, even if it's not right away."

Although Fundstrat Head of Research Tom Lee was out this week on a much-needed vacation, Xuan and the Fundstrat team assessed the week's data. They also concluded that our constructive base case for the rest of 2023 remains intact.

Data showed apparently conflicting views on the housing market this week. "We saw pretty weak data regarding pending home sales and new home sales this week," Xuan observed. But the July Case-Shiller Home Price Index came in hot at 0.87% versus expectations of 0.70%. This was the first positive reading after four straight negative months.

Xuan and the team found that this higher reading had more to do with the "base effects" of a methodology that excluded weak July 2022 data than with any strengthening of the housing market. We expect the same effect to cause the Case-Shiller YoY figure to pivot higher in the next few months, but of the 20 cities that Case-Shiller tracks, eight (40%) have housing prices in outright deflation. This is double the 20% average (since 1988), Xuan pointed out, "so in our view, the housing market remains cool."

For stocks, "the battleground remains inflation," Xuan said. As we expected, Core PCE again showed declining inflation. YoY, Core PCE came in below 4% for the first time since 2021, and MoM came in below Street expectations at 0.1%, the lowest that figure has been since November 2020. This provided continued evidence of our assertion that inflation is on a glide path lower.

"Sentiment is actually now more bearish," Newton said. "It's taken almost two and a half months, but we've dropped from very bullish levels in July. Now we're extremely bearish on Fear and Greed. AAII is also now bearish by 13 percentage points."

Xuan also pointed out that hedge funds recently added the most short-exposure, on a week-over-week basis, since the Covid-19 pandemic -- a sign of weakening sentiment. Data from EPFR also shows equities had their largest outflows of 2023 last week.

To us, this bearish sentiment is a bullish sign.

"We're approaching seasonally a much better time," Newton told us. "When you look at the last four Septembers they were all down between 3.5 and 7%. But Q4 is anywhere up between 7% or higher. So we are coming into seasonally a very good time."

Xuan also pointed out that going back to 1950, when the S&P 500 is up >10% through July (as it was this year), but declines from July through September 23 (as it has this year), 4Q performance has always been strong -- eight out of eight times, with an average quarterly return of 7-8%. "That's a stunning 100% win ratio," he observed.

Washington Policy Strategist Tom Block has been writing about the possibility of a federal government shutdown ever since Kevin McCarthy won the role of Speaker of the House in January, and now the shutdown deadline is upon us.

Block collaborated with Ken Xuan and with Fundstrat Research Associate Alexandra Sinsheimer to look at the possible effects the shutdown might have on the market. The results are summarized in our Chart of the Week:

Of the 20 government shutdowns since 1960, the average shutdown lasted nine days, and markets on average were flat during those shutdowns. One month afterwards, markets on average were actually higher by 1.2%. The 1M median performance was +1.3%.

But there was one interesting implication. The Fed has long asserted that its policy decisions would be "data dependent." We contacted the Bureau of Labor Statistics to find out how a shutdown would affect their operations, and we were told in summary that during a shutdown, the BLS suspends data collection, processing, and dissemination. Once funding is restored, the BLS would resume normal operations and notify the public of changes to the economic releases.

Would it make sense for the Fed to make a policy decision with incomplete data? If it were sufficiently long, a government shutdown might automatically put the Fed on a "structural pause."


See the rest here:

FS Insight Weekly Roadmap: Q4 Shows Promise After 'Vortex of Pain' - RealMoney

Read More..

NETL Scientist Participates in Research Experience in Carbon … – National Energy Technology Laboratory

An NETL researcher gathered invaluable knowledge and experience by participating in the annual Research Experience in Carbon Sequestration (RECS) program, a carbon capture, utilization and storage (CCUS) education program designed to help graduate students and early career professionals expand their knowledge and grow a collaborative network.

Gail Choisser, an NETL geo-data scientist, was the latest Lab researcher to participate in the widely respected RECS program, which was founded in 2004 by the U.S. Department of Energy's (DOE) Office of Fossil Energy and Carbon Management (FECM) and NETL.

CCUS is a combination of technologies that capture, compress, transport, use and permanently store carbon dioxide (CO2) emissions from large, stationary energy and industrial facilities. The RECS program also addresses removal of CO2 from the atmosphere.

According to Choisser, the RECS program featured interactive content on a range of CCUS topics and included site tours of a power plant specifically outfitted for testing carbon capture technologies, a coal mine, a CO2 capture facility and two injection wellheads, as well as geology field exercises, live lectures, discussions and group projects.

Participants also toured the National Renewable Energy Laboratory's (NREL) Energy Systems Integration Facility (ESIF), the ION Clean Energy Facility, the Global Thermostat Direct Air Capture plant, and one of the NETL-supported CarbonSAFE storage sites in Gillette, Wyoming.

Some of the nation's leading CCUS experts from DOE National Laboratories, the energy industry, CCUS project developers and academia provide valuable input for the program each year and lead key discussions of CCUS research, development and demonstration projects, commercial deployment trends, and policy and business impacts in the field.

More than 150 applicants sought to participate in the 2023 version of RECS. Choisser was selected as one of 31 participants who converged on NREL in Denver, Colorado, to participate in the program.

In addition to supporting RECS, NETL has a distinguished history with the program. Two NETL carbon storage researchers now serve as mentors to individuals who participate in the event.

Kelly Rose, Ph.D., NETL's Science-Based Artificial Intelligence and Machine Learning Institute (SAMI) director, serves as a RECS mentor.

"With almost two decades of DOE and industry support, the CCUS industry plays a key role in reducing greenhouse gas emissions and initiating the shift to clean energy," Rose explained. "By partnering with RECS, FECM and NETL are furthering the commitment to accelerating a safe, reliable and technology-informed CCUS commercial sector."

Ale Hakala, Ph.D., is a veteran speaker for RECS and currently serves as a senior fellow for geologic and environmental systems at NETL.

"NETL is committed to the next generation of energy and environmental innovators," Hakala said. "I found the RECS program to be very effective and I'm excited to see the success of the program. RECS participants have been able to take the knowledge they gained in RECS and apply it to groundbreaking CCUS research and development."

RECS participants are graduate students or early career professionals who are based in the United States. RECS encourages people with backgrounds in geology, chemistry, hydrology, physics, engineering, natural sciences, and related fields to apply. Enrollment is limited and tuition is free.

NETL is a DOE national laboratory that drives innovation and delivers technological solutions for an environmentally sustainable and prosperous energy future. By using its world-class talent and research facilities, NETL is ensuring affordable, abundant, and reliable energy that drives a robust economy and national security, while developing technologies to manage carbon across the full life cycle, enabling environmental sustainability for all Americans.

More here:

NETL Scientist Participates in Research Experience in Carbon ... - National Energy Technology Laboratory

Read More..

Getting Started with PyTorch in 5 Steps – KDnuggets

PyTorch is a popular open-source machine learning framework based on Python and optimized for GPU-accelerated computing. Originally developed by Meta AI (then Facebook AI Research) in 2016 and now part of the Linux Foundation, PyTorch has quickly become one of the most widely used frameworks for deep learning research and applications.

Unlike frameworks such as TensorFlow 1.x, PyTorch uses dynamic computation graphs, which allow for greater flexibility and easier debugging. The key benefits of PyTorch include:

PyTorch Lightning is a lightweight wrapper built on top of PyTorch that further streamlines researcher workflows and model development. With Lightning, data scientists can focus more on designing models than on writing boilerplate code. Key advantages of Lightning include:

By combining the power and flexibility of PyTorch with the high-level APIs of Lightning, developers can quickly build scalable deep learning systems and iterate faster.

To start using PyTorch and Lightning, you'll first need to install a few prerequisites:

It's recommended to use Anaconda for setting up a Python environment for data science and deep learning workloads. Follow the steps below:

Verify that PyTorch is installed correctly by running a quick test in Python:
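A minimal check looks like this:

```python
import torch

# Build a random 3x3 tensor; success confirms the installation works
x = torch.rand(3, 3)
print(x)
```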

This will print out a random 3x3 tensor, confirming PyTorch is working properly.

With PyTorch installed, we can now install Lightning using pip:

pip install lightning

Let's confirm Lightning is set up correctly:

This should print out the installed version number.

Now we're ready to start building deep learning models.

PyTorch uses tensors, similar to NumPy arrays, as its core data structure. Tensors can be operated on by GPUs and support automatic differentiation for building neural networks.

Let's define a simple neural network for image classification:
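A sketch of such a network, sized for 32x32 RGB inputs such as CIFAR-10 (the layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small CNN: two conv layers followed by three fully connected layers."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # 3 input channels, 6 filters, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 16 feature maps of 5x5 after pooling
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)           # 10 output classes

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)                # flatten all dims except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```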

This defines a convolutional neural network with two convolutional layers and three fully connected layers for classifying 10 classes. The forward() method defines how data passes through the network.

We can now train this model on sample data using Lightning.

Lightning provides a LightningModule class to encapsulate PyTorch model code and the training loop boilerplate. Let's convert our model:

The training_step() defines the forward pass and loss calculation. We configure an Adam optimizer with learning rate 0.02.

Now we can train this model easily:
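Training might look like the following sketch; the dummy tensors stand in for a real dataset, and lit_model is an assumed name for the LightningModule converted in the previous step:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

# Dummy data stands in for a real dataset of 32x32 RGB images
X = torch.rand(32, 3, 32, 32)
y = torch.randint(0, 10, (32,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=8)

trainer = L.Trainer(max_epochs=10)
trainer.fit(lit_model, train_loader)   # lit_model: the LightningModule converted above
```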

The Trainer handles epoch looping, validation and logging automatically. We can evaluate the model on test data:
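One call runs the test loop (a sketch; it assumes the module defines a test_step and that test_loader is a prepared DataLoader):

```python
# Run the test loop and report logged metrics
trainer.test(lit_model, dataloaders=test_loader)
```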

For comparison, here is the network and training loop code in pure PyTorch:
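A representative hand-written loop (again on random stand-in data, with a simple stand-in network) shows the boilerplate Lightning removes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
loader = DataLoader(TensorDataset(torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))), batch_size=16)

for epoch in range(1):             # epoch loop you otherwise write yourself
    for x, y in loader:            # batch loop
        optimizer.zero_grad()      # reset gradients
        loss = F.cross_entropy(model(x), y)
        loss.backward()            # backpropagate
        optimizer.step()           # update weights
```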

Lightning makes PyTorch model development incredibly fast and intuitive.

Lightning provides many built-in capabilities for hyperparameter tuning, preventing overfitting, and model management.

We can optimize hyperparameters like learning rate using Lightning's tuner module:

This runs a learning-rate range test: it sweeps the learning rate over a span of values during a short training run and suggests the value where the loss decreases fastest.

Strategies like dropout layers and early stopping can reduce overfitting:

Lightning makes it simple to save and reload models:

This preserves the full model state and hyperparameters.

Both PyTorch and PyTorch Lightning are powerful libraries for deep learning, but they serve different purposes and offer unique features. While PyTorch provides the foundational blocks for designing and implementing deep learning models, PyTorch Lightning aims to simplify the repetitive parts of model training, thereby accelerating the development process.

Here is a summary of the key differences between PyTorch and PyTorch Lightning:

- Training loop: written by hand in PyTorch; handled by the Trainer in Lightning
- Logging and checkpointing: manual in PyTorch; automated in Lightning
- Distributed and multi-GPU training: significant extra code in PyTorch; largely configuration in Lightning
- Flexibility: full low-level control in PyTorch; the same control available in Lightning by overriding hooks when needed

PyTorch is renowned for its flexibility, particularly with dynamic computation graphs, which is excellent for research and experimentation. However, this flexibility often comes at the cost of writing more boilerplate code, especially for the training loop, distributed training, and hyperparameter tuning. On the other hand, PyTorch Lightning abstracts away much of this boilerplate while still allowing full customization and access to the lower-level PyTorch APIs when needed.

If you're starting a project from scratch or conducting complex experiments, PyTorch Lightning can save you a lot of time. The LightningModule class streamlines the training process, automates logging, and even simplifies distributed training. This allows you to focus more on your model architecture and less on the repetitive aspects of model training and validation.

In summary, PyTorch offers more granular control and is excellent for researchers who need that level of detail. PyTorch Lightning, however, is designed to make the research-to-production cycle smoother and faster, without taking away the power and flexibility that PyTorch provides. Whether you choose PyTorch or PyTorch Lightning will depend on your specific needs, but the good news is that you can easily switch between the two or even use them in tandem for different parts of your project.

In this article, we covered the basics of using PyTorch and PyTorch Lightning for deep learning:

- Installing PyTorch and Lightning in a conda environment
- Working with tensors and defining a convolutional neural network
- Wrapping model code in a LightningModule and training with the Trainer
- Tuning hyperparameters, reducing overfitting, and saving and reloading models

With these foundations you can start building and training advanced models like CNNs, RNNs, GANs and more. The active open source community also offers Lightning support and additions like Bolts, a library of community-contributed components, models, and optimizations.

Happy deep learning!

Matthew Mayo (@mattmayo13) holds a Master's degree in computer science and a graduate diploma in data mining. As Editor-in-Chief of KDnuggets, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.

Continued here:

Getting Started with PyTorch in 5 Steps - KDnuggets


Tredence bags 2 Gold at Brandon Hall Group Awards for Innovation … – PR Newswire

SAN JOSE, Calif., Sept. 28, 2023 /PRNewswire/ -- Tredence, a leading Data Science and Artificial Intelligence (AI) solutions company, announces its recent success in winning two esteemed awards at the 2023 Brandon Hall Group Human Capital Management (HCM) Excellence Awards. These recognitions were earned in the Learning & Development and Talent Management categories, highlighting Tredence's commitment to the advancement of a skilled and empowered workforce that drives excellence and innovation.

In the face of fierce competition, Tredence received the coveted gold award in the category of Best Advance in Machine Learning and AI in Learning & Development. This accomplishment underscores Tredence's steadfast commitment to leveraging artificial intelligence (AI) and machine learning to create transformative learning experiences that yield significant organizational results.

Additionally, Tredence has been honored with the Best Unique or Innovative Talent Management Program in the Talent Management category. This recognition highlights Tredence's adeptness in devising and implementing talent management strategies that foster individual growth and organizational success.

These awards are presented by the renowned Brandon Hall Group, which is dedicated to acknowledging exceptional achievements in Learning & Development, Talent Management, and various aspects of Human Capital Management. The awards celebrate organizations that exemplify outstanding practices, innovative strategies, and measurable accomplishments in workplace learning and talent enrichment.

Commenting on the recognition, Sumit Mehra, Chief Technology Officer at Tredence, said, "At Tredence, we aspire to be a 'Learning First and Learning Always' organization. While we will continue to deliver AI solutions to customers as part of our core offerings, we also aim for our teams to immerse themselves in a culture of continuous learning. Over the last few years, we have deliberately invested in our training infrastructure, and this award is a testament to that strategy and the hard work of our L&D team. We are just getting started and hope that this is the first of many such recognitions for the team dedicated to transforming Tredence into a learning organization."

Rachel Cooke, Chief Operating Officer at Brandon Hall Group and leader of the HCM Excellence Awards program, said, "Excellence Award winners are shown to be organizations that truly value their employees and invest in them through their human capital management programs. These HCM programs have been validated as best in class for business value and the impact on the employees themselves."

About Tredence:

Tredence is a global data science solutions provider focused on solving the last-mile problem in AI. The 'last mile' is the gap between insight creation and value realization. Tredence leverages strong domain expertise, data platforms & accelerators, and strategic partnerships to provide tailored, cutting-edge solutions to its clients. Tredence is 'Great Place to Work-Certified' and a 'Leader' in the Forrester Wave: Customer Analytics Services. Tredence is 2300-plus employees strong and headquartered in San Jose, with offices in Foster City, Chicago, London, Toronto, and Bangalore. It caters to the largest companies in retail, CPG, telecom, healthcare, travel, banking, and industrials as clients. For more information, please visit www.tredence.com and follow us on LinkedIn at Tredence.

Photo: https://mma.prnewswire.com/media/2234355/Tredence_Brandon_Hall_Badges.jpg
Logo: https://mma.prnewswire.com/media/1773052/Tredence_Logo.jpg

SOURCE Tredence Inc.

See the original post:

Tredence bags 2 Gold at Brandon Hall Group Awards for Innovation ... - PR Newswire
