
Insight: What happens when a $2 million gene therapy is not enough – Reuters

LOS ANGELES, Aug 12 (Reuters) - Baby Ben Kutschke was diagnosed at three months with spinal muscular atrophy, a rare inherited disorder which is the leading genetic cause of death in infancy globally. It leaves children too weak to walk, talk, swallow or even breathe.

So when, in 2021, his parents heard about Zolgensma, a one-time therapy costing millions of dollars that promises to replace genes needed for the body to control muscles, they had high hopes.

They were disappointed.

After treatment with the $2.25 million therapy at almost eight months old, Ben was able to hold his head up for a few seconds, a significant milestone, his mother Elizabeth Kutschke told Reuters. But he did not advance to rolling over or sitting up, and after a few weeks doctors recommended the family add another drug to help him.

"I just started to worry," she said from their home in Berwyn, Illinois. "He wasn't getting worse, but the gradual progress we had seen ... was no longer happening."

Ben is one of a growing number of patients with spinal muscular atrophy (SMA) whose doctors are turning to additional drugs on top of the gene therapy, six top U.S. neurologists told Reuters.

Their experience raises broader questions around other high-cost gene therapies coming to market, sometimes after accelerated regulatory approvals, drug pricing experts said.

Zolgensma, launched in 2019 by Swiss-based healthcare group Novartis as a "potential cure" for SMA, was the most expensive drug in the world at the time.

Gene therapies work by replacing genes, the body's blueprint for its development. The gene Zolgensma delivers instructs the body to make a protein vital for muscle control.

Other SMA therapies need to be taken continuously, but Zolgensma's price was justified by hopes this revolutionary approach could beat the condition once and for all.

Zolgensma has been given to more than 3,000 children globally, with 2022 sales of $1.4 billion representing 91% of gene therapy sales worldwide, according to the IQVIA Institute for Human Data Science. In the U.S., where costs are borne by government health programs such as Medicaid as well as private insurance, IQVIA estimated Zolgensma sales totaled $434 million last year.

It has worked well for many. Novartis' data presented in March shows that depending on the timing of treatment, most patients have gone on to swallow, breathe, or even walk independently, said Sitra Tauscher-Wisniewski, vice president at Novartis Gene Therapies. Some are able to run and climb.

Three of six families interviewed by Reuters whose children received Zolgensma said they were progressing as well as hoped; Ben's was the only one to turn to another treatment.

But Novartis' data also shows almost one-third of children in an ongoing study went on to be given other drugs.

If gene therapies do fall short, it becomes harder to justify prices that researchers have argued are already poor value.

"The perception that Zolgensma is going to be a complete cure ... is not coming to fruition from the data we have seen over the last four years," said Dr. Roger Hajjar, director of the Mass General Brigham Gene & Cell Therapy Institute.

"Following the injection of Zolgensma you are basically going from a deadly disease to a more chronic disease state," he said, adding that many patients would not survive without this treatment.

Novartis said Zolgensma has been "transformative," but each child's journey is unique and outcomes can vary. It says there is no evidence additional therapies can help. Notably, it has dropped the term "potentially curative," common in its analyst calls in 2018 and 2019, from descriptions of Zolgensma, instead calling it a "one-time treatment."

"People still have misconceptions about Zolgensma," Kutschke told Reuters. "It is a treatment, not a cure."

Novartis CEO Vasant Narasimhan spelled out the pricing argument in a 2018 call: "Payers appreciate that when you deliver a potentially curative therapy that takes cost out of their healthcare system and enables people to live, hopefully, a more normal life, they are willing to pay and they see the value," he said.

The multimillion-dollar pricing strategy is not unique. More recently, the first hemophilia gene therapy approved by the U.S. Food and Drug Administration was priced by CSL Behring at $3.5 million; 26 more gene therapies are in late-stage development, according to IQVIA.

Drugmakers say the long-term benefits are worth it. CSL said it is confident that its treatment can generate cost savings and has offered partial refunds if patients need to resume injections of blood-clotting proteins in the first four years after the therapy.

Novartis offers payment by installments although it said no one in the U.S. has taken them. It also has plans linking payment to how well a patient responds, but said U.S. refunds under them have been rare.

The economics of treating the relatively small number of patients make a high price tag imperative for manufacturers.

"One of the arguments is you're saving all of these millions of dollars down the line," said Stacie Dusetzina, professor of health policy at Nashville's Vanderbilt University School of Medicine.

"But if you found out later on that oh, actually you have to get most of those other treatments, I think the question is, 'was that price really a fair price to begin with?'"

The Institute for Clinical and Economic Review (ICER), a drug pricing research group, has said Zolgensma's maximum price should be $900,000, less than half its current cost.

The two other available SMA treatments are also not cheap.

Biogen's Spinraza, injected into the spine, has a U.S. list price of $800,000 in the first year followed by maintenance doses at $400,000 a year. Roche's Evrysdi, an oral solution, costs $100,000 to $340,000 per year depending on how much the patient weighs.

Both Biogen and Roche said studies of their drugs as treatments for patients who do not respond sufficiently to Zolgensma have been encouraging so far. Biogen's Chief Medical Officer Maha Radhakrishnan said a second drug could still add value given the high cost of caring for severely disabled patients.

The fact that some children need treatment with other expensive drugs after Zolgensma shows that the gene therapy represents "poor value," said Steven Pearson, ICER president.

It's a headache for health insurers, who usually cover the full cost of Zolgensma upfront: It's not yet clear how to manage a payment plan over time, and that would require more real-world data on how patients respond, said Sree Chaguturu, chief medical officer at CVS Health, which owns health insurer Aetna.

"How long do you actually need to monitor and track those patients?" he said, noting that Americans commonly move between different insurance plans.

Many parents of children with SMA say Zolgensma is worth it.

Amanda Cook, a 32-year-old bookkeeper in Lebanon, Virginia, already knew when she gave birth to her son Weston in 2021 that he had SMA: It had led to the death of his older brother, Jackson, at 7 months.

Weston had a flaw in the gene that produces a protein critical to the function of cells that move muscles. The lack of that protein, known as SMN (survival motor neuron), results in SMA, which affects fewer than 300 babies a year in the United States.

But he had Zolgensma at 11 days old and "is ahead of everything, honestly," Cook said.

All therapies have been stopped and he is "doing amazing as a true 2-year-old."

The treatment works best when infused as soon as possible after birth (some patients can begin to suffer from degeneration before birth), and Ben Kutschke was a relative latecomer.

His parents saw him having problems when he was about a month old. Formula spilled from the corners of his mouth. He couldn't hold a pacifier; his doctor suggested they try different sizes but it became clear his condition was serious when Elizabeth heard liquid slipping down his windpipe.

Ben was found, like about 60% of cases, to have the most severe form of SMA, which often leads to paralysis and death before age two.

"The earlier and better preserved the nerves are, the more likely you are to have success," said Dr. Jerry Mendell, director of the gene therapy center at Nationwide Children's Hospital in Columbus, Ohio and principal investigator in the Zolgensma trials.

Zolgensma is becoming available to younger babies across the U.S.: Most states screen newborns for SMA, and health insurers typically cover Zolgensma for children with the most severe form.

But even after diagnosis, not all children are immediate candidates.

Zolgensma delivers the gene through a modified virus. Ben had already been exposed to the virus in nature and had developed antibodies, which would neutralize the replacement genes in his body.

Dr. Russell Butterfield, a pediatric neurologist at University of Utah Health and Intermountain Primary Children's Hospital, said it's "not all that rare" for babies to have antibodies to the vector. He put the incidence at 15% to 20%.

Babies can also be ineligible due to issues such as liver problems, which Zolgensma's label warns it can cause. Last year, two children in Russia and Kazakhstan died of acute liver failure several weeks after receiving it.

Ben was originally treated with Evrysdi, which aims to boost SMN levels directly. It took months for his antibody levels to fall to the point he could start Zolgensma.

"When they did say the antibodies were down, oh I cried on that phone call," Kutschke said.

When he finally received Zolgensma, Ben's hand movements became smoother and he was able to lift his head, "but then after that big rocket forward in progress, it seemed to just stall out," his mother said.

The body develops antibodies to the virus used, so each therapy using it can only be administered once. Doctors recommended a third treatment, Biogen's Spinraza.

The most recent Novartis data, updated in March, shows this was not unusual. Its study found that 24 of 81 children given Zolgensma as of May 2022 had been subsequently treated with other SMA drugs. Novartis declined to provide details on them.

"There are a whole lot of things that happen in real life circumstances" to affect the utility of treatment with Zolgensma, Mendell said. For less responsive patients, Mendell said it is reasonable to use other treatments.

But it can be difficult to get insurance coverage after Zolgensma. Some health plans explicitly exclude add-on therapies if a child received it, according to policies posted on their websites.

Kutschke said her insurance company UMR, part of UnitedHealth Group, balked at reimbursing another drug. A spokesperson for UnitedHealth declined to comment.

Kutschke said UMR reasoned that Ben "should be fine" after getting Zolgensma. "Which is really heartbreaking, because we thought he would be too."

It took nearly six months of paperwork and appeals supported by Ben's medical team before payment for Spinraza was approved.

Since then, Ben, who now uses a wheelchair at age 2, has begun talking a lot more. He was suddenly able to move his lower body, his mother said.

Additional reporting by Eric Cox in Chicago; Edited by Caroline Humer and Sara Ledwith


Automated Data Mapping Tools: 5 Red Flags to Avoid – JD Supra

Data subject access requests (DSARs), records of processing activities (RoPAs), vendor risk management, a dozen other data privacy compliance requirements: all of them depend upon or are significantly facilitated by a map of the personal information your organization processes.

But there's no hard and fast requirement for a data map in the GDPR, CPRA, or any other data privacy regulation. As a result, many privacy professionals don't think to investigate data mapping until they start diving into the day-to-day work of their privacy program. After weeks of interviews, dozens of emails, and a labyrinth of spreadsheets, they realize that they need an automated tool.

But not every tool is created equal. Some are inefficient; some substitute the work of manual data mapping with more work of a different kind; some just create yet more work for a privacy professional.

To help privacy professionals spot tools that are more trouble than they're worth, we've identified five red flags to watch out for in an automated data mapping tool.

Data mapping isnt unique to data privacy. Knowing where organizational data lives and what types of data are available is important for a wide variety of projects. That might include:

There's a wide variety of tasks that a data scientist can accomplish, but usually, the business wants them to conduct analyses that translate directly to dollars and cents. When privacy professionals need to rely on data scientists to map the organization's data for compliance purposes, they'll often find that compliance tasks are de-prioritized in favor of revenue generation.

Making a persuasive business case for your privacy program can mitigate this to a degree, but the reality is that data scientists are always going to be an in-demand resource at any organization. If your automated data mapping tool is owned and operated by the data science function, your privacy program will always be steps behind, and your organizational compliance posture will never be where you want it to be.

Software can do a lot of things, but magic isn't one of them. Invariably, an automated data mapping tool will run into edge cases, exceptions, and instances where manual effort is required. Consider how you'll map data from:

In 1955, psychologists Joseph Luft and Harrington Ingham coined the term "unknown unknowns"; that is to say, issues that you aren't aware of and which you lack insight into. Unknown unknowns always appear, and the hallmark of a good tool is being prepared to handle them.

For the unknown unknown stores of personal data at your organization, it's essential that your tool provides a way to facilitate discovery and streamline manual mapping efforts.

When automated data mapping tools make no mention of how they facilitate necessary manual work, they also tend to have a very narrow definition of automation and a very narrow scope. For example, an allegedly automated data mapping solution might automate just the discovery of personal data stores and not the metadata labeling and tagging that makes downstream compliance activities possible.
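To make the discovery-versus-tagging distinction concrete: discovery alone might surface a table's column names, while metadata tagging maps those columns to personal-data categories that downstream activities (DSARs, RoPAs) can actually query. Here is a minimal sketch of that tagging step; the category names and regex patterns are invented for illustration, not any vendor's API:

```python
import re

# Illustrative patterns mapping column names to personal-data categories.
# Real tools also inspect field values; these name-based rules are assumptions.
CATEGORY_PATTERNS = {
    "contact":    re.compile(r"email|phone|address", re.I),
    "identifier": re.compile(r"ssn|passport|user_?id", re.I),
    "financial":  re.compile(r"card|iban|account_?number", re.I),
}

def tag_columns(columns):
    """Return {column: category} for columns that look like personal data."""
    tags = {}
    for col in columns:
        for category, pattern in CATEGORY_PATTERNS.items():
            if pattern.search(col):
                tags[col] = category
                break
    return tags

# Columns a discovery scan might have surfaced from one data store.
discovered = ["user_id", "email", "signup_date", "card_number"]
print(tag_columns(discovered))
# Columns with no matching pattern (signup_date here) are left untagged.
```

A tool that stops at discovery hands you the `discovered` list and nothing more; the tagging output above is what makes the data map usable for compliance work.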

Whether you use an in-house automated data mapping tool or a third-party tool, a common issue that privacy professionals run into is being inundated by data stores that need to be investigated. Because data privacy compliance is an ongoing process, new data stores will be added to your data map all the time. Not all of these data stores pose the same level of risk. Some might not be involved in downstream data transfers, for example; they might not store sensitive data; or they might not store large volumes of data.

Some automated data mapping tools present these data stores as equally important. That means you'll have to spend time manually investigating low-risk data stores while stores that actually pose a high risk remain unmitigated.

But in reality, it isn't too much to ask for an automated data mapping tool to estimate the level of risk posed by one data store or another. It's possible to assess the number of exports to vendors, the number of connected systems, the number and types of data fields stored, the number of identities handled, and so on to estimate high-risk versus low-risk data stores.
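A minimal sketch of such a heuristic follows; the field names, weights, and thresholds are invented for the example and do not reflect any real product's scoring model:

```python
# Hypothetical heuristic for triaging discovered data stores by privacy risk.
# All field names and weights below are illustrative assumptions.

def risk_score(store: dict) -> int:
    """Combine simple signals into a coarse risk score for triage."""
    score = 0
    score += 3 * store.get("vendor_exports", 0)     # downstream transfers to vendors
    score += 2 * store.get("connected_systems", 0)  # integrations that spread data
    score += 5 if store.get("has_sensitive_fields") else 0
    if store.get("identity_count", 0) > 10_000:     # large volumes of personal data
        score += 4
    return score

stores = [
    {"name": "marketing_crm", "vendor_exports": 4, "connected_systems": 6,
     "has_sensitive_fields": True, "identity_count": 250_000},
    {"name": "office_inventory", "vendor_exports": 0, "connected_systems": 1,
     "has_sensitive_fields": False, "identity_count": 0},
]

# Investigate the highest-risk stores first.
for store in sorted(stores, key=risk_score, reverse=True):
    print(store["name"], risk_score(store))
```

Even a crude ranking like this lets a privacy team spend its manual-investigation time where exposure is greatest, rather than working through an undifferentiated queue.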

Since your organization's data landscape is perpetually changing, you'll need to use your automated data mapping tool to scan for data stores on a regular basis. When you do, you won't want to have to wade through a backlog of data stores you've already investigated and evaluated as being irrelevant.

Not everything that's capable of holding personal information will actually do so. Or sometimes you'll find data stores that require no further action. The right tool will provide quality-of-life capabilities that allow you to flag certain data stores as irrelevant, so you don't waste unnecessary time re-reviewing something that doesn't affect compliance.

Non-privacy-focused data mapping tools are often guilty of this, but even some tools meant strictly for privacy professionals suffer the same flaw: They don't make it easy to actually do anything with your data map.

There isn't a law that specifically says you need to have a data map for your organization. However, a myriad of regulatory requirements depend upon or are made significantly less tedious with a data map, such as:

That's why the best data mapping tools for privacy professionals are integrated into an overall compliance platform.

Take Osano for example. Privacy professionals who use Osano as their automated data mapping tool can easily use discovered data for DSARs, to populate their RoPAs, and to quickly filter and search through data stores and associated metadata to identify redundancies, unneeded data, and data stores that are potentially responsive to a DPIA.

In fact, Osano passes all of the tests we described in this article. It:


The first adult learners of the Saturdays in the D summer program … – University of Michigan

By: Greta Guest | Source: Michigan News | Photos By: Scott Soderberg, Michigan Photography

LaCresha Goss celebrates earning her certificate in writing and editing during the Saturdays in the D program this summer at the U-M Detroit Center.

Switching gears after losing his job with a local auto supplier, Raphael French thought a Python class offered through Saturdays in the D could be a great start.

"The program spoke to me because I wanted to get into computers. I figured I'd have to learn the language," French said. "I wanted to get back to something someone couldn't take away from me, and that's education."

He and other students who participated in the six-week pilot Saturdays in the D Summer Camp and Adult Skills Enrichment Experience graduated Saturday with a course certificate. Of the 109 who registered, 38 made it to the finish line.

The city of Detroit, Detroit Public Schools Community District, University of Michigan Center for Academic Innovation, Ally Financial and the Song Foundation partnered to offer the program this summer after a three-year COVID delay.

Melia Howard, city of Detroit director of community relations, at the Saturdays in the D graduation.

"You all are what we are calling our pioneer learners," said Melia Howard, director of community relations for the city of Detroit, noting that the program was first focused on children's extracurricular activities and tutoring. "We transitioned this year to include adults so families can have a shared experience in learning coding and other programs."

While Detroit youth headed to the University of Michigan campus in Ann Arbor for a variety of STEM camps, for the first time adult residents had the opportunity to develop new skills with free professional development classes offered at the U-M Detroit Center on Woodward Avenue.

The classes offered adults a choice of four topics: Getting Started with Python; Writing and Editing; Science of Success; and Emotional Intelligence.

Enhancing skills for new directions

LaCresha Goss started out with the Python program with her daughter Selena, a 2013 U-M graduate, but switched to the writing course. Both women want to enhance their careers.

LaCresha, who works at a Veterans Administration call center, said she wants to be more of an advocate for the veterans. "I want to use my words better. That's what I hope this program will do for me."

LaCresha Goss and her daughter Selena Goss at the Saturdays in the D graduation ceremony at the U-M Detroit Center.

And Selena wants to go from her work in food access to policy. "I want to be in rooms where policies are made for people who look like me."

The adult learners also were able to access:

- Workshops on budgeting, credit and home ownership; a panel about trends in tech jobs called TechnicALLY Speaking, plus peer mentors from Ally Financial.
- Admissions counseling from the U-M Admissions Office for learners interested in furthering their education.
- Detroit at Work program staff to learn about top fields in need of workers.
- Free laptop loans.

"Because it's all about helping communities build new skills and explore career paths, it's no surprise Saturdays in the D has been so popular over the years," said Ali Summerville, business administration executive and head of Corporate Citizenship at Ally. "Supporting our hometown communities is a top priority, and we're confident the return of Saturdays in the D will continue to inspire a brighter future as a catalyst for economic mobility."

Rapheal French graduates from the Saturdays in the D program during a ceremony at the U-M Detroit Center.

French met with an admissions officer to talk about options. He's starting at Wayne County Community College in the fall to study computer information systems. "But the ultimate goal is to transfer to U-M," he said.

"When I go back into the workforce, I want to have an education," he said. "The prestige behind U-M made me want to join the Saturdays in the D program. To me, it's about the access. Without it, I wouldn't have met all these people. Going to Michigan is an ultimate dream of mine."

French said he was very inspired by his program instructor, Tamara Qawasmeh. She was a data curator at the U-M Institute for Social Research when she learned Python. Then she went for her master's degree at U-M in applied data science.

Rapheal French receives his certificate for the Python course from instructor Tamara Qawasmeh.

"Honestly, Python changed my life. My coding brain got turned on," Qawasmeh said. "But you can't learn coding without help. Having a support system is the only way I was able to learn Python."

Creating the support systems

The online courses were originally created by U-M faculty in collaboration with the Center for Academic Innovation to make the content work for a global online audience. Those online courses were then selected to be featured in the Saturdays in the D program.

The next step was to work with the U-M Center for Academic Innovation content strategy team and the city of Detroit team to perfect the offerings for the in-person approach to learning. Course facilitators were able to use an adult learning framework and lesson plan templates designed by the Center for Academic Innovation as the basis for their weekly sessions, but were given the latitude to personalize them for each group.

"We are proud to offer Michigan Online's workforce development and skill-building opportunities to learners in the city of Detroit. Through the enthusiastic participation of learners, we have built on our understanding of what learning opportunities are most relevant to these adult learners, and how to best support them by blending facilitated instruction in online and in-person environments. We are using this information to inform future programs," said Sarah Dysart, senior director of online learning at the Center for Academic Innovation.

The Ally Financial team includes Bhusan Satapathy, director of Data & Analytics; Dan Lemont, executive director of Tech Strategy; Brenda Rios, public relations; and Bill Barbour, manager of Fraud Technology.

While these courses are offered through the universitys online education platform Michigan Online, they were redesigned as a hybrid (online + in-person) experience and focused on how people from various educational and socio-economic backgrounds learn best.

"We knew the program participants were adult learners, so we wanted to learn more about them. That's why we collected additional data on their backgrounds, aspirations, and needs to inform design decisions and how the course facilitators could best employ the principles of adult and inclusive learning," said Ahmed Lachheb, senior learning experience designer at the Center for Academic Innovation.

Several faculty members who created the original online courses, including Charles Severance (Python), Paula Caproni (Science of Success), Jeffrey Sanchez-Burks and Cheri Alexander (Emotional Intelligence) and Patrick Barry (Writing and Editing), have also attended some of the in-person sessions.

Lauren Atkins Budde, director of academic content strategy for the U-M Center for Academic Innovation.

"At the end of the day, this program is about communities that value education. It took a lot of support and passion for lifelong learning from the city and several units and faculty at U-M to pull this together," said Lauren Atkins Budde, director of academic content strategy at the U-M Center for Academic Innovation. "And through the past two months we've seen a group of curious strangers become a dedicated, inspiring community of learners invested in themselves and each other. It's pretty magical."


How AI is transforming scientific research, with Rebecca Willett (Ep … – UChicago News

One of the first areas to be affected by AI has been science. Researchers have already started to integrate these new technologies into their work, but what does it mean to let these systems into our labs? Can they be trusted? And if so, how radically can they push science forward?

Prof. Rebecca Willett is the faculty director of AI at the University of Chicago Data Science Institute. She is at the forefront of thinking through how AI will change science, and in this episode, she examines the opportunities, and the dangers, of this rapidly emerging technology.

Subscribe to Big Brains on Apple Podcasts, Stitcher and Spotify.

(Episode published August 10, 2023)

Subscribe to the Big Brains newsletter.

Please rate and review the Big Brains podcast.

Paul Rand: Big Brains is supported by the University of Chicago Graham School. Are you a lifelong learner with an insatiable curiosity? Join us at Graham and access more than 50 open enrollment courses every quarter in literature, history, religion, science, and more. We open the doors of UChicago to learners everywhere. Expand your mind and advance your leadership. Online and in-person offerings are available. Learn more at graham.uchicago.edu/bigbrains.

Hey ChatGPT.

ChatGPT: Hello Paul. How can I assist you today?

Paul Rand: Tell me everything you know about Professor Rebecca Willett.

ChatGPT: As of my last update in September 2021, Rebecca Willett was a professor of statistics and computer science at the University of Chicago.

Paul Rand: At this point, many of you have probably had an experience just like this.

ChatGPT: Professor Willett's research has included significant contributions to the theory and methods of machine learning.

Paul Rand: Instead of simply going online when you need to research something, for instance a guest coming on your podcast, you go to ChatGPT.

ChatGPT: She has also worked on applications of these techniques to a variety of fields including healthcare, astronomy and social science.

Paul Rand: ChatGPT isn't the first AI in our world.

Tape: ChatGPT, the revolutionary new language model developed by OpenAI with the ability to generate human-like text and hold natural conversations.

Paul Rand: But it did prove that these machine learning systems are capable of things that many people thought were impossible.

Tape: It impersonates what it's read on the internet. Type in a request and it can write legal documents, software, even school essays. People are predicting it will wipe out whole industries.

Tape: Artificial intelligence tools like ChatGPT have really, they've seemingly changed the world overnight, and many are racing right now to find ways to harness what seems like incredible power.

Paul Rand: ChatGPT, are you going to change the world?

ChatGPT: As an AI language model, I don't have personal agency or the ability to directly change the world. It is ultimately up to humans to utilize technology and knowledge to shape the world for the better.

Paul Rand: One of the ways we shape the world is with what we focus on in this podcast: research. And AIs could give us the power to completely revolutionize the way that we do science. But as our ChatGPT friend said, it's up to us to use them to make it better, even though they have the power to make it worse.

Rebecca Willett: I think there's an enormous amount of hope that it's really just going to fundamentally change the way we think of the scientific method.

Paul Rand: That of course, is University of Chicago professor Rebecca Willett.

Rebecca Willett: Just as when computers became widely accessible, it changed the way that we approached science and what we could do with scientific data. I think AI is similarly going to have just an enormous impact on this entire process.

Paul Rand: There isn't a single thing that the professors and researchers on our show do that couldn't or won't be changed by AI.

Rebecca Willett: The way that we formulate hypotheses, design experiments, collect data, and then analyze that data and then kind of close the loop by using that to help us develop new hypotheses.

Paul Rand: Willett is the faculty director of AI at the Data Science Institute, where she is at the forefront of thinking through how AI will change science, both the opportunities and the dangers.

Rebecca Willett: And so the hope is that with these more complicated non-linear models that are being used in AI right now, things like neural networks, we can take large amounts of data and figure out patterns that as humans we didn't recognize or even expect.

Paul Rand: There is no one better to help us think through how these systems will fundamentally change how research happens.

Rebecca Willett: And we've seen this sort of step up in terms of the capability of these systems, and I think this has really just sparked the imagination of a lot of people. With these kinds of capabilities, what could we do now that we hadn't even imagined trying to do a few years ago?

Paul Rand: Welcome to Big Brains where we translate the biggest ideas and complex discoveries into digestible brain food. Big Brains, little bites from the University of Chicago Podcast Network. I'm your host, Paul Rand. On today's episode, how AI will change science and research. ChatGPT, if you were going to interview Rebecca Willett for a podcast about how AI will change science and research, what would you ask?

ChatGPT: That sounds like a fascinating podcast episode. Here are some potential questions you might consider. One, what are some of the most exciting opportunities that AI offers for advancing scientific research?

Rebecca Willett: So theres a lot of work in trying to use AI to accelerate the development of pharmaceuticals and drugs. I am especially excited about this because as we know, there are diseases that predominantly impact underserved communities, that are often under prioritized for this kind of research or under-resourced. And so if we can use AI to accelerate this process or reduce the cost of this discovery process, hopefully well see a real leap forward in the treatment of disease worldwide.

Another thing that I think we will see people doing is using AI to design new materials, especially materials that are more sustainable and perhaps more biodegradable or better for the environment. Using AI to design things like microbial communities that can help break down plastics or remove nitrates from water. It could be really useful for developing sustainable climate policies. So not only do we want to predict what the climate might look like under different scenarios, but wed like to have a better sense of uncertainties associated with those predictions and to design better economic policies, better tax strategies, better incentive programs.

If we only have forecasting systems that can run on supercomputers, then our ability to do that is somewhat limited. But with AI systems, I think we'll be able to do this much more effectively and quickly and reliably. And so these are just a few of the things off the top of my head, and this is just in the basic sciences. If we expand our scope to also think about health sciences or healthcare, there's just a lot of potential there as well, in terms of improving our ability to analyze lab tests or medical imaging data, our ability to understand a patient's entire case history or even better evaluate how they will respond to different kinds of treatments.

Paul Rand: These are just a few of the incredible ways AI could change science. But what do they look like in practice? Some basic steps of the scientific process (hypothesis generation, experiment design, data collection) are going to be revolutionized by AI. But we'll start with Willett's specialty, data analysis.

Rebecca Willett: One kind of first pass thing that's going to happen is that people are going to start using AI to analyze data being collected within scientific contexts. So many of us have read, for instance, about the James Webb Space Telescope.

Paul Rand: Right, right.

Tape: NASA's James Webb Space Telescope, the largest and most powerful of its kind, launched last Christmas and released its first image in July, the deepest, sharpest view we've ever seen of the universe. Since then it has captured far away star nurseries, cosmic cliffs and galactic clusters. Anyone can see that the images carry breathtaking beauty and astonishing scale, but what do they actually tell us about our cosmos?

Rebecca Willett: This instrument and many instruments like it are collecting just huge volumes of data that can't possibly be looked at by a human, not all of it. And so the hope is that by using these AI tools, we're going to see patterns that might escape a human or be able to see phenomena or anomalies that kind of countermand our current understanding of the science and lead us to asking new questions that we hadn't thought about before or questioning where our existing models are starting to break down. And so using AI to just analyze the raw data is the bare minimum of what we're going to be seeing a lot of in the future.

Paul Rand: This raw power to analyze massive sets of data could solve a problem that's plagued science forever. Many times, whatever was being studied led to a negative result. For example, we thought these two compounds when mixed would create a new malaria drug, but they didn't. And because it's not a positive result, it would get discarded.

Rebecca Willett: Yeah, I think this is a common concern in the sciences. I think people refer to it as the file drawer effect. Right? You get a negative result, you put it in your filing cabinet and forget about it.

Paul Rand: Yes, yes.

Rebecca Willett: That's just sort of the nature of the field. If I have a positive result, then it'll generally get more attention. And publishers are most interested in publishing those kinds of results.

Paul Rand: But that doesn't mean the result is useless. We still learn something. As the famous saying goes, we're just discovering a thousand ways not to make a light bulb.

Rebecca Willett: And I think perhaps AI will change some of these trends. And I know that there are ongoing efforts with using things like large language models to analyze the scientific literature and to cross-reference different papers that are being published in different journals by different groups around the world, in order to kind of extract higher level themes or patterns.

Paul Rand: Fascinating.

Rebecca Willett: And I think that's a setting where these negative results could be enormously impactful and help with the development of those models. And so it's possible that this kind of file drawer effect that we've had in the sciences for decades, we might just change the way we think about it with the development of these AI tools for trying to extract information from the literature. Maybe we'll see an added value to that that was a little harder to harness in the past.

Paul Rand: But there is a concern when it comes to using AI to analyze data. The founders of ChatGPT have already admitted they're not quite sure how their AI comes to any individual results. In the context of an experiment, what if an AI analyzes the data incorrectly? If half the time AI models make predictions that contradict established scientific knowledge but turn out to be correct, how will we know when it's right or when it's wrong, especially if we don't understand how it works?

Rebecca Willett: Real science is about more than detecting patterns. It's about really understanding what the underlying mechanisms are. It's just much more than making raw predictions. And it's not clear to what extent AI tools are really reflecting understanding, as opposed to having recognized different patterns. So let's just take ChatGPT as an example, because I think a lot of people listening have maybe played around with it a little bit. And when you do, it can almost feel like you're interacting with a human. It produces very realistic text. But under the hood, what it's doing is on the most basic level, very simple. It's saying, "I'm going to build a model of a probability distribution that's going to say, what is the most likely next word that you're going to say, given the last 400 words that you say?"

Paul Rand: Yep.

Rebecca Willett: And then when I want to generate some text, I just start drawing words from this probability distribution. And so of course, building this model is not trivial, but at the end of the day, all it's doing is generating somewhat random sequences of words from this distribution. That's a far cry from understanding what the language is telling us or actually being sentient, for instance.
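The mechanism Willett describes, drawing each next word from a learned probability distribution, can be illustrated with a toy sketch. This is an editor's illustration, not ChatGPT's actual architecture: real models condition on long contexts with billions of parameters, while this made-up bigram table conditions on a single preceding word, but the sampling step is the same idea.

```python
import random

# Toy "language model": maps one context word to possible next words
# with probabilities. (Entirely invented for illustration.)
model = {
    "the": [("cat", 0.5), ("dog", 0.3), ("sky", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 1.0)],
    "sat": [("quietly", 1.0)],
    "ran": [("away", 1.0)],
}

def sample_next(context, rng):
    """Draw the next word from the model's distribution for `context`."""
    words, probs = zip(*model[context])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling the next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        word = out[-1]
        if word not in model:  # dead end: no known continuations
            break
        out.append(sample_next(word, rng))
    return " ".join(out)

print(generate("the", 3))
```

Nothing here "knows" what a cat is; the program only follows the statistics in its table, which is Willett's point at a vastly smaller scale.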

And I think it's the same with science. Right? I think this could be an enormously useful tool, but that's a far cry from it really understanding science. And I think humans are just going to be an essential part of this process. If you're trying to use something like ChatGPT for science and having it write a scientific paper for you, you're going to be in trouble. It's definitely going to be making stuff up. Like I said, it's drawing words at random from a very sophisticated probability distribution, but it doesn't actually know anything. And the more text you have it generate, the more likely it is that it's going to be inconsistent with itself. I have two feelings about this. On one hand, people already make mistakes in science, innocent mistakes. This is why we form a scientific community. This is why all science isn't done by a handful of Nobel Prize winners.

Paul Rand: Right.

Rebecca Willett: We have thousands of people all trying to examine each other's work, find where the potential holes might be, identify real discoveries that change the way we think. And that community is going to play a critical role in analyzing ideas coming out of an AI model, evaluating whether they make any sense at all, whether it's a fresh take that nobody thought of, or whether it's just complete BS. Ultimately, just that human in the loop is essential, people with rigorous scientific training who can evaluate these systems. Having peer review determine what's ready for publication versus what's relatively more or less made up.

Paul Rand: One of the other areas, at least as I've read about AI and the sciences, one of the ones that gets talked about is this idea of hypothesis generation. And I wonder if you can tell us what that is and why that might be particularly compelling.

Rebecca Willett: We're starting to also see people thinking about using AI for things like even deciding what data to collect in the first place or what experiments to run. So imagine, for instance, that I wanted to design a microbial community that could help somebody with a broken gut microbiome, and I want to help fix that. So we could just sort of randomly put a bunch of probiotics in their system and hope for the best. But a lot of the current approaches can be pretty short-lived if they work at all. And so what we'd like to know is what determines what's going to make a good microbial community versus a bad one. And there's maybe trillions of possibilities. I can't just build them all and test them all. It would take too many resources.

And so what I'd like to do is to integrate AI into this process, design a small number of communities, run some experiments on them, take that data and narrow down somehow the space of the hypotheses I have about what makes a good microbial community versus a bad one, and use that model, and any kind of uncertainties associated with that model, to help design my next set of experiments or which microbial communities I want to test next. And the hope is that by using AI in this process, we'll be able to use our money and experimental resources much more effectively than if we didn't have AI helping to suggest the next new experiments to run.
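The loop Willett describes (measure a few candidates, update what you know, then pick the next experiment) can be sketched in miniature. Everything in this sketch is invented for illustration: the candidate "communities" are small integer vectors, run_experiment stands in for a real, expensive lab measurement, and distance from already-tested points is a crude proxy for model uncertainty; real systems would use a learned surrogate model instead.

```python
import itertools

def run_experiment(x):
    # Hypothetical ground truth: quality is best when the three
    # (made-up) features are balanced. Unknown to the search loop.
    return -abs(x[0] - x[1]) - abs(x[1] - x[2])

# 64 candidate "communities", far too many to test exhaustively
# in a real setting with trillions of possibilities.
candidates = list(itertools.product(range(4), repeat=3))

def uncertainty(x, tested):
    # Proxy for uncertainty: how far x is from everything measured so far.
    return min(sum(abs(a - b) for a, b in zip(x, t)) for t in tested)

# Start from one measured community, then spend a small budget
# of experiments on the candidates we know least about.
tested = {candidates[0]: run_experiment(candidates[0])}
budget = 8
for _ in range(budget):
    x = max((c for c in candidates if c not in tested),
            key=lambda c: uncertainty(c, tested))
    tested[x] = run_experiment(x)  # one (simulated) experiment

best = max(tested, key=tested.get)
print(best, tested[best])
```

The point is the shape of the loop, not the specific scoring rule: each round, the data collected so far guides which experiment to run next, so the budget is spent where it is most informative.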

Paul Rand: But if we become too reliant, is there a concern about a future where our research agendas are becoming driven by AI? Could AI actually lead to a decrease in creative ideas from the scientific community, through path dependency based on the inputs we put into the system?

Rebecca Willett: It depends on the context. So if we go back to my earlier example where I want to find the best microbial community out of trillions of possibilities, and I have a very clear notion of what makes it the best, I can measure that, I have a lot to gain here. I can reduce the amount of resources I have to spend on collecting data, but that approach is not appropriate if I'm really sort of more in an exploratory mode. So if I don't know exactly what I'm looking for, then using one of these methods might mean that I just never do an experiment on something that's really interesting, but just not exactly aligned with my overall objective. And so there's this kind of inherent trade-off between exploration and exploitation.

Paul Rand: What do you mean by that?

Rebecca Willett: Part of good science is just exploring the unknown. Part of what we try to do to make products and services available to people is exploitation, trying to exploit what we already know to design better systems or to guide the way we design experiments.

Paul Rand: Okay.

Rebecca Willett: And so depending on the context, yeah, I think using AI for experimental design would not be the right choice. And relying overly on an AI system to make predictions without kind of a thoughtful human behind the scenes is possibly a fool's errand.

Paul Rand: And of course, as our AI co-host mentioned at the beginning, who that human is behind the scenes matters a great deal. How AI could open the ability to do science up to more people and why that may not be a good thing, after the break.

If you're getting a lot out of the important research shared on Big Brains, there's another University of Chicago podcast network show you should check out. It's called Entitled, and it's about human rights. Co-hosted by lawyers and new Chicago Law School professors, Claudia Flores and Tom Ginsburg, Entitled explores the stories around why rights matter and what's the matter with rights.

Big Brains is supported by the University of Chicago Graham School. Are you a lifelong learner with an insatiable curiosity? Join us at Graham and access more than 50 open enrollment courses every quarter in literature, history, religion, science, and more. We open the doors of UChicago to learners everywhere. Expand your mind and advance your leadership; online and in-person offerings are available. Learn more at graham.uchicago.edu/bigbrains.

There is this concern that AI will eliminate jobs, but could it be the other way around? There have always been strong barriers to doing science, like needing a deep knowledge of fields, methods and statistics, and let's be honest, a high level of intelligence. But could these tools open the gates wider to people who may know how to ask the right questions and explore ideas, but don't have the other skills or time or money to acquire those skills?

Rebecca Willett: I'm not sure about the answer.

I think there's inherent value to rigorous scientific training. So as we said before, what ChatGPT is doing is generating plausible strings of text that might in no way be true. And I think it's important for somebody to be able to recognize when this string of words is at all consistent with our understanding of science or where it might be going awry. And with no background, I think you're just unequipped to do that. On the other hand, creativity is extremely important in science. We normally associate it more with the arts and humanities, but really thinking of creative explanations for how the world works and why is essential. And so to some extent, if these tools allow people to generate more creative ideas, if we can develop AI assistants for scientists that allow them to really harness their creativity, I think it could be exciting.

And there's a lot of people who are really thinking about leveraging or developing creative AI assistants. Another way in which AI might help democratize science is in helping us to process our training data. For instance, one big citizen science initiative that's been running for many years now is called Galaxy Zoo, where humans do a little bit of training and then they're presented with images of galaxies and they're asked to answer some questions about those galaxies. And what this is doing is basically producing labels for the training data that might be used to analyze just millions of images of galaxies. And I think that having high-quality training data is essential to making a lot of these AI systems work well. And so these kinds of citizen science projects provide a really cool opportunity, I think, for science enthusiasts to play an important role.

I think there are also a broader category of risks that we need to think about. For instance, if we place too much trust in these AI systems, we might think, well, we need to train fewer scientists in the United States, because the AI is going to do all this work for us. And I think if we overestimate the capability of those systems, that's a real risk and a real missed opportunity. We still need those human thinkers.

Paul Rand: But what if those human thinkers are bad actors? We know that news organizations and people on social media will often cite case studies they've seen online, but have done very little research into. In a future where AI can generate a thousand fake studies that look legitimate in a matter of minutes, how should the scientific community be thinking about maintaining integrity?

So if you were going to build safeguards in to help advise on protecting against some of these downsides, what kind of safeguards would come top of mind to you?

Rebecca Willett: Yeah, it's a good question. So first I'll just tell you some of the things that people might've read about already in the news.

Paul Rand: Okay.

Rebecca Willett: So they'll say something like, "Well, I want to know what data that system was trained on." And on one hand that sounds good. I want to know if your face recognition was only trained on white men and will probably fail on Black women. That seems like a useful thing for me to know. On the other hand, when we look at something like ChatGPT that was trained on trillions of words, where no human could possibly read all of them, it's kind of vacuous. Right? Telling me that doesn't tell me anything informative about what's going on under the hood for that ChatGPT system.

Another thing people have called for is building transparent or explainable AI systems, where the AI system can explain the decision it's making to a layperson. And again, this sounds good in certain contexts. If we're using AI to decide who's going to be let out on bail before defending their case in court, it sounds good for us to be able to explain what criteria the AI system is using. On the other hand, there are other tasks that are very difficult to explain, especially to a layperson. Like, how is a CAT scan image constructed from the raw data off the scanner? So there are a variety of things like this that have been proposed, that in the right context are important and meaningful, and in general are really insufficient.

And I hate to say this because I don't have a better solution that I can propose. I think that these are actually open technical questions. How do we build a system that's going to allow us to somehow certify it, certify that it's not too biased against vulnerable groups, certify that it's protecting people's privacy in very general ways, certify that your autonomous vehicle is not going to kill a bicyclist? Besides just designing tests and trying things out, we don't really have a good handle on this. And it's an open question about whether we can actually build in hooks or inroads into these systems that will allow us to test and validate and certify these systems more effectively.

Another risk: science misinformation, if you will. So you could imagine someone maliciously trying to generate a bunch of fake scientific articles towards some end, and making it very hard for earnest scientists to figure out: Well, what is actually known? What experiments were actually run and what's been faked? And that's going to just put a drain on the resources of this whole scientific community.

And so yeah, I think there are definitely several different risks. Some of them, just in terms of what we need to do as academics to make sure that people are using AI in a rigorous and ethical way, and others about outside actors potentially doing malicious things that would have a terrible effect on us all. Right now, human oversight is just essential. Here at the University of Chicago, like most US universities, we have IRBs, institutional review boards. And before I run certain experiments, I need their approval to make sure that there's no major ethical lapse. Now, for the most part, those boards are for when I'm running experiments on humans or animals. A lot of the research that I do on AI is not covered by those sorts of human oversight boards. So yeah, there certainly are risks.

Paul Rand: Here at the University of Chicago, I'm seeing your name popping up with great frequency on all sorts of different topics involving AI and the sciences.

Rebecca Willett: One of the great things about UChicago is that there's a huge number of interactions across different departments. And so physicists and chemists, astronomers, ecologists, computer scientists, and statisticians, are constantly getting together and talking with each other, and partnering to help advance using AI in a rigorous way in the sciences. And I think this is especially exciting, because it's not like things are somehow pigeonholed, where one little group is thinking about AI and physics, and a totally separate group is thinking about AI and chemistry, with no meeting in between. We've really been focused on trying to think about core principles in AI that will influence many of the sciences. And we're already seeing connections across different disciplines.

Paul Rand: Can you give any examples of some of those?

Rebecca Willett: The Margot and Tom Pritzker Foundation recently supported a joint conference between the University of Chicago and Caltech, bringing in worldwide experts in AI and science across multiple different disciplines for a three-day conference. And this was really an experiment. Most of the conferences in this space are much more kind of narrowly focused on a particular scientific domain, but it turned out to be great. We had a UChicago researcher, Samantha Reisenfeld, talking about how she uses clustering to understand aspects of immune responses in tissues. The idea is I've got a lot of different data points. So for example, I've just got lots of different images of dogs, for instance. And these data points or these dog images, they don't have any labels. And what I want to do is I want to just group them into groups where somehow everything in the group is similar, and members of different groups are dissimilar.
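The grouping idea described here can be sketched with a minimal k-means clustering routine. This is an editor's illustration on made-up 2D points; real applications, whether tissue data or dog images, would cluster high-dimensional feature vectors, but the principle of pulling unlabeled points into mutually similar groups is the same.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a set of points."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[i].append(p)
        centers = [mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Two well-separated blobs stand in for unlabeled data points.
data = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
groups = kmeans(data, 2)
print([sorted(g) for g in groups])
```

No labels are ever supplied: the algorithm discovers the two groups purely from similarity, which is what makes the technique transferable between immunology and particle physics.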

Paul Rand: Fascinating.

Rebecca Willett: And so she was using these kinds of clustering ideas to analyze data from human tissues, and understanding people's immune responses to different pathogens. And there was a physicist from MIT who was listening to this talk. And he said, "This is amazing, because it turns out I'm studying particle physics, and I'm facing exactly the same challenge, but in a totally different context." And some of the specific approaches that Samantha was using turned out to be extremely relevant to the constraints associated with his physics problem.

Paul Rand: My goodness.

Rebecca Willett: And people were thrilled by this. They said, "Yeah, normally I just talk to the same group of people over and over, and see the same ideas in our small little insular community. And by having this conference across different boundaries, I saw a whole different set of methods I could use."

Paul Rand: As we step into our AI future, it can sometimes feel like we're in the famous dinner scene from Jurassic Park.

Tape: The world has just changed so radically, and we're all running to catch up.

Paul Rand: It's clear that AI could be a powerful tool that scientists could use to cure diseases, solve climate change, or even take us to outer space. But-

Speaker 9: Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.

Paul Rand: As Professor Willett explains, there are all sorts of ways these systems could go wrong, radically wrong, if they get too far ahead of human oversight, judgment and control. And even ChatGPT agrees.

ChatGPT: AI can be a powerful tool. It doesn't replace the need for human judgment. AI is best used in partnership with human researchers, rather than as a replacement for them.

Matt Hodapp: Big Brains is a production of the University of Chicago Podcast Network. If you like what you heard, please leave us a rating and review. The show is hosted by Paul M. Rand and produced by me, Matt Hodapp, and Lea Ceasrine. Thanks for listening.


How AI is transforming scientific research, with Rebecca Willett (Ep ... - UChicago News


How to Build an Efficient Data Team to Work with Public Web Data – ReadWrite

The topic of how to assemble an efficient data team is a highly debated and frequently discussed question among data experts. If you're planning to build a data-driven product or improve your existing business with the help of public web data, you will need data specialists.

This article will cover key principles I have observed throughout my experience working in the public web data industry that may help you build an efficient data team.

Although we have yet to find a universal recipe for working with public web data, the good news is that there are various ways to approach this subject and still get the desired results. Here we will explore the process of building a data team through the perspective of business leaders who are just getting started with public web data.

A data team is responsible for collecting, processing, and providing data to stakeholders in the format needed for business processes. This team can be incorporated into a different department, such as the marketing department, or be a separate entity in the company.

The term "data team" can describe a team of any size, from one or two specialists to an extensive, multilevel team managing and executing all aspects of data-related activities at the company.

There's a straightforward principle that I recommend businesses working with public web data follow: an efficient data team works in alignment with your business needs. It all starts with what product you will build and what data will be needed.

Simply put, every company planning to start working with web data needs specialists who can ingest and process large amounts of data and those who can transform data into information valuable for the business. Usually, the transformation stage is where the data starts to create value for its downstream users.

The first hire can be a data engineer with analytical skills or a data analyst with experience working with big data and light data engineering. When building something more complex, it's essential to understand that public web data is essentially used for answering business questions, and web data processing is all about iterations.

Further iterations may include aggregated data or enriching your data with data from additional sources. Then, you process it to get information, like specific insights. As a result, you get information that can be used in processes that follow, for example, supporting business decision-making, building a new platform, or providing insights to clients.

Looking from a product perspective, the answer to what data team you need is connected to the tools you will be using, which also depends on the volumes of data you will be using and how it will be transformed. From this perspective, I can split building a data team into three scenarios:

Ultimately, the size of your data team and what specialists you need depend on your product and vision for it. Our experience building Coresignal's data team taught us that the key principle is to match the team's capabilities with product needs, regardless of the seniority level of the specialists.

The short answer to this question is "It depends." When it comes to the classification of data roles, there are many ways to look at this question. New roles emerge, and the lines between existing ones may sometimes overlap.

Let's cover the most common roles in teams working with public web data. In my experience, the structure of data teams is tied to the process of working with web data, which consists of the following components:

In her article published in 2017, the well-known data scientist Monica Rogati introduced the concept of the hierarchy of data science needs in an organization. It shows that most data science-related needs in an organization are related to the parts of the process at the bottom of the pyramid: collecting, moving, storing, exploring, and transforming the data. These tasks also make up a solid data foundation in an organization. The top layers include analytics, machine learning (ML), and artificial intelligence (AI).

However, all these layers are important in an organization working with web data and require specialists with a specific skill set.

Data engineers are responsible for managing the development, implementation, and maintenance of the processes and tools used for raw data ingestion to produce information for downstream use, for example, analysis or machine learning (ML).

When hiring data engineers, overall experience working with web data and specialization in specific tools are usually at the top of the priority list. You need a data engineer in scenarios 2 and 3 mentioned above, and in scenario 1 if you decide to start with one specialist.

Data analysts primarily focus on existing data to evaluate how a business is performing and provide insights for improving it. You already need data analysts in scenarios 1 and 2 mentioned above.

The most common skills companies seek when hiring data analysts are SQL, Python, and other programming languages (depending on the tools used).

Data scientists are primarily responsible for advanced analytics focused on making predictions or generating forward-looking insights. Analytics are considered advanced if you use them to build data models, for example, for machine learning or natural language processing operations.

Let's say you want to work with data about companies by analyzing their public profiles. You want to identify the percentage of the business profiles in your database that are fake. Through multiple multi-layer iterations, you want to create a mathematical model that will allow you to identify the likelihood of a fake profile and categorize the profiles you're analyzing based on specific criteria. For such use cases, companies often rely on data scientists.
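As a rough sketch of the kind of model described, here is a toy logistic-regression classifier that scores the likelihood of a fake profile. Everything in it is hypothetical: the feature names, the tiny hand-made dataset, and the training setup are invented for illustration, and a real data science team would work with far richer features and established ML libraries.

```python
import math

# Invented features per profile: (followers_norm, posts_norm,
# profile_completeness), each scaled to [0, 1]. Label 1 = fake, 0 = genuine.
data = [
    ((0.0, 0.1, 0.2), 1), ((0.1, 0.0, 0.1), 1), ((0.2, 0.1, 0.0), 1),
    ((0.9, 0.8, 1.0), 0), ((0.8, 0.9, 0.9), 0), ((1.0, 0.7, 0.8), 0),
]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict(w, b, x):
    """Estimated probability that profile x is fake."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Fit weights by plain stochastic gradient descent on the logistic loss.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(2000):
    for x, y in data:
        err = predict(w, b, x) - y
        w = [wi - 0.5 * err * xi for wi, xi in zip(w, x)]
        b -= 0.5 * err

# Sparse, incomplete profiles score high; complete, active ones score low.
print(predict(w, b, (0.05, 0.05, 0.1)))
print(predict(w, b, (0.9, 0.9, 0.9)))
```

The model outputs a probability rather than a hard label, so the team can choose a threshold that matches the business's tolerance for false positives.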

Essential skills for a data scientist are mathematics and statistics, which are needed for building data models, and programming skills (Python, R). You will likely need data scientists in scenario 3 mentioned above.

This relatively new role is becoming increasingly popular, especially among companies working with public web data. As the title suggests, the role of an analytics engineer sits between that of an analyst, who focuses on analytics, and a data engineer, who focuses on infrastructure. Analytics engineers are responsible for preparing ready-to-use datasets for data analysis, which is usually performed by data analysts or data scientists, and for ensuring that the data is prepared for analysis in a timely manner.

SQL, Python, and experience with tools needed to extract, transform, and load data are among the essential skills required for analytics engineers. Having an analytics engineer would be useful in scenarios 2 and 3 mentioned above.

As there are many different approaches to the classification of data roles, there's also a variety of frameworks that can help you assemble and grow your data team. Let's simplify it for an easy start and say that there are different lenses through which a business can evaluate what team will be needed to get started with web data.

The web data I'm referring to in this article is big data. Large numbers of data records are usually delivered to you in large files and in raw format. It would be best to have data specialists with experience working with large data volumes and the tools used for processing them.

When it comes to tools, consider that the tools your organization will use for handling specific types of data will also shape which specialists you need. If you are not yet familiar with the required tools, consult an expert before hiring a data team, or hire professionals to help you select the right tools for your business needs.

You may also start building a data team by evaluating which stakeholders the data specialists will work closely with and deciding how this new team will fit into your vision of your organizational structure. For example, will the data team be a part of the engineering team? Will this team mainly focus on the product? Or will it be a separate entity in the organization?

Organizations that have a more advanced data maturity level and are building a product powered by data will look at this task through a more complex lens, which involves the company's future vision, aligning on the definition of data across the organization, deciding who will manage it and how, and how the overall data infrastructure will look as the business grows.

A data team is considered efficient as long as it meets the needs of your business, and in almost every case, the currency of data team efficiency is time and money.

So, you can rely on metrics like the amount of data processed during a specific time or the amount of money you spend. As long as you track these metrics at regular intervals, the next thing you want to watch is their dynamics. Simply put, if your team manages to process more data with the same amount of money, the team is becoming more efficient.

Another efficiency indicator, which combines the aforementioned, is how well your team writes code: you can have a lot of resources and perform iterations quickly, but errors equal more resources spent.

Besides the metrics that are easy to track, one of the most common problems that companies experience is trust in data. Trust in data is precisely what it sounds like. Although there is a way to track the time it takes to perform data-related tasks or see how much it costs, stakeholders may still question the reliability of these metrics and the data itself. This trust can be negatively impacted by negative experiences like previous incidents or simply the lack of communication and information from data owners.

Moreover, working with large volumes of data means spotting errors is a complex task. Still, the organization should be able to trust the quality of the data it uses and the insights it produces using this data.

It is helpful to perform statistical tests that allow the data team to evaluate quantitative data quality metrics, such as fill rates. By doing this, the organization also accumulates historical data that lets the data team spot issues or negative trends in time. Another essential principle to apply in your organization is listening to client feedback regarding the quality of your data.
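A minimal version of such a check computes fill rates per field and compares them against a historical baseline. The record fields, baseline values, and tolerance below are assumptions for illustration:

```python
# Sketch: computing column fill rates and flagging drops against a
# historical baseline. Fields and thresholds are illustrative.

def fill_rates(records, fields):
    """Share of records with a non-empty value, per field."""
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

def flag_regressions(current, baseline, tolerance=0.05):
    """Fields whose fill rate dropped more than `tolerance` below baseline."""
    return [f for f, rate in current.items() if baseline[f] - rate > tolerance]

batch = [
    {"name": "Acme", "industry": "Retail", "website": "acme.example"},
    {"name": "Globex", "industry": "", "website": "globex.example"},
    {"name": "Initech", "industry": None, "website": ""},
]
baseline = {"name": 1.0, "industry": 0.9, "website": 0.95}

current = fill_rates(batch, ["name", "industry", "website"])
print(flag_regressions(current, baseline))  # ['industry', 'website']
```

Running this on every incoming batch and storing the results is what builds the historical series the article mentions, so a slow downward trend is caught before stakeholders notice it.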

To sum up, it all comes down to having talented specialists in your data team who can work quickly, with precision, and build trust around the work they are doing.

Finally, here are some helpful questions to guide you as you assemble a data team:

I hope this article helped you gain a better understanding of different data roles that are common in organizations working with public web data, why they are essential, which metrics help companies measure the success of their data teams, and finally, how it is all connected to the way your organization thinks about the role of data.


Karolis Didziulis is the Product Director at Coresignal, an industry-leading provider of public web data. His professional expertise comes from over 10 years of experience in B2B business development and more than 6 years in the data industry. Now Karolis's primary focus is to lead Coresignal's efforts in enabling data-driven startups, enterprises, and investment firms to excel in their businesses by providing the largest scale and freshest public web data from the most challenging sources online.

View post:

How to Build an Efficient Data Team to Work with Public Web Data - ReadWrite

Read More..

Data Science And The UK Insurance Industry (Video) – Insurance … – Mondaq News Alerts


In this video, Tim Rourke explains the key challenges insurers in the UK face when embedding data science into their organisations.

The proliferation of data available to insurers presents new and exciting opportunities across the value chain.

As a data-rich industry, insurance is bursting with more opportunities than most sectors to use data and analytics to improve business performance and customer experience. Many insurers have taken steps in this direction - some very successfully, some less so.

There are a number of key challenges insurers need to be aware of.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


The rest is here:

Data Science And The UK Insurance Industry (Video) - Insurance ... - Mondaq News Alerts


MLOps & Quality Data: The Path to AI Transformation – Spiceworks News and Insights

Data-driven approaches and sound MLOps strategies enable organizations to unlock the full potential of AI and ML. Abhijit Bose of Capital One discusses how, even as AI and ML are being used to transform enterprises and improve customer experiences, incomplete machine learning operationalization keeps AI strategies from reaching their full potential.

It's an incredibly exciting time to be working in the field of AI and ML. AI is in the headlines daily, permeating culture and society and creating capabilities and experiences we have never witnessed before. And importantly, AI can transform how organizations reach decisions, maximize operational efficiency, and provide differentiated customer experience and value. But scaling AI and machine learning to realize their maximum potential is a highly complex process based on a set of standards, tools, and frameworks broadly known as machine learning operations, or MLOps. Much of MLOps is still being developed and is not yet an industry standard.

The quality of an organization's data directly impacts the effectiveness, accuracy, and overall impact of its machine learning deployments. High-quality data makes ML models more resilient, less expensive to maintain, and dependable. It offers the agility to react to data and model score drift in real time and makes refitting the model easier so it can re-learn and adjust its outputs accordingly. This requires organizations to create and execute a comprehensive data strategy incorporating data standards, platforms, and governance practices.

This starts with making sure that data scientists and ML engineers have standard tools, ML model development lifecycle (MDLC) standards, and platforms; making sure data is secure, standardized, and accessible; automating model monitoring and observability processes; establishing well-managed, human-centered processes like model governance, risk controls, peer review, and bias mitigation.


MLOps has a set of core objectives: develop a highly repeatable process over the end-to-end model lifecycle, from feature exploration to model training and deployment in production; hide the infrastructure complexity from data scientists and analysts so that they can focus on their models and optimization strategies; and develop MLOps in such a way that it scales alongside the number of models as well as modeling complexity without requiring an army of engineers. MLOps ensures consistency, availability, and data standardization across the entire ML model design, implementation, testing, monitoring, and management life cycle.

Today, every enterprise serious about effectively driving value with AI and ML is leveraging MLOps in some capacity. MLOps helps standardize and automate certain processes so engineers and data scientists can spend their time on better optimizing their models and business objectives. MLOps can also provide important frameworks for responsible practices to mitigate bias and risk and enhance governance.

Even as businesses increasingly acknowledge what AI can do for them, a seemingly relentless wave of adoption since 2017 began to plateau last year at around 50% to 60% of organizations, according to McKinsey's latest State of AI report. Why? I argue that MLOps programs that standardize ML deployment across organizations are beset by too many data quality issues.

Data quality issues can take several forms. For example, you often see noisy, duplicated, inconsistent, incomplete, outdated, or just flat-out incorrect data. Therefore, a big part of MLOps is monitoring data pipelines and source data because, as most of us know, AI and ML are only as good as the data that is collected, analyzed, and interpreted. Indeed, the most misunderstood part of MLOps is the link between data quality and the development of AI and ML models: incomplete, redundant, or outdated data leads to results nobody can trust or use effectively.
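To make those issue types concrete, a pipeline check for three of them (duplicates, missing values, and outdated records) can be sketched over plain records. The schema, field names, and staleness threshold here are assumptions for illustration only:

```python
# Sketch: counting a few of the data-quality issues named above
# (duplicates, missing values, outdated records). Fields are illustrative.
from datetime import date

def quality_report(rows, key, required, updated_field, max_age_days, today):
    """Count duplicate keys, rows with empty required fields, and stale rows."""
    seen, duplicates, incomplete, stale = set(), 0, 0, 0
    for row in rows:
        if row[key] in seen:
            duplicates += 1
        seen.add(row[key])
        if any(row.get(f) in (None, "") for f in required):
            incomplete += 1
        if (today - row[updated_field]).days > max_age_days:
            stale += 1
    return {"duplicates": duplicates, "incomplete": incomplete, "stale": stale}

rows = [
    {"id": 1, "amount": 10.0, "updated": date(2023, 8, 1)},
    {"id": 1, "amount": 10.0, "updated": date(2023, 8, 1)},   # duplicate key
    {"id": 2, "amount": None, "updated": date(2023, 8, 10)},  # missing value
    {"id": 3, "amount": 7.5, "updated": date(2022, 1, 1)},    # outdated
]

report = quality_report(rows, "id", ["amount"], "updated", 90, date(2023, 8, 15))
print(report)  # {'duplicates': 1, 'incomplete': 1, 'stale': 1}
```

Checks like these are typically run against both source data and pipeline outputs, so a feed that silently degrades is caught before it reaches a model.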

Unfortunately, with so much data being created every second of the day, organizations are losing the ability to manage and track all the information their ML models use to arrive at their decisions. A recent Forrester survey revealed that 73% of North American data management decision-makers find transparency, traceability, and explainability of data flows challenging. Over half (57%) said silos between data scientists and practitioners inhibit ML deployment.


Data transparency is a persistent challenge with ML because, to believe an algorithm's insights or conclusions, you must be able to verify the accuracy, lineage, and freshness of its data. You must understand the algorithms, the data used, and how the ML model makes decisions.

Doing all those things requires data traceability, which involves tracking the data lifecycle. Data can change as it moves across different platforms and applications from the point of ingestion. For example, multiple variations of merchant names or SKUs could be added to simple transaction data that must be sorted and accounted for before being used in ML models. Data must also be cleansed and transformed before reaching that point.

Rigorous traceability is also important for ensuring that data is timely and relevant. Data can quickly degrade or drift when real-world circumstances change, leading to unintended outcomes and decisions. During the pandemic, for instance, demand-planning ML models couldn't keep up with supply chain disruptions, leading to inventory shortages or excesses in various industries.
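One widely used score for catching this kind of drift is the population stability index (PSI), which compares a feature's distribution at training time against live traffic. This is a sketch of the technique, not the article's own method; the bin fractions and the 0.1 threshold are illustrative conventions:

```python
# Sketch: population stability index (PSI) as a simple drift detector.
# PSI near 0 means the live distribution matches training; larger values
# signal drift. A threshold around 0.1 is a common "investigate" cutoff.
import math

def psi(expected_fracs, actual_fracs, eps=1e-4):
    """Sum of (actual - expected) * ln(actual / expected) over bins."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # avoid log(0) on empty bins
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
live_same  = [0.25, 0.25, 0.25, 0.25]   # live traffic, unchanged
live_drift = [0.10, 0.15, 0.25, 0.50]   # live traffic after a real-world shift

print(psi(train_dist, live_same) < 0.1)    # True: no drift detected
print(psi(train_dist, live_drift) > 0.1)   # True: worth investigating
```

Computed continuously over production traffic, a score like this is what lets a team notice a pandemic-scale shift in weeks rather than discovering it through inventory failures.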

Successful companies also deploy sophisticated technology platforms for testing, launching, and inspecting data quality within ML models. They supplement those platforms with modern data quality, integration, and observability tools. They bolster everything with policies and procedures like governance, risk controls, peer review, and bias mitigation. In short, they give data scientists, data and ML engineers, model risk officers, and legal professionals the tools, processes, and platforms to do their jobs effectively.

When we have integrated data, governance tools, and AI platforms, MLOps processes work remarkably well. When someone builds an enterprise ML model and pushes it to production, they can begin tracking its entire lifecycle. They can monitor how and where data moves and where it lives, preventing data quality and drift issues. As such, they are more confident their ML models can guide business and operational decisions.


Engineers, data scientists, and model developers understand this. But it's up to them to help senior business leaders understand why investing in data tools, technologies, and processes is critical for MLOps and, ultimately, ML. Business depends on the technology imperatives of data and ML teams, and no enterprise organization can hope to compete without an AI/ML roadmap. As Forrester says, AI is an enterprise essential and is becoming critical for enterprises of all shapes and sizes. Indeed, the analyst firm predicts one in four tech executives will report to their boards on AI progress this year.

Part of that conversation must involve letting senior leadership know they cannot take their feet off their collective data and MLOps gas pedals. Today, many businesses' success is tied to MLOps and the technologies data science and ML teams deploy. Leaders must understand the importance of building around a foundation of data and a modern cloud stack. If they don't, they are likely to be outperformed by competitors that do.


Read more:

MLOps & Quality Data: The Path to AI Transformation - Spiceworks News and Insights


Curiosity finds evidence of wet and dry seasons on ancient Mars – The Register

The Mars Curiosity rover continues to make discoveries that shed light on the early days of the Red Planet, this time having found evidence that the unforgiving dust world once experienced seasonal weather patterns and flooding.

The evidence came from photographs snapped by the NASA bot of the dry, dusty Martian surface marked by a series of hexagonal shapes that indicate mud covered the surface before drying and cracking.

The patterns Curiosity spotted showed junction angles of around 120 degrees, otherwise known as Y junctions, that only occur after repeat cycles of wet and dry.

Shapes in the mud on Mars that suggest wet and dry seasons. Source: NASA/JPL-Caltech/MSSS/IRAP/Rapin et al/Nature

"In experiments, using clay layers, joint angles progressively tend towards 120° after 10 consecutive dryings with more cycles required to reach a homogeneous distribution centered at 120° and mature patterns of hexagonal shapes," scientists studying the snaps noted in a paper, which was published in Nature this week.

The cracks themselves are mere centimetres deep, which the boffins said suggests short wet-dry cycles "were maintained at least episodically in the long term," which would be yet another favorable condition for the past emergence of life on Mars.

"Wet periods bring molecules together while dry periods drive reactions to form polymers. When these processes occur repeatedly at the same location, the chance increases that more complex molecules formed there," said paper coauthor Patrick Gasda of the Los Alamos National Laboratory's Space Remote Sensing and Data Science group.

If the right organic molecules were present, "it's the perfect place for the formation of polymeric molecules required for life, including proteins and RNA," Gasda said.

Curiosity has made numerous water-related discoveries since arriving in the sulfate-bearing region of Mount Sharp in Mars' Gale Crater last year.

Shortly after it reached that area in October, Curiosity spotted popcorn-textured nodules containing minerals that suggested the prior presence of water, and then in February it clocked rippled rocks in the region that suggested the area was once lapped by waves. Further evidence of landslides potentially signaled rivers flowed down Mount Sharp from elevations beyond where Curiosity has traveled.

More recently, the newer Perseverance Mars rover found evidence of large, high-energy rivers flowing through the Jezero Crater area - another sign that Mars was once wet and possibly filled with life of some kind.

Repeated wet-dry cycles like those discovered by Curiosity, the scientists said, are another nail in the coffin for theories that Mars experienced a "monotonically declining water supply in the aftermath of an asteroid impact or a single volcanic eruption."

Recent theories have suggested that early Martian microbes may have changed the atmosphere drastically enough that Mars cooled until no longer able to support life. Drastic cooling in turn caused Mars' core to freeze, its magnetic field to dissipate, and its atmosphere to evaporate, or so the theory goes.

Visit link:

Curiosity finds evidence of wet and dry seasons on ancient Mars - The Register


Best Python Tools for Building Generative AI Applications Cheat Sheet – KDnuggets

KDnuggets has released an insightful new cheat sheet highlighting the top Python libraries for building generative AI applications.

As readers are no doubt aware, generative AI is one of the hottest areas in data science and machine learning right now. Models like ChatGPT have captured public imagination with their ability to generate remarkably high-quality text from simple prompts.

Python has emerged as the go-to language for developing generative AI applications thanks to its versatility, vast ecosystem of libraries, and easy integration with popular AI frameworks like PyTorch and TensorFlow. This new cheat sheet from KDnuggets provides a quick overview of the key Python libraries data scientists should know for building generative apps, from text generation to human-AI chat interfaces and beyond.

For more on which Python tools to use for generative AI application building, check out our latest cheat sheet.

There are many open source Python libraries and frameworks available that enable developers to build innovative Generative AI applications, from image and text generation to Autonomous AI.

Some highlights covered include OpenAI for accessing models like ChatGPT, Transformers for training and fine-tuning, Gradio for quickly building UIs to demo models, LangChain for chaining multiple models together, and LlamaIndex for ingesting and managing private data.

Overall, this cheat sheet packs a wealth of practical guidance into one page. Both beginners looking to get started with generative AI in Python and experienced practitioners can benefit from having this condensed reference to the best tools and libraries at their fingertips. The KDnuggets team has done an excellent job compiling and visually organizing the key information data scientists need to build the next generation of AI applications.

Check it out now, and check back soon for more.

Continue reading here:

Best Python Tools for Building Generative AI Applications Cheat Sheet - KDnuggets


Unleashing GenAI: Course5’s Innovative Approach to Data Analytics … – CXOToday.com

CXOToday has engaged in an exclusive interview with Jayachandran Ramachandran, Senior VP (AI Labs), Course5 Intelligence

Q 1. How does Course5 leverage AI and GenAI technologies to enhance its services?

Course5 leverages AI and GenAI technologies to enhance its services by using advanced analytics, natural language processing, computer vision, and generative AI to provide data-driven insights and solutions for various business challenges and opportunities.

Course5 uses OpenAI's GPT models to power its enterprise analytics platforms, such as Course5 Compete and Course5 Discovery. Compete is a market and competitive intelligence platform that helps brands drive business growth and strategy. Discovery is an Augmented Analytics Platform that lets users ask questions in natural language and consume insights from multiple data sources. Course5 also has an AI Lab that focuses on research and product development in artificial intelligence.

Q 2. What sets Course5 apart from other data analytics companies in the market?

We are a pure-play data analytics and insights company focused on helping organizations drive digital transformation using artificial intelligence (AI), advanced analytics, and insights.

Course5's understanding of the omnichannel customer journey, digital business models, and its clients' businesses, combined with experience across analytical disciplines, application of AI technologies, and consultative problem-solving capabilities, helps create a powerful competitive advantage for Course5 and its clients. The integration of Course5's solutions with its clients' critical business workflows, and a focus on driving end-user adoption and impact for the client organization rather than just deploying Analytics and Insights initiatives, ensures there is significant and sustainable value addition for clients.

With a multi-pronged AI strategy and focus on creating IP-based products and accelerators, Course5 is able to scale impact, drive non-linear growth, and compete effectively across the entire data and advanced analytics landscape. Course5 is also one of the few companies to get IP-based recognition from leading industry analysts like Gartner and Forrester.

The Course5 business model of driving the services business with AI-based IP incorporates domain-specific and functional nuances to provide contextualized business solutions. Course5's competitive advantage comes from a mix of factors:

Course5 Intelligence has been recognized by leading industry analysts like Gartner and Forrester for its Analytics and AI capabilities and is one of the few companies to be recognized for its proprietary AI-based platforms. Continuous investments in Course5's AI Labs and planned deep-tech acquisitions will continue to help the company drive incomparable value and competitive advantage for clients.

Q 3. Are there any specific industries or domains where Course5 has strong expertise? Are you exploring new sectors to venture into at the moment?

Course5 has deep domain knowledge and expertise across various industries, such as technology, media and telecom, manufacturing, life sciences, consumer packaged goods, and retail.

Besides innovation and customer-centricity, Course5 strives to create synergies for business-oriented solutions. Along with the industries and domains mentioned above, we also offer solutions to banking and financial services, travel and hospitality, automotive, and education.

Q 4. How do you approach data quality and data cleansing to ensure accurate and reliable results?

Course5 is a data science company that provides solutions for various industries and domains. We have a rigorous approach to data quality and data cleansing to ensure accurate and reliable results.

To ensure data quality, we follow several steps:

Q 5. Recently, Course5 integrated GenAI into its product Discovery. How difficult or easy is it to integrate GenAI into a company's existing data analytics process?

Course5 Discovery brings the power of generative AI technologies and advanced AI and analytics models into a platform that delivers actionable insights, enabling businesses to make faster, more accurate, data-backed decisions that drive substantial and direct impact on business KPIs. A combination of deployment approaches, such as prompt engineering and model fine-tuning, is required to achieve the best results. Generative AI is still evolving and requires adequate guardrails so that insights are trustworthy. A robust process is needed to validate data, models, prompt inputs, and completions so that generative AI solutions do not introduce new risks into the business process.
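The kind of guardrail described here can be as simple as rule-based checks on prompts and completions before they reach users. The sketch below is illustrative only (the patterns, limits, and function names are hypothetical, not Course5's implementation):

```python
# Sketch: lightweight rule-based guardrails for prompt inputs and model
# completions. Patterns and limits are hypothetical examples.
import re

BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. SSN-like strings
MAX_PROMPT_CHARS = 2000

def check_prompt(prompt):
    """Reject prompts that exceed a size budget before they hit the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt too long"
    return True, "ok"

def check_completion(text):
    """Block completions that leak patterns the business must not emit."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text):
            return False, "completion contains a blocked pattern"
    return True, "ok"

ok, reason = check_completion("The customer's ID is 123-45-6789.")
print(ok, reason)  # False completion contains a blocked pattern
```

In practice such checks sit alongside model fine-tuning and human review; the point is that validation runs on both sides of the model call, not only on its output.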

Q 6. Course5 has its own AI Labs R&D. Could you elaborate on some of the noteworthy work being done there?

AI Labs is at the core of Course5's product and research strategy, allowing us to pursue strategic bets and create a True North for the company and its clients. By incorporating the newest artificial intelligence technologies into our products and solutions, such as Course5 Compete, Course5 Discovery, and multiple other accelerators, we aim to automate, gain actionable insights, improve experiences, gain competitive advantage, and differentiate our clients' businesses from the competition.

We accomplish this by meticulously scanning a plethora of AI frameworks, models, and algorithms, choosing the appropriate ones for rapid experimentation, curating them for business alignment, and making them compatible with enterprises. We apply best practices from design thinking and lean and agile frameworks in our product development methodology to bring in client-centricity, quick market validation, and reduced time-to-market.

Q 7. How can solutions providers such as yourself ensure the ethical use of AI in data analytics processes?

The Course5 team understands the influence and impact of AI decisions on people's lives, as well as the enterprise's responsibility to manage the possible ethical and sociotechnical implications that result. The Course5 Governance framework ensures the models are built with Responsible AI practices covering security, privacy, fairness, explainability, and interpretability.

Course5 is committed to the AI community for the safe and responsible development and use of generative AI models that are more powerful than any released technologies. Our ethical and Responsible AI practices are strongly focused on creating an ecosystem of AI for Good. These commitments build on an approach to responsible and secure AI development that will help pave the way for a future that enhances AI's benefits and minimizes its risks.

More:

Unleashing GenAI: Course5's Innovative Approach to Data Analytics ... - CXOToday.com
