
The GOP May Be Losing on Abortion, but It's Winning Extremist Abortion Bans Anyway – The Nation

A protester is taken into custody during a pro-choice protest in Washington, DC, on October 8, 2022, following the overturn of Roe v. Wade. (Nathan Posner / Anadolu Agency via Getty Images)

The women, and men, of Wisconsin who elected pro-choice state Supreme Court Justice Janet Protasiewicz by 11 points last Tuesday were still celebrating their hard work on Friday night when an unelected federal district judge in Amarillo, Tex., took away their right to seek the medication-abortion drug mifepristone. Judge Matthew Kacsmaryk happens to have been chosen by Donald Trump, who was indicted the same day Protasiewicz was elected on charges that he illegally covered up paying hush money to a porn star and a Playboy model he screwed, to keep the news from voters right before the 2016 election.

To recap: Pro-choice forces won an election in Wisconsin, but nonetheless may have lost key abortion rights (Kacsmaryk stayed his decision for a week to give the government time to respond) because Trump tried to steal an election (and likely succeeded). All over the country, small-d democratic voters, who make up a majority of the nation, are having their rights taken away by a minority of ruthless authoritarians.

You don't need me to break down the legal gobbledegook, as my colleague Elie Mystal described Kacsmaryk's ruling here, that the judge used to find that the FDA wrongly concluded mifepristone was safe 23 years ago. This former crisis pregnancy center board member, whose toddler used to wear an "I survived Roe v. Wade" T-shirt, spewed a decision full of junk science and made-up law. He took as a given that a fetus is an "unborn person," though fetal personhood laws have failed even in red states. He referred derisively to doctors who provide abortion as "abortionists," the language of the often violent anti-abortion right. And he promoted desperate, discredited myths about the tragedy of abortion spewed by those same forces, insisting that the crackpot coalition that brought suit calling mifepristone unsafe had standing (it doesn't) because the abortion victims who might have done so instead are still too traumatized by the tragedy to have the sense to bring their sorrows to Kacsmaryk.

Like other smart legal analysts, Mystal sees the case headed to the Supreme Court this week, especially because a federal judge in Washington challenged Kacsmaryk's ruling. That should reassure nobody. Of course, the court should have trouble with a federal judge's deciding that the Food and Drug Administration made a mistake approving a prescription drug 23 years ago, as well as with the technical issue of whether the coalition of Christo-medical zealots that brought the case had standing to do so. But as Mystal writes, "It's difficult to predict what the six conservative justices who have already thrown out Roe v. Wade and the 50 years of settled law that goes with it will do with these cases." It sure is.

Just before Kacsmaryk's decision came down, I was reading Michelle Goldberg's excellent New York Times column, "The Abortion Ban Backlash Is Starting to Freak Out Republicans." It's true that, from the 2022 midterms through the Wisconsin vote, the GOP has lost crucial elections largely because of its abortion extremism, and Republican forces from the Wall Street Journal editorial page to anti-choice Ann Coulter are warning party leaders to moderate. The week before, Rebecca Traister's excellent "Abortion Wins Elections" New York magazine cover story had this indelible quote from pollster Celinda Lake: "I don't think Democrats have fully processed that this country is now 10 to 15 percent more pro-choice than it was before Dobbs in state after state and national data."

I believe Lake. And yet, as both Goldberg and Traister acknowledged, the GOP is continuing to push ever-more-extreme anti-abortion legislation anyway. On the heels of Protasiewicz's victory, Idaho criminalized helping a minor secure abortion care, making it a felony punishable by up to five years in prison. Good news: On Wednesday, Michigan Governor Gretchen Whitmer signed a bill repealing the state's 1931 abortion ban. Bad news: On Thursday, Tennessee's GOP supermajority expelled two young Black legislators from the state assembly for their role in a noisy capitol protest of the state's inaction in the face of the gun murders of six people, three 9-year-olds and three staffers, at Nashville's Christian Covenant school.

The Tennessee outrage doesn't fit neatly into this story about abortion, but it fits. This is a story about ever-more-dangerous right-wing forces defying democracy to accomplish their goals. I should note: Protasiewicz's victory was also attributable to her promise to preside over fair legislative maps in Wisconsin, for the first time since a GOP majority under Governor Scott Walker took office in 2011. It gerrymandered the state so severely that in 2018 Democrats got 200,000 more votes for the assembly than Republicans but the GOP held almost two-thirds of the seats.

In the wake of Trump's election, new candidates, disproportionately women, surged into elected office. Democrats put new attention into reclaiming statehouses and made progress in 2018; they lost focus during the presidential election year of 2020 (and also faced a deadly pandemic) and lost seats, but they prevailed in 2022, flipping legislative bodies in Michigan, Minnesota, and Pennsylvania and winning crucial victories elsewhere. Democracy is prevailing in those states. Fortunately, local city and county officials in Nashville and Memphis are expected to restore Justin Jones and Justin Pearson to their legislative seats. Democracy might still prevail in Tennessee.

But not really. In Tennessee, 80 percent of voters support abortion rights under some conditions, and large majorities of Tennessee parents favor stricter gun laws (as shown by a poll released just before the Covenant murders). Yet the Republicans who run the state ignore them. (They recently added a few exceptions to the state's total abortion ban, but advocates say intimidated state doctors are unlikely to observe those.) Red-state voters support a lot of the same policies blue-state voters do, and yet they elect Republicans for reasons of identity, obstinacy, and, yes, racism.

So at best we're going to have an abortion archipelago: Abortion care will be legal in blue states and mostly illegal in red ones. Unless the Supreme Court, having said in its Dobbs decision that the issue will return to the states where it rightly belongs, sides with Kacsmaryk and decides to outlaw a drug that is safer than Tylenol, Viagra, and, of course, childbirth. (Most women opt for medication abortions, if they need one.) Such a ruling will leave them with less-safe options, even in blue states.

This is not how democracy is supposed to work, but this is how ours is working now. Still, just as Dobbs provoked a voter backlash that has doomed Republicans in race after race, so would a Supreme Court decision to outlaw mifepristone nationwide. (We can expect the same coalition of lying Republicans and lazy reporters to tell us we're wrong about that, as they did in 2022, but this time we'll ignore them.) The only answer is to keep working, keep running for office, keep voting.

It sucks: Hey Wisconsin voters, you did your job to protect abortion rights; now we need even more from you! But I don't see an alternative. After years of planning, and millions, perhaps billions, in dark money and other sleazy funding, Republicans have taken over state legislatures and many courts, including the highest court. Democracy won't be safe until we take it all back.

See the article here:
The GOP May Be Losing on Abortion, but It's Winning Extremist Abortion Bans Anyway - The Nation


How machine learning can help crack the IT security problem – VentureBeat


Less than a decade ago, the prevailing wisdom was that every business should undergo a digital transformation to boost internal operations and improve client relationships. Next, they were told that cloud workloads are the future and that elastic compute solutions enabled them to operate in an agile and more cost-effective manner, scaling up and down as needed.

While digital transformations and cloud migrations are undoubtedly smart decisions that all organizations should make (and those that haven't yet, what are you doing!), security systems meant to protect such IT infrastructures haven't been able to keep pace with threats capable of undermining them.

As internal business operations become increasingly digitized, boatloads more data are being produced. With data piling up, IT and cloud security systems come under increased pressure because more data leads to greater threats of security breaches.

In early 2022, a cyber extortion gang known as Lapsus$ went on a hacking spree, stealing source code and other valuable data from prominent companies, including Nvidia, Samsung, Microsoft and Ubisoft. The attackers had originally exploited the companies' networks using phishing attacks, which led to a contractor being compromised, giving the hackers all the access the contractor had via Okta (an ID and authentication service). Source code and other files were then leaked online.


This attack and numerous other data breaches target organizations of all types, ranging from large multinational corporations to small startups and growing firms. Unfortunately, in most organizations, there are simply too many data points for security engineers to locate, meaning current systems and methods to safeguard a network are fundamentally flawed.

Additionally, organizations are often overwhelmed by the various available tools to tackle these security challenges. Too many tools means organizations invest an exorbitant amount of time and energy, not to mention resources, in researching, purchasing and then integrating and running these tools. This puts added stress on executives and IT teams.

With so many moving parts, even the best security engineers are left helpless in trying to mitigate potential vulnerabilities in a network. Most organizations simply don't have the resources to make cybersecurity investments.

As a result, they are subject to a double-edged sword: Their business operations rely on the highest levels of security, but achieving that comes at a cost that most organizations simply can't afford.

A new approach to computer security is desperately needed to safeguard businesses' and organizations' sensitive data. The current standard approach comprises rules-based systems, usually with multiple tools to cover all bases. This practice leaves security analysts wasting time enabling and disabling rules and logging in and out of different systems in an attempt to establish what is and what isn't considered a threat.

The best option for organizations dealing with these ever-present pain points is to leverage machine learning (ML) algorithms. Rather than maintaining rules by hand, algorithms train a model on observed behaviors, providing any business or organization a secure IT infrastructure. A tailored ML-based SaaS platform that operates efficiently and in a timely manner must be the priority of any organization or business seeking to revamp its security infrastructure.
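To make the idea concrete, here is a minimal sketch of behavior-based anomaly detection using scikit-learn's IsolationForest. The file name and feature columns are hypothetical, not taken from any particular product; a real platform would use far richer telemetry.

```python
# Minimal sketch of behavior-based anomaly detection (illustrative only).
# Assumes scikit-learn and a CSV of per-account activity features; the file
# and column names (login_count, bytes_uploaded, distinct_hosts, account_id)
# are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

events = pd.read_csv("account_activity.csv")          # hypothetical export from a SIEM
features = events[["login_count", "bytes_uploaded", "distinct_hosts"]]

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(features)                                    # learn what "normal" behavior looks like

events["anomaly_score"] = model.decision_function(features)   # lower = more anomalous
events["flagged"] = model.predict(features) == -1             # -1 marks outliers
print(events.loc[events["flagged"], ["account_id", "anomaly_score"]])
```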

Cloud-native application protection platforms (CNAPP), a security and compliance solution, can empower IT security teams to deploy and run secure cloud native applications in automated public cloud environments. CNAPPs can apply ML algorithms on cloud-based data to discover accounts with unusual permissions (one of the most common and undetected attack paths) and uncover potential threats including host and open source vulnerabilities.

ML can also knit together many anomalous data points to create rich stories of what's happening in a given network, something that would take a human analyst days or weeks to uncover.

These platforms leverage ML through two primary practices. Cloud security posture management (CSPM) handles platform security by monitoring and delivering a full inventory to identify any deviations from customized security objectives and standard frameworks.

Cloud infrastructure entitlements management (CIEM) focuses on identity security by understanding all possible access to sensitive data through every identity's permissions. On top of this, host and container vulnerabilities are also taken into account, meaning correct urgency can be applied to ongoing attacks. For example, anomalous behavior seen on a host with known vulnerabilities is far more pressing than on a host without known vulnerabilities.
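As a rough illustration of that prioritization logic, the sketch below weights a behavioral anomaly score by the host's vulnerability exposure. The data shape, column names, and weighting formula are assumptions for the example, not any vendor's actual scoring.

```python
# Illustrative sketch: anomalous behavior on a vulnerable host is scored as
# more urgent than the same anomaly on a clean host. All values are made up.
import pandas as pd

alerts = pd.DataFrame({
    "host":          ["web-01", "db-02", "build-03"],
    "anomaly_score": [0.91, 0.40, 0.85],   # from a behavioral model, 0 = normal, 1 = highly anomalous
    "known_cves":    [3, 0, 1],            # open vulnerabilities reported on the host
})

# Weight the behavioral signal by vulnerability exposure.
alerts["urgency"] = alerts["anomaly_score"] * (1 + alerts["known_cves"])
print(alerts.sort_values("urgency", ascending=False))
```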

Another ML-based SaaS option is to outsource the security operations center (SOC) and security information and event management (SIEM) function to a third party and benefit from their ML algorithms. With dedicated security analysts investigating any and all threats, SaaS can use ML to handle critical security functions such as network monitoring, log management, single sign-on (SSO) and endpoint alerts, as well as access gateways.

SaaS ML platforms offer the most effective way to cover all the security bases. By applying ML to all behaviors, organizations can focus on their business objectives while algorithms pull all the necessary context and insights into a single security platform.

Running the complex ML algorithms needed to learn a baseline of what is normal in a given network and to assess risk is challenging even if an organization has the personnel to make it a reality. For the majority of organizations, using third-party platforms that have already built algorithms to be trained on their data produces a more scalable and secure network infrastructure, and does so far more conveniently and effectively than home-grown options.

Relying on a trusted third party to host a SaaS ML platform enables organizations to dedicate more time to internal needs, while the algorithms study the network's behavior to provide the highest levels of security.

When it comes to network security, relying on a trusted third party is no different than hiring a locksmith to repair the locks on your home. Most of us don't know how the locks on our homes work, but we trust an outside expert to get the job done. Turning to third-party experts to run ML algorithms gives businesses and organizations the flexibility and agility they need to operate in today's digital environment.

Maximizing this new approach to security allows all types of organizations to overcome their complex data problems without having to worry about the resources and tools needed to protect their network, providing unparalleled peace of mind.

Ganesh the Awesome (Steven Puddephatt) is a technical sales architect at GlobalDots.


More:
How machine learning can help crack the IT security problem - VentureBeat


AI and Machine Learning in Healthcare for the Clueless – Medscape

Recorded March 6, 2023. This transcript has been edited for clarity.

Robert A. Harrington, MD: Hi. This is Bob Harrington on theheart.org | Medscape Cardiology, and I'm here at the American College of Cardiology meetings in New Orleans, having a great time, by the way. It's really fun to be back live, in person, getting to see friends and colleagues, seeing live presentations, etc. If you've not been to a live meeting yet over the course of the past couple of years, please do start coming again, whether it's American College of Cardiology, American Heart Association, or European Society of Cardiology. It's fantastic.

Putting that aside, I've been learning many things at this meeting, particularly around machine learning, artificial intelligence (AI), and some of the advanced computational tools that people in the data-science space are using.

I'm fortunate to have an expert and, really, a rising thought leader in this field, Dr Jenine John. Jenine is a machine-learning research fellow at Brigham and Women's Hospital, working in Calum MacRae's research group.

What she talked about on stage this morning is what you have to know about this whole field. I thought we'd go through some of the basic concepts of data science, what machine learning is, what AI is, and what neural networks are.

How do we start to think about this? As practitioners, we're going to be faced with how to incorporate some of this into our practice. You're seeing machine-learning algorithms put into your clinical operations. You're starting to see ways that people are thinking about, for example, "Can the machine read the echocardiogram as well as we can?" What's appropriate for the machine? What's appropriate for us? What's the oversight of all of this?

We'll have a great conversation for the next 12-20 minutes and see what we can all learn together. Jenine, thank you for joining us here today.

Jenine John, MD: Thank you for having me.

Harrington: Before we get into the specifics of machine learning and what you need to know, give me a little bit of your story. You obviously did an internal medicine residency. You did a cardiology fellowship. Now, you're doing an advanced research fellowship. When did you get bitten by the bug to want to do data science, machine learning, etc.?

John: It was quite late, actually. After cardiology fellowship, I went to Brigham and Women's Hospital for a research fellowship. I started off doing epidemiology research, and I took classes at the public health school.

Harrington: The classic clinical researcher.

John: Exactly. That was great because I gained a foundation in epidemiology and biostatistics, which I believe is essential for anyone doing clinical research. In 2019, I was preparing to write a K grant, and for my third aim, I thought, "Oh, I want to make this complex model that uses many variables. This thing called machine learning might be helpful." I basically just knew the term but didn't know much about it.

I talked to my program director who led me to Dr Rahul Deo and Dr Calum MacRae's group that's doing healthcare AI. Initially, I thought I would just collaborate with them.

Harrington: Have their expertise brought into your grant and help to elevate the whole grant? That's the typical thing to do.

John: Exactly. As I learned a bit more about machine learning, I realized that this is a skill set I should really try to develop. I moved full-time into that group and learned how to code and create machine-learning models specifically for cardiac imaging. Six months later, the pandemic hit, so everything took a shift again.

I believe it's a shift for the better because I was exposed to everything going on in digital health and healthcare startups. There was suddenly an interest in monitoring patients remotely and using tech more effectively. I also became interested in how we are applying AI to healthcare and how we can make sure that we do this well.

Harrington: There are a couple of things that I want to expand on. Maybe we'll start this way. Let's do the definitions. How would you define AI and its role in medicine? And then, talk about a subset of that. Define machine learning for the audience.

John: Artificial intelligence and machine learning, the two terms are used pretty much synonymously within healthcare, because when we talk about AI in healthcare, really, we're talking about machine learning. Some people use the term AI differently. They feel that it's only if a system is autonomously thinking independently that you can call it AI. For the purposes of healthcare, we pretty much use them synonymously.

Harrington: For what we're going to talk about today, we'll use them interchangeably.

John: Yes, exactly.

Harrington: Define machine learning.

John: Machine learning is when a machine uses data and learns from the data. It picks up patterns, and then, it can basically produce output based on those patterns.

Harrington: Give me an example that will resonate with a clinical audience. You're an imager, and much of the work so far has been in imaging.

John: Imaging is really where machine learning shines. For example, you can use machine learning on echocardiograms, and you can use it to pick up whether this patient has valvular disease or not. If you feed an AI model enough echocardiograms, it'll start to pick up the patterns and be able to tell whether this person has valvular disease or not.

Harrington: The group that you're working with has been very prominent in being able to say whether they have hypertrophic cardiomyopathy, valve disease, or amyloid infiltrative disease.

There are enough data there that the machine starts to recognize patterns.

John: Yes.

Harrington: You said that you were, at the Harvard School of Public Health, doing what I'll call classic clinical research training. I had the same training. I was a fellow 30-plus years ago in the Duke Databank for Cardiovascular Diseases, and it was about epidemiology and biostatistics and how to then apply those to the questions of clinical research.

You were doing very similar things, and you said something this morning in your presentation that stuck with me. You said you really need to understand these things before you make the leap into trying to understand machine learning. Expand on that a little bit.

John: I think that's so important because right now, what seems to happen is you have the people, the data scientists and clinicians, and they seem to be speaking different languages. We really need more collaboration and getting on the same page. When clinicians go into data science, I think the value is not in becoming pure data scientists and learning to create great machine-learning models. Rather, it's bringing that clinical thinking and that clinical research thinking, specifically, to data science. That's where epidemiology and biostatistics come in because you really need to understand those concepts so that you understand which questions you should be asking. Are you using the right dataset to ask those questions? Are there biases that could be present?

Harrington: Every week, as you know, we all pick up our journals, and there's a machine-learning paper in one of the big journals all the time. Some of the pushback you'll hear, whether it's on social media or in letters to the editors, is why did you use machine learning for this? Why couldn't you use classical logistic regression?

One of the speakers in your session, I thought, did a nice job of that. He said that often, standard conventional statistics are perfectly fine. Then there are some instances where the machine is really better, and imaging is a great example. Would you talk to the audience a little bit about that?

John: I see it more as a continuum. I think it's helpful to see it that way because right now, we see traditional biostatistics and machine learning as completely different. Really, it's a spectrum of tools. There are simple machine-learning methods where you can't really differentiate much from statistical methods, and there's a gray zone in the middle. For simpler data, such as tabular data, maybe.

Harrington: Give the audience an example of tabular data.

John: For example, if you have people who have had a myocardial infarction (MI), and then you have characteristics of those individuals, such as age, gender, and other factors, and you want to use those factors to predict who gets an MI, in that instance, traditional regression may be best. When you get to more complex data, that's where machine learning really shines. That's where it gets exciting because they are questions that we haven't been able to ask before with the methods that we have. Those are the questions that we want to start using machine learning to answer.
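A minimal, hypothetical sketch of the tabular case described above, using scikit-learn's ordinary logistic regression; the cohort file and column names are invented for illustration and assume the predictors are already numeric or encoded.

```python
# Hypothetical sketch: predicting MI from a few patient characteristics with
# plain logistic regression, the "traditional regression" end of the spectrum.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("patients.csv")                        # hypothetical cohort table
X = df[["age", "sex", "systolic_bp", "ldl", "smoker"]]  # invented, pre-encoded predictors
y = df["had_mi"]                                        # 1 = had a myocardial infarction

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```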

Harrington: We've all seen papers again over the past few years. The Mayo Group has published a series of these about information that you can derive from the EKG. You can derive, for example, potassium levels from the EKG. Not the extremes that we've all been taught, but subtle perturbations. I think I knew this, but I was still surprised to hear it when one of your co-speakers said that there are over 30,000 data points in the typical EKG.

There's no way you can use conventional statistics to understand that.

John: Exactly. One thing I was a little surprised to see is that machine learning does quite well with estimating the age of the individual on the EKG. If you show a cardiologist an EKG, we could get an approximate estimate, but we won't be as good as the machine. Modalities like EKG and echocardiogram, which have so many more data points, are where the machine can find patterns that even we can't figure out.

Harrington: The secret is to ingest a huge amount of data. One of the things that people will ask me is, "Well, why is this so hot now?" It's hot now for a couple of reasons, one of which is that there's an enormous amount of data available. Almost every piece of information can be put into zeros and ones. Then there's cloud computing, which allows the machine to ingest this enormous amount of information.

You're not going to tell the age of a person from a handful of EKGs. It's thousands to millions of EKGs that machines evaluated to get the age. Is that fair?

John: This is where we talk about big data because we need large amounts of data for the machine to learn how to interpret these patterns. It's one of the reasons I'm excited about AI because it's stimulating interest in multi-institution collaborations and sharing large datasets.

We're annotating, collecting, and organizing these large multi-institutional datasets that can be used for a variety of purposes. We can use the full range of analytic approaches, machine learning or not, to learn more about patients and how to care for them.

Harrington: I've heard both Calum and Rahul talk about how they can get echocardiograms, for example, from multiple institutions. As the machine gets better and better at reading and interpreting the echocardiograms or looking for patterns of valvular heart disease, they can even take a more limited imaging dataset and apply what they've learned from the larger expanded dataset, basically to improve the reading of that echocardiogram.

One of the things it's going to do, I think, is open up the opportunity for more people to contribute their data beyond the traditional academics.

John: Because so much data are needed for AI, there's a role for community centers and other institutions to contribute data so that we can make robust models that work not only in a few academic centers but also for the majority of the country.

Harrington: There are two more topics I want to cover. We've been, in some ways, talking about the hope of what we're going to use this for to make clinical medicine better. There's also what's been called the hype, the pitfalls, and the perils. Then I want to get into what do you need to know, particularly if you're a resident fellow, junior faculty member.

Let's do the perils and the hype. I hear from clinicians, particularly clinicians of my generation, that this is just a black box. How do I know it's right? People point to, for example, the Epic Sepsis Model, which failed miserably, with headlines all over the place. They worry about how they know whether it's right.

John: That's an extremely important question to ask. We're still in the early days of using AI and trying to figure out the pitfalls and how to avoid them. I think it's important to ask along the way, for each study, what is going on here. Is this a model that we can trust and rely on?

I also think that it's not inevitable that AI will transform healthcare just yet because we are so early on, and there is hype. There are some studies that aren't done well. We need more clinicians understanding machine learning and getting involved in these discussions so that we can lead the field and actually use the AI to transform healthcare.

Harrington: As you push algorithms into the healthcare setting, how do we evaluate them to make sure that the models are robust, that the data are representative, and that the algorithm is giving us, I'll call it, the right answer?

John: That's the tough part. I think one of the tools that's important is a prospective trial. Not only creating an algorithm and implementing right away but rather studying how it does. Is it actually working prospectively before implementing it?

We also need to understand that in healthcare, we can't necessarily accept the black box. We need explainability and interpretability, to get an understanding of the variables that are used, how they're being used within the algorithm, and how they're being applied.

One example that I think is important is that Optum created a machine-learning model to predict who was at risk for medical complications and high healthcare expenditures. The model did well, so they used the model to determine who should get additional resources to prevent these complications.

It turns out that African Americans were utilizing healthcare less, so their healthcare expenditure was lower. Because of that, the algorithm was saying these are not individuals who need additional resources.

Harrington: It's classic confounding.

John: There is algorithmic bias that can be an issue. That's why we need to look at this as clinical researchers and ask, "What's going on here? Are there biases?"

Harrington: One of the papers over the past couple of years came from one of our faculty members at Stanford, which looked at where the data are coming from for these models. It pointed out that there are many states in this country that contribute no data to the AI models.

That's part of what you're getting at, and that raises all sorts of equity questions. You're in Massachusetts. I'm in California. There is a large amount of data coming from those two states. From Mississippi and Louisiana, where we are now, much less data. How do we fix that?

John: I think we fix it by getting more clinicians involved. I've met so many passionate data scientists who want to contribute to patient care and make the world a better place, but they can't do it alone. They can't recruit health centers in Mississippi. We need clinicians and clinical researchers who will say, "I want to help with advancing healthcare, and I want to contribute data so that we can make this work." Currently, we have so many advances in some ways, but AI can open up so many new opportunities.

Harrington: There's a movement to assure that the algorithm is fair, right? That's the acronym that's being used to make sure that the data are representative of the populations that you're interested in and that you've eliminated the biases.

I'm always intrigued. When you talk to your friends in the tech world, they say, "Well, we do this all the time. We do A/B testing." They just constantly run through algorithms through A/B testing, which is a randomized study. How come we don't do more of that in healthcare?

John: I think it's complicated because we don't have the systems to do that effectively. If we had a system where patients come into the emergency room and we're using AI in that manner, then maybe we could start to incorporate some of these techniques that the tech industry uses. That's part of the issue. One is setting up systems to get the right data and enough data, and the other is how do we operationalize this so that we can effectively use AI within our systems and test it within our systems.

Harrington: As a longtime clinical researcher and clinical trialist, I've always asked why it is that clinical research is separate from the process of clinical care.

If we're going to effectively evaluate AI algorithms, for example, we've got to break down those barriers and bring research more into the care domain.

John: Yes. I love the concept of a learning health system and incorporating data and data collection into the clinical care of patients.

Harrington: Fundamentally, I believe that the clinicians make two types of decisions, one of which is that the answer is known. I always use the example of aspirin if you're having an ST-segment elevation MI. That's known. It shouldn't be on the physician to remember that. The system and the algorithms should enforce that. On the other hand, for much of what we do, the answer is not known, or it's uncertain.

Why don't we allow ongoing randomization to help us decide what is appropriate? We're not quite there yet, but I hope before the end of my career that we can push that closer together.

All right. Final topic for you. You talked this morning about what you need to know. Cardiology fellows and residents must approach you all the time and say, "Hey, I want to do what you do," or, "I don't want to do what you do because I don't want to learn to code, but I want to know how to use it down the road."

What do you tell students, residents, and fellows that they need to know?

John: I think all trainees and all clinicians, actually, should understand the fundamentals of AI because it is being used more and more in healthcare, and we need to be able to understand how to interpret the data that are coming out of AI models.

I recommend looking up topics as you go along. Something I see is clinicians avoid papers that involve AI because they feel they don't understand it. Just dive in and start reading these papers, because most likely, you will understand most of it. You can look up topics as you go along.

There's one course I recommend online. It's a free course through Coursera called AI in Healthcare Specialization. It's a course by Stanford, and it does a really good job of explaining concepts without getting into the details of the coding and the math.

Other than that, for people who want to get into the coding, first of all, don't be afraid to jump in. I recently talked to a friend who is a gastroenterologist, and she said, "I'd love to get into AI, but I don't think I'd be good at it." I asked, "Well, why not?" She said, "Because men tend to be good at coding."

I do not think that's true.

Harrington: I don't think that's true either.

John: It's interesting because we're all affected to some extent by the notions that society has instilled in us. Sometimes it takes effort to go beyond what you think is the right path or what you think is the traditional way of doing things, and ask, "What else is out there? What else can I learn?"

If you do want to get into coding, I would say that it's extremely important to join a group that specializes in healthcare AI because there are so many pitfalls that can happen. There are mistakes that could be made without you realizing it if you try to just learn things on your own without guidance.

Harrington: Like anything else, join an experienced research group that's going to impart to you the skills that you need to have.

John: Exactly.

Harrington: The question about women being less capable coders than men, we both say we don't believe that, and the data don't support that. It's interesting. At Stanford, for many years, the most popular major for undergraduate men has been computer science. In the past few years, it's also become the most popular major for undergrad women at Stanford.

We're starting to see, to your point, that maybe some of those attitudes are changing, and there'll be more role models like you to really help that next generation of fellows.

Final question. What do you want to do when you're finished?

John: My interests have changed, and now I'm veering away from academia and more toward the operational side of things. As I get into it, my feeling is that currently, the challenge is not so much creating the AI models but rather, as I said, setting up these systems so that we can get the right data and implement these models effectively. Now, I'm leaning more toward informatics and operations.

I think it's an evolving process. Medicine is changing quickly, and that's what I would say to trainees and other clinicians out there as well. Medicine is changing quickly, and I think there are many opportunities for clinicians who want to help make it happen in a responsible and impactful manner.

Harrington: And get proper training to do it.

John: Yes.

Harrington: Great. Jenine, thank you for joining us. I want to thank you, the listeners, for joining us in this discussion about data science, artificial intelligence, and machine learning.

My guest today on theheart.org | Medscape Cardiology has been Dr Jenine John, who is a research fellow at Brigham and Women's Hospital, specifically in the data science and machine learning realm.

Again, thank you for joining.

Robert A. Harrington, MD, is chair of medicine at Stanford University and former president of the American Heart Association. (The opinions expressed here are his and not those of the American Heart Association.) He cares deeply about the generation of evidence to guide clinical practice. He's also an over-the-top Boston Red Sox fan.


View post:
AI and Machine Learning in Healthcare for the Clueless - Medscape


The AI singularity is here – InfoWorld

Mea culpa: I was wrong. The artificial intelligence (AI) singularity is, in fact, here. Whether we like it or not, AI isn't something that will possibly, maybe impact software development in the distant future. It's happening right now. Today. No, not every developer is taking advantage of large language models (LLMs) to build or test code. In fact, most aren't. But for those who are, AI is dramatically changing the way they build software. It's worth tuning in to how they're employing LLMs like ChatGPT to get some sense of how you can use such tools to make yourself or your development teams much more productive.

One of the most outspoken advocates for LLM-enhanced development is Simon Willison, founder of the Datasette open source project. As Willison puts it, "AI allows me to be more ambitious with my projects." How so? "ChatGPT (and GitHub Copilot) save me an enormous amount of 'figuring things out' time. For everything from writing a for loop in Bash to remembering how to make a cross-domain CORS request in JavaScript, I don't need to even look things up anymore, I can just prompt it and get the right answer 80% of the time."

For Willison and other developers, dramatically shortening the figuring out process means they can focus more attention on higher-value development rather than low-grade trial and error.

For those concerned about the imperfect code LLMs can generate (or outright falsehoods), Willison says in a podcast not to worry. At least, not to let that worry overwhelm all the productivity gains developers can achieve, anyway. Despite these non-trivial problems, he says, "You can get enormous leaps ahead in productivity and in the ambition of the kinds of projects that you take on if you can accept both things are true at once: It can be flawed and lying and have all of these problems and it can also be a massive productivity boost."

The trick is to invest time learning how to manipulate LLMs to make them do what you need. Willison stresses, "To get the most value out of them, and to avoid the many traps that they set for the unwary user, you need to spend time with them and work to build an accurate mental model of how they work, what they are capable of, and where they are most likely to go wrong."

For example, LLMs such as ChatGPT can be useful for generating code, but they can perhaps be even more useful for testing code (including code created by LLMs). This is the point that GitHub developer Jaana Dogan has been making. Again, the trick is to put LLMs to use, rather than just asking the AI to do your job for you and waiting on the beach while it completes the task. LLMs can help a developer with her job, not replace the developer in that job.
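As a rough sketch of that test-generation workflow, the snippet below asks an LLM to draft pytest tests for an existing module. It assumes the OpenAI Python SDK (v1-style client) with an API key in the environment; the file name, model choice, and prompt wording are illustrative, not a recommendation from the article.

```python
# Minimal sketch: ask an LLM to draft unit tests, then review them by hand.
from openai import OpenAI

source = open("shopping_cart.py").read()   # hypothetical module under test

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Write pytest unit tests, including edge cases, for this code:\n\n" + source,
    }],
)
print(response.choices[0].message.content)  # a draft to review, not code to trust blindly
```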

Sourcegraph developer Steve Yegge is willing to declare, "LLMs aren't just the biggest change since social, mobile, or cloud; they're the biggest thing since the World Wide Web. And on the coding front, they're the biggest thing since IDEs and Stack Overflow, and may well eclipse them both." Yegge is an exceptional developer, so when he says, "If you're not pants-peeingly excited and worried about this yet, well, you should be," it's time to take LLMs seriously and figure out how to make them useful for ourselves and our companies.

For Yegge, one of the biggest concerns with LLMs and software is also the least persuasive. I, for one, have wrung my hands that developers relying on LLMs still have to take responsibility for the code, which seems problematic given how imperfect the code that emerges from LLMs is.

Except, Yegge says, this is a ridiculous concern, and he's right.

The point, to follow Willison's argument, isn't to create pristine code. It's to save a developer time so that she can spend more time trying to build that pristine code. As Dogan might say, the point is to use LLMs to generate tests and reviews that discover all the flaws in our not-so-pristine code.

Yegge summarizes, "You get the LLM to draft some code for you that's 80% complete/correct [and] you tweak the last 20% by hand." That's a five-times productivity boost. Who doesn't want that?

The race is on for developers to learn how to query LLMs to build and test code but also to learn how to train LLMs with context (like code samples) to get the best possible outputs. When you get it right, you'll sound like Higher Ground's Matt Bateman, gushing, "I feel like I got a small army of competent hackers to both do my bidding and to teach me as I go. It's just pure delight and magic." This is why AWS and other companies are scrambling to devise ways to enable developers to be more productive with their platforms (feeding training material into the LLMs).

Stop imagining a future without LLM-enabled software development and instead get started today.

The rest is here:
The AI singularity is here - InfoWorld


10 TensorFlow Courses to Get Started with AI & Machine Learning – Fordham Ram

Looking for ways to improve your TensorFlow machine learning skills?

As TensorFlow gains popularity, it has become imperative for aspiring data scientists and machine learning engineers to learn this open-source software library for dataflow and differentiable programming. However, finding the right TensorFlow course that suits your needs and budget can be tricky.

In this article, we have rounded up the top 10 online free and paid TensorFlow courses that will help you master this powerful machine learning framework.

Let's dive into TensorFlow and see which of our top 10 picks will help you take your machine-learning skills to the next level.

This course from Udacity is available free of cost. The course has 4 modules, each teaching you how to use models from TF Lite in different applications. This course will teach you everything you need to know to use TF Lite for Internet of Things devices, Raspberry Pi, and more.

The course starts with an overview of TensorFlow Lite, then moves on to:

This course is ideal for people proficient in Python, iOS, Swift, or Linux.

Duration: 2 months

Price: Free

Certificate of Completion: No

With over 91,534 enrolled students and thousands of positive reviews, this Udemy course is one of the best-selling TensorFlow courses. The course was created by José Portilla, who is famous for his record-breaking Udemy course, The Complete Python 3 Bootcamp, with over 1.5 million students enrolled in it.

As you progress through this course, you will learn to use TensorFlow for various tasks, including image classification with Convolutional Neural Networks (CNN). You'll also learn how to design your own neural network from scratch and analyze time series.

Overall, this course is excellent for learning TensorFlow fundamentals using Python. The course covers the basics of TensorFlow and more and does not require any prior knowledge of Machine Learning.

Duration: 14 hrs

Price: Paid

Certificate of Completion: Yes

TensorFlow: Intro to TensorFlow for Deep Learning is third in our list of free TensorFlow courses one should definitely check out. This course includes a total of 10 modules. In the first part of the course, Dr. Sebastian Thrun, co-founder of Udacity, gives an interview about machine learning and Udacity.

Initially, you'll learn about the MNIST fashion dataset. Then, as you progress through the course, you'll learn how to employ a DNN model that categorizes pictures using the MNIST fashion dataset.
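For a sense of what that exercise looks like in practice, here is a minimal Fashion-MNIST classifier in tf.keras; the layer sizes and epoch count are illustrative choices, not the course's exact code.

```python
# Minimal sketch: a small dense network that classifies Fashion-MNIST images.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0      # scale pixel values to 0..1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),    # one unit per clothing class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```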

The course covers other vital subjects, including transfer learning and forecasting time series.

This course is ideal for students who are fluent in Python and have some knowledge of linear algebra.

Duration: 2 months

Price: Free

Certificate of Completion: No

This course from Coursera is an excellent way to learn about the basics of TensorFlow. In this program, you'll learn how to design and train neural networks and explore fascinating new AI and machine learning areas.

As you train a network to recognize real-world images, you'll also learn how convolutions could be used to boost a network's speed. Additionally, you'll train a neural network to recognize human speech with NLP systems.

Even though auditing the courses is free, certification will cost you. However, if you complete the course within 7 days of enrolling, you can claim a full refund and get a certificate.

This course is for those who already have some prior experience.

Duration: 2 months

Price: free

Certificate of Completion: Yes

This is a free Coursera course introducing TensorFlow for AI. To get started, you must first click on "Enroll for Free" and sign up. Then, you'll be prompted to select your preferred subscription period in a new window.

There will be a button that says "Audit the Course." Clicking that button will allow you to access the course for free.

As part of the first week of this course, Andrew Ng, the instructor, will provide a brief overview. Later, there will be a discussion about what the course is all about.

The Fashion MNIST dataset is introduced in the second week as a context for the fundamentals of computer vision. The purpose of this section is for you to put your knowledge into practice by writing your own computer vision neural network (CVNN) code.

Those with some Python experience will benefit the most from this course.

Duration: 4 months

Price: Free

Certificate of Completion: Yes

For those seeking TensorFlow Developer Certification in 2023, TensorFlow Developer Certificate in 2023: Zero to Mastery is an excellent choice since it is comprehensive, in-depth, and top-quality.

In this online course, you'll learn everything you need to know to advance from knowing zero about TensorFlow to being a fully certified member of Google's TensorFlow Certification Network, all under the guidance of Daniel Bourke, a TensorFlow Accredited Professional.

The course will involve completing exercises, carrying out experiments, and designing models for machine learning and applications under the guidance of TensorFlow Certified Expert Daniel Bourke.

By enrolling in this 64-hour course, you will learn everything you need to know about designing cutting-edge deep learning solutions and passing the TensorFlow Developer certification exam.

This course is a right fit for anyone wanting to advance from TensorFlow novice to Google Certified Professional.

Duration: 64 hrs

Price: Paid

Certificate of Completion: Yes

This is yet another high-quality course that is free to audit. This course features a five-week study schedule.

This online course will teach you how to use TensorFlow to create deep learning models from start to finish. You'll learn by engaging in hands-on programming sessions led by an experienced instructor, where you can immediately put what you've learned into practice.

The third and fourth weeks focus on model validation, normalization, TensorFlow Hub modules, etc., and the final week is dedicated to a capstone project. Students in this course will be exposed to a great deal of hands-on learning and work.

This course is ideal for those who are already familiar with Python and understand the Machine learning fundamentals.

Duration: 26 hrs

Price: Free

Certificate of Completion: No

This hands-on course introduces you to Google's cutting-edge deep learning framework, TensorFlow, and shows you how to use it.

This program is geared toward learners who are in a bit of a rush to get to full speed. However, it also provides in-depth segments for those interested in learning more about the theory behind things like loss functions and gradient descent methods, etc.

This course will teach you how to build Python recommendation systems with TensorFlow. As far as the course goes, it was created by Lazy Programmer, one of the best instructors on Udemy for machine learning.

Furthermore, you will create an app that predicts the stock market using Python. If you prefer hands-on learning through projects, this TensorFlow course is ideal for you.

This is a fantastic resource for those new to programming and just getting their feet wet in the fields of Data Science and Machine Learning.

Duration: 23.5 hrs

Price: Paid

Certificate of Completion: Yes

This resource is excellent for learning TensorFlow and machine learning on Google Cloud. The course offers an advanced TensorFlow environment for building robust, complex deep learning models.

People who are just getting started will find this course one of the most promising. It has five modules that will teach you a lot about TensorFlow and machine learning.

A course like this is perfect for those who are just starting.

Duration: 4 months

Price: Free

Certificate of Completion: Paid Certificate

This course, developed by Hadelin de Ponteves, the Ligency I Team, and Luka Anicin, will introduce you to neural networks and TensorFlow in less than 13 hours. The course provides a more basic introduction to TensorFlow and Keras than its counterparts.

In this course, you'll begin with Python syntax fundamentals, then proceed to program neural networks using TensorFlow, Google's machine learning framework.

A major advantage of this course is using Colab for labs and assignments. The advantage of Colab is that students have less chance to make mistakes, plus you get an excellent, shareable online portfolio of your work.

This course is intended for programmers who are already comfortable working with Python.

Duration: 13 hrs

Price: Paid

Certificate of Completion: Yes

In conclusion, we've discussed 10 online free and paid TensorFlow courses that can help you learn and improve your skills in this powerful machine-learning framework. We've seen that there are options available for beginners and more advanced users and that some courses offer hands-on projects and real-world applications.

If you're interested in taking your TensorFlow skills to the next level, we encourage you to explore some of the courses we've covered in this post. Whether you're looking for a free introduction or a more in-depth paid course, there's something for everyone.

So don't wait; enroll in one of these incredibly helpful courses today and start learning TensorFlow!

And as always, we'd love to hear your thoughts and experiences in the comments below. What other TensorFlow courses have you tried? Let us know!

Online TensorFlow courses can be suitable for beginners, but some prior knowledge of machine learning concepts can be helpful. Choosing a course that aligns with your skill level and offers clear explanations of the foundational concepts is important. Some courses may assume prior knowledge of Python programming or linear algebra, so it's important to research the course requirements before enrolling.

The duration of a typical TensorFlow course can vary widely, ranging from a few weeks to several months, depending on the level of depth and complexity. The amount of time you should dedicate to learning each week will depend on the TensorFlow course and your schedule, but most courses recommend several hours of study time per week to make meaningful progress.

Some best practices for learning TensorFlow online include setting clear learning objectives, taking comprehensive notes, practicing coding exercises regularly, seeking help from online forums or community groups, and working on real-world projects to apply your knowledge. To ensure you're progressing and mastering the concepts, track your progress, regularly test your understanding of the material, and seek feedback from peers or instructors.

Prerequisites for online TensorFlow courses may vary, but basic programming skills and familiarity with Python are often required. A solid understanding of linear algebra and calculus can help in understanding the underlying mathematical concepts. Some courses may also require hardware, such as a powerful graphics processing unit (GPU), for training large-scale deep learning models. It's important to carefully review the course requirements before enrolling.

Some online TensorFlow courses offer certifications upon completion, but there are no official degrees in TensorFlow. Earning a certification can demonstrate your knowledge and proficiency in the framework, which can help advance your career in machine learning or data science. However, it's important to supplement your knowledge with real-world projects and practical experience to be successful in the field.

See the original post:
10 TensorFlow Courses to Get Started with AI & Machine Learning - Fordham Ram


Using Machine Learning To Increase Yield And Lower Packaging … – SemiEngineering

Packaging is becoming more and more challenging and costly. Whether the reason is substrate shortages or the increased complexity of packages themselves, outsourced semiconductor assembly and test (OSAT) houses have to spend more money, more time and more resources on assembly and testing. As such, one of the more important challenges facing OSATs today is managing die that pass testing at the fab level but fail during the final package test.

But first, let's take a step back in the process and talk about the front-end. A semiconductor fab will produce hundreds of wafers per week, and these wafers are verified by product testing programs. The ones that pass are sent to an OSAT for packaging and final testing. Any units that fail at the final testing stage are discarded, and the money and time spent at the OSAT dicing, packaging and testing the failed units is wasted (figure 1).

Fig. 1: The process from fab to OSAT.

According to one estimate, based on the price of a 5nm wafer for a high-end smartphone, the cost of package assembly and testing is close to 30% of the total chip cost (Table 1). Given this high percentage (30%), it is considerably more cost-effective for an OSAT to only receive wafers that are predicted to pass the final package test. This ensures fewer rejects during the final package testing step, minimized costs, and more product being shipped out. Machine learning could offer manufacturers a way to accomplish this.

Table 1: Estimated breakdown of the cost of a chip for a high-end smartphone.

Using traditional methods, an engineer obtains inline metrology/wafer electrical test results for known good wafers that pass the final package test. The engineer then conducts a correlation analysis using a yield management software statistics package to determine which parameters and factors have the highest correlation to the final test yield. Using these parameters, the engineer then performs a regression fit, and a linear/non-linear model is generated. In addition, the model set forth by the yield management software is validated with new data. However, this is not a hands-off process. A periodic manual review of the model is needed.

Machine learning takes a different approach. In contrast to the previously mentioned method, which places greater emphasis on finding the model that best explains the final package test data, an approach utilizing machine learning capabilities emphasizes a model's predictive ability. Due to the limited capacity of OSATs, a machine learning model trained with metrology and product testing data at the fab level and final package test data at the OSAT level creates representative results for the final package test.

With the deployment of a machine learning model predicting the final test yield of wafers at the OSAT, bad wafers will be automatically tagged at the fab in a manufacturing execution system and assigned a wafer grade of last-to-ship (LTS). Fab real-time dispatching will move wafers with that grade to an LTS wafer bank, while wafers that meet the passing criteria of the machine learning model will be shipped to the OSAT, thus ensuring only good parts are sent to the packaging house for dicing and packaging. Moreover, additional production data would be used to validate the machine learning model's predictions, with the end result being increased confidence in the model. A blind test can even examine specific critical parts of a wafer.
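
A minimal sketch of what such a deployment could look like is given below: a classifier trained on fab-level features and final package test outcomes, plus a grading rule that ships wafers predicted to pass and tags the rest last-to-ship. The model type, feature layout, and shipping threshold are assumptions for illustration, not details from the article.

```python
# Hedged sketch of the ML deployment described above: train on fab-level
# metrology/test features with OSAT final-package-test outcomes, then grade
# new wafers as SHIP or LTS. Threshold and model choice are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_final_test_predictor(X_fab: np.ndarray, passed_final_test: np.ndarray):
    # X_fab: per-wafer fab metrology / wafer-test features
    # passed_final_test: 1 if the wafer's packaged parts passed final test, else 0
    return GradientBoostingClassifier().fit(X_fab, passed_final_test)

def grade_wafers(model, X_new: np.ndarray, ship_threshold: float = 0.9):
    # Wafers predicted likely to pass are shipped to the OSAT; the rest are
    # tagged last-to-ship (LTS) in the MES and routed to the LTS wafer bank.
    p_pass = model.predict_proba(X_new)[:, 1]
    return np.where(p_pass >= ship_threshold, "SHIP", "LTS")
```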

The machine learning approach also offers several advantages over more traditional approaches. The model is inherently tolerant of out-of-control conditions, trends and patterns are easily identified, results can be improved with more data, and, perhaps most significantly, no human intervention is needed.

Unfortunately, there are downsides. A large volume of data is needed for a machine learning model to make accurate predictions, and while more data is always welcome, this approach is not ideal for new products or R&D scenarios where little data exists yet. In addition, this machine learning approach requires significant allocations of time and resources, meaning more compute power and more time to process complete datasets.

Furthermore, questions will need to be asked about the quality of the algorithm being used. Perhaps it is not the right model and, as a result, will not deliver the correct results. Or perhaps the reasoning behind the algorithm's predictions is difficult to understand. Simply put: how does the algorithm decide which wafers are, in fact, good and which will be marked last-to-ship? And then there is the matter that incorrect or incomplete data will deliver poor results. Or, as the saying goes, garbage in, garbage out.

The early detection and prediction of only good products shipping to OSATs has become increasingly critical, in part because the testing of semiconductor parts is the most expensive part of the manufacturing flow. By only testing good parts through the creation of a highly leveraged yield/operations management platform and machine learning, OSAT houses are able to increase capital utilization and return on investment, thus ensuring cost effectiveness and a continuous supply of finished goods to end customers. While this is one example of the effectiveness of machine learning models, there is so much more to learn about how such approaches can increase yield and lower costs for OSATs.

Follow this link:
Using Machine Learning To Increase Yield And Lower Packaging ... - SemiEngineering

Read More..

Greater Use of Artificial Intelligence and Machine Learning in Finance – Finance Magnates

We have seen a considerable surge in the usage of artificial intelligence (AI) and machine learning in the finance industry in recent years. These technologies are being adopted by financial institutions in order to automate and optimize their processes, reduce risks, and acquire insights into client behavior.

AI and machine learning are transforming the way we do business and proving to be significant tools in the banking industry.

Artificial intelligence (AI) and machine learning (ML) are computer technologies that allow machines to learn from data, discover patterns, and make judgments. AI entails creating algorithms capable of performing tasks that would normally need human intelligence, such as language translation, image recognition, and decision-making.

Machine learning is a branch of artificial intelligence that focuses on developing systems that can learn from data without being explicitly programmed.

AI and machine learning have several financial applications. Here are some examples of how these technologies are being used:

One of the most significant advantages of AI and machine learning is their capacity to detect fraudulent transactions. These technologies are being used by banks and financial institutions to examine vast amounts of data and find patterns that may suggest fraudulent conduct. This enables them to detect and prevent fraud before it causes harm (a minimal sketch follows these examples).

In investment management, AI and machine learning can be used to evaluate market data and identify investment opportunities. They can also be used to automate trading operations, allowing financial organizations to make more accurate and timely trading decisions.
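
The sketch below is a hedged illustration of the fraud-detection example above: it flags transactions whose pattern deviates from the bulk of historical data using an unsupervised isolation forest. The features, synthetic data, and choice of algorithm are assumptions for illustration, not a description of any institution's actual system.

```python
# Illustrative anomaly-based fraud flagging: fit on "normal" transaction
# patterns, then flag transactions that look unlike the historical bulk.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history: (amount in dollars, hour of day) for ordinary transactions.
normal_txns = rng.normal(loc=[50.0, 12.0], scale=[20.0, 4.0], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_txns)

suspicious = np.array([[4_800.0, 3.0]])   # unusually large amount at 3 a.m.
print(detector.predict(suspicious))        # -1 means flagged as anomalous
```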

The application of AI and machine learning in finance has various advantages. Here are a few examples:

While the application of AI and machine learning in finance has significant advantages, it also has some drawbacks. Here are a few examples:

Integration with current systems: Integrating AI and machine learning into existing systems can be difficult and may necessitate considerable infrastructure and training investments.

In finance, machine learning has been used for tasks such as risk assessment, fraud detection, portfolio optimization, and trading strategies. However, like any technology, machine learning in finance comes with its own set of risks that need to be carefully considered and managed.

Machine learning models are only as good as the data they are trained on. In finance, data can come from various sources, such as historical stock prices, economic indicators, and social media sentiment. However, data quality can vary, and inaccurate, incomplete, or biased data can lead to inaccurate predictions or decisions. Bias in data, such as gender or racial bias, can also be inadvertently learned by machine learning algorithms, leading to biased outcomes in finance, such as biased lending decisions or discriminatory pricing. Therefore, it is crucial to carefully curate and preprocess data to minimize these risks and ensure that machine learning models are trained on reliable and representative data.

Machine learning models can sometimes be black boxes, meaning that their decision-making process may not be easily interpretable or explainable. In finance, where regulatory requirements and transparency are critical, a lack of model interpretability and explainability can pose risks. It can be challenging to understand how and why a machine learning model makes a particular prediction or decision, which can raise concerns about accountability, fairness, and compliance.

Financial institutions need to ensure that machine learning models used in finance are transparent, explainable, and compliant with regulatory requirements to mitigate the risks associated with model opaqueness.

Machine learning models are susceptible to overfitting, which occurs when a model performs well on the training data but fails to generalize to new, unseen data. Overfitting can lead to inaccurate predictions or decisions in real-world financial scenarios, resulting in financial losses. It is crucial to use appropriate techniques, such as regularization and cross-validation, to mitigate the risks of overfitting and ensure that machine learning models can generalize well to new data.
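
To make those mitigations concrete, here is a small sketch that pairs L2 regularization (ridge regression) with k-fold cross-validation so that performance is estimated on held-out data rather than on the training set. The synthetic data and parameter values are purely illustrative.

```python
# Sketch: regularization + cross-validation as guards against overfitting.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # e.g. lagged returns, indicators
y = X[:, 0] * 0.5 - X[:, 1] * 0.2 + rng.normal(scale=0.1, size=200)

model = Ridge(alpha=1.0)                        # L2 regularization strength
scores = cross_val_score(model, X, y, cv=5)     # 5-fold cross-validation
print(scores.mean(), scores.std())              # generalization estimate, not train fit
```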

Machine learning models are trained on data and learn from patterns in data, but they do not have human-like judgment, intuition, or common sense. In finance, human oversight is critical to ensure that machine learning models are making sensible decisions aligned with business objectives and ethical principles. Relying solely on machine learning models without human oversight can lead to unintended consequences, such as incorrect investment decisions, failure to detect anomalies or fraud, or unintended biases.

Financial institutions need to strike a balance between automation and human judgment, and carefully monitor and validate the outcomes of machine learning models to reduce risks associated with a lack of human oversight.

The use of machine learning in finance requires the collection, storage, and processing of vast amounts of sensitive financial data. This can make financial institutions vulnerable to cybersecurity threats, such as data breaches, insider attacks, or adversarial attacks on machine learning models. Data privacy is also a critical concern, as machine learning models may inadvertently reveal sensitive information about individuals or businesses.

Financial institutions need to implement robust cybersecurity measures, such as encryption, access controls, and intrusion detection, to protect against cyber threats and ensure compliance with data privacy regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The use of machine learning in finance raises ethical and social implications that need to be carefully considered. For example, the use of machine learning in credit scoring or lending decisions may raise concerns about fairness.

The application of artificial intelligence and machine learning in finance is still in its early phases, but it is fast evolving. We should expect to see more widespread adoption of these technologies in the financial industry as they grow more sophisticated and accessible. Here are some examples of probable future applications:

AI and machine learning can be used to examine market data and discover trends that may affect investing. This could assist financial firms in making more educated investment decisions.

The application of AI and machine learning in finance is changing the way financial organizations operate. These technologies have various advantages, including higher accuracy, efficiency, and risk control. However, there are several issues to consider, such as data quality, transparency, and ethical concerns.

We should anticipate broader adoption of AI and machine learning in the financial industry as these technologies progress, with potential future applications including personalized financial advising, automated underwriting, fraud protection, and predictive analytics.

Read more:
Greater Use of Artificial Intelligence and Machine Learning in Finance - Finance Magnates

Read More..

For chatbots and beyond: Improving lives with data starts with … – Virginia Tech Daily

ChatGPT, an AI chatbot launched this fall, allows users to ask for help with things such as writing essays, drafting business plans, generating code, and even composing music. As of Dec. 4, ChatGPT already had over 1 million users.

OpenAI built its generative system on a model called GPT-3, which was trained on billions of tokens. These tokens, the units used in natural language processing, are similar to words in a paragraph. For comparison's sake, the novel Harry Potter and the Order of the Phoenix has about 250,000 words and 185,000 tokens. Essentially, ChatGPT has been trained on billions of data points, making this kind of intelligent machine possible.
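
To see the word-versus-token distinction in code, the short sketch below counts tokens with OpenAI's open-source tiktoken tokenizer; the choice of encoding and the example sentence are assumptions for illustration, and the Harry Potter figures above are the article's own comparison.

```python
# Rough illustration of words vs. tokens, assuming the `tiktoken` package.
# Exact counts depend on which encoding is used.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Machine learning models are trained on tokens, not words."
tokens = enc.encode(text)
print(len(text.split()), "words ->", len(tokens), "tokens")
```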

Jia noted the importance of data quality and how it can impact machine learning results.

If you have bad data feeding into machine learning, you will get bad results, said Jia. We call that 'garbage in, garbage out.' We want to get an understanding, especially a quantitative understanding, of which data is more valuable and which is less valuable for the purpose of data selection.
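
One generic way to make "which data is more valuable" quantitative is a leave-one-out comparison: retrain the model without a given training point and measure how the validation accuracy changes. The sketch below illustrates only that generic idea; it is not the specific method developed in Jia's research.

```python
# Generic leave-one-out data valuation: how much does removing one training
# point change validation accuracy? Positive values mean the point helped.
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_out_values(X_train, y_train, X_val, y_val):
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_val, y_val)
    values = []
    for i in range(len(X_train)):
        mask = np.arange(len(X_train)) != i
        score = (
            LogisticRegression(max_iter=1000)
            .fit(X_train[mask], y_train[mask])
            .score(X_val, y_val)
        )
        values.append(base - score)   # contribution of point i to validation accuracy
    return np.array(values)
```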

The importance of higher-quality data has been recognized by ChatGPT's developers, who recently announced the release of GPT-4. The latest technology is multimodal, meaning images as well as text prompts can spur it to generate content.

A large amount of data is required to develop this type of machine intelligence, but not all data is open source or public. Some data sets are owned by private entities, and privacy concerns are involved. Jia hopes that in the future, monetary incentives can be introduced to help acquire these types of data sets and improve the machine learning algorithms that are needed in all industries.

The University of California-Berkeley grad has had conversations with Google Research and Sony AI Research, among others, which are interested in the benefits of the research. Jia hopes these companies will adopt the technology developed and serve as advocates for data sharing. Sharing data and adopting improved machine learning algorithms will greatly benefit not only industries but individual consumers as well. For instance, if you've ever had a bad experience with a customer service chatbot, you've experienced low-quality data and poor machine learning algorithm design.

Jia hopes to use her background and area expertise to improve these web-based interactions for all. As a school-aged child, Jia always enjoyed math and science, but her decision to enter the electrical and computer engineering field stemmed from her desire to help people.

Both of my parents are doctors. It was amazing to grow up seeing them help patients with some kind of medical formula, said Jia. That's why I chose to study math and science. You can have a concrete impact. I'm using a different kind of formula to help, but I like that pursuing this career has made me feel like I can make a difference in someone's life.

The CAREER award is the National Science Foundation's most prestigious award for early-career faculty with the potential to serve as academic role models in research and education and to lead advances in their organization's mission. Throughout this project, Jia has demonstrated her desire to serve as an academic role model for graduate, undergraduate, and even K-12 students.

She is a core faculty member in the Sanghani Center for Artificial Intelligence and Data Analytics, formerly known as the Discovery Analytics Center. The center has more than 20 faculty members and 120 graduate students, two of whom are working directly with Jia to conduct the planned research.

Read the original here:
For chatbots and beyond: Improving lives with data starts with ... - Virginia Tech Daily

Read More..

Machine learning based prediction for oncologic outcomes of renal … – Nature.com

Using the original KORCC database9, two recent studies have been reported28,29. First, Byun et al.28 assessed the prognosis of non-metastatic clear cell RCC using a deep learning-based survival prediction model. Harrell's C-indices of DeepSurv for recurrence and cancer-specific survival were 0.802 and 0.834, respectively. More recently, Kim et al.29 developed an ML-based algorithm predicting the probability of recurrence at 5 and 10 years after surgery. The highest area under the receiver operating characteristic curve (AUROC) was obtained from the naïve Bayes (NB) model, with values of 0.836 and 0.784 at 5 and 10 years, respectively.

In the current study, we used the updated KORCC database, which now contains clinical data for more than 10,000 patients. To the best of our knowledge, this is the largest RCC dataset in an Asian population. With this dataset, we could develop much more accurate models, with very high accuracy (range, 0.77–0.94) and F1-scores (range, 0.77–0.97; Table 3). The accuracy values were relatively high compared to those of previous models, including the Kattan nomogram, the Leibovich model, and the GRANT score, which were around 0.75,6,7,8. Among them, the Kattan nomogram was developed using a cohort of 601 patients with clinically localized RCC, and the overall C-index was 74%5. In a subsequent analysis of the same patient group using additional prognostic variables, including tumor necrosis, vascular invasion, and tumor grade, the C-index was as high as 82%30. Still, their prediction accuracies were not as high as ours.

In addition, we could include short-term (3-year) recurrence and survival data, which would be helpful for developing a more sophisticated surveillance strategy. Another strength of the current study is that most of the algorithms introduced so far were applied18,19,20,21,22,23,24,25,26, showing relatively consistent performance with high accuracy. Finally, we also performed an external validation using a separate (SNUBH) cohort and achieved well-maintained high accuracy and F1-scores for both recurrence and survival (Fig. 2). External validation of prediction models is essential, especially when a multi-institutional dataset is used, to account for and correct differences between institutions.

AUROC has mostly been used as the standard for evaluating the performance of prediction models5,6,7,8,29. However, AUROC weighs changes in sensitivity and specificity equally, without considering clinically meaningful information6. In addition, the inability to compare the performance of different ML models is another limitation of the AUROC technique31. Thus, we adopted accuracy and F1-score instead of AUROC as evaluation metrics. F1-score, in addition to SMOTE17, is used as a better accuracy metric to address imbalanced data problems27.
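
For readers unfamiliar with these metrics, the sketch below shows one common way to combine SMOTE oversampling (from the imbalanced-learn package) with the F1-score on an imbalanced synthetic dataset. It illustrates the general technique only and is not the study's actual pipeline.

```python
# Sketch: oversample the minority class with SMOTE on the training split only,
# then report F1-score rather than raw accuracy on the untouched test split.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # balance training data only
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```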

RCC is not a single disease but multiple histologically defined cancers with different genetic characteristics, clinical courses, and therapeutic responses32. With regard to metastatic RCC, the International Metastatic Renal Cell Carcinoma Database Consortium and the Memorial Sloan Kettering Cancer Center risk models have been extensively validated and widely used to predict survival outcomes of patients receiving systemic therapy33,34. However, both risk models were developed without considering histologic subtypes. Thus, their predictive performance was presumed to have been strongly affected by clear cell type RCC, the predominant histologic subtype. Interestingly, in our previous study using the Korean metastatic RCC registry, we found that both risk models reliably predicted progression and survival even in non-clear cell type RCC35. In the current study, after performing subgroup analysis according to histologic type (clear vs. non-clear cell type RCC), we also found very high accuracy and F1-scores in all tested metrics (Supplemental Tables 3 and 4). Taken together, these findings suggest that the prognostic difference between clear and non-clear cell type RCC seems to be offset in both metastatic and non-metastatic RCC. Further effort is needed to develop and validate a sophisticated prediction model for individual subtypes of non-clear cell type RCC.

The current study had several limitations. First, due to the paucity of long-term follow-up cases at 10 years, a data imbalance problem could not be avoided; consequently, the recurrence-free rate at 10 years was reported to be only 45.3%. In the majority of patients, further long-term follow-up was not performed when there was no evidence of disease at five years. However, we adopted both SMOTE and F1-score to address these imbalanced data problems. The retrospective design of this study was also an inherent limitation. Another limitation was that the developed prediction model included only the Korean population; validation of the model using data from other countries and races is also needed. With regard to non-clear cell type RCC, because the study cohort is still relatively small due to the rarity of the disease, we could not avoid integrating the subtypes and analyzing them together. Thus, further studies are still needed to develop and validate a prediction model for each subtype. In addition, the lack of more rigorous validation techniques, such as cross-validation and bootstrapping, is another limitation of the current study. Finally, web-based deployment of the model should follow to improve accessibility and transportability.

Read more:
Machine learning based prediction for oncologic outcomes of renal ... - Nature.com

Read More..

Students Use Machine Learning in Lesson Designed to Reveal … – NC State News

In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.

The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to collaborate in bringing the curriculum into classrooms.

We want students, from a very young age, to open up that black box so they aren't afraid of AI, said the study's lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. We want students to know the potential and challenges of AI, and to think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce.

For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. Then, researchers hosted a teacher workshop about the machine learning curriculum and technology in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, researchers provided another recap of the curriculum and worked out logistics.

We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call text classification models, Jiang said. We wanted to lower the barriers so students can really know what's going on in machine learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances in building machine-learning and text classification models.

A teacher who decided to participate led a journalism class through a 15-day lesson where they used StoryQ to evaluate a series of Yelp reviews about ice cream stores. Students developed models to predict if reviews were positive or negative based on the language.

The teacher saw the relevance of the program to journalism, Jiang said. This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot, and had great discussions about the use and mechanism of machine-learning.

Researchers saw that students made hypotheses about specific words in the Yelp reviews, which they thought would predict whether a review was positive or negative. For example, they expected reviews containing the word "like" to be positive. Then, the teacher guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word "like" to predict reviews found that more than half of the reviews containing the word were actually negative. Researchers said students then used trial and error to try to improve the accuracy of their models.
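
StoryQ itself is a no-code tool, but the kind of text classification model the students built can be approximated with a small bag-of-words sketch like the one below, where the learned weight for a single word such as "like" can be inspected directly. The toy reviews and choice of model are assumptions for illustration.

```python
# Toy bag-of-words classifier for positive vs. negative reviews, analogous in
# spirit to the students' text classification models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["I like this place, great ice cream", "Did not like it at all",
           "Amazing flavors, will return", "Terrible service and melted ice cream"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Inspect how much weight the word "like" actually carries.
vocab = model.named_steps["countvectorizer"].vocabulary_
coef = model.named_steps["logisticregression"].coef_[0]
print("weight for 'like':", coef[vocab["like"]])   # may be near zero or even negative
```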

Students learned how these models make decisions, the role that humans play in creating these technologies, and the kinds of perspectives that can be brought in when they create AI technology, Jiang said.

From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.

For future classes, researchers created a shorter, five-hour program. They've launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.

We want to expand the implementation in North Carolina, Jiang said. If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we're offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can.

The study, High school students' data modeling practices and processes: From modeling unstructured data to evaluating automated decisions, was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

Note to Editors: The study abstract follows.

High school students' data modeling practices and processes: From modeling unstructured data to evaluating automated decisions

Authors: Shiyan Jiang, Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao.

Published: March 13, 2023, Learning, Media and Technology

DOI: 10.1080/17439884.2023.2189735

Abstract: It's critical to foster artificial intelligence (AI) literacy for high school students, the first generation to grow up surrounded by AI, to understand the working mechanisms of data-driven AI technologies and critically evaluate automated decisions from predictive models. While efforts have been made to engage youth in understanding AI through developing machine learning models, few provided in-depth insights into the nuanced learning processes. In this study, we examined high school students' data modeling practices and processes. Twenty-eight students developed machine learning models with text data for classifying negative and positive reviews of ice cream stores. We identified nine data modeling practices that describe students' processes of model exploration, development, and testing and two themes about evaluating automated decisions from data technologies. The results provide implications for designing accessible data modeling experiences for students to understand data justice as well as the role and responsibility of data modelers in creating AI technologies.

Go here to see the original:
Students Use Machine Learning in Lesson Designed to Reveal ... - NC State News

Read More..