
The U.S. is falling behind in artificial intelligence. Here is what one university is doing about it – University of Florida

Welcome to From Florida, a podcast that showcases the student success, teaching excellence and groundbreaking research taking place at the University of Florida.

To thrive economically and be globally competitive, the U.S. needs to add many more workers who understand and have expertise in artificial intelligence. In this episode, David Reed, inaugural director of the Artificial Intelligence Academic Initiative Center, explains how the University of Florida is taking a comprehensive approach to meet that need. Produced by Nicci Brown, Brooke Adams, James Sullivan and Emma Richards. Original music by Daniel Townsend, a doctoral candidate in music composition in the College of the Arts.

For more episodes of From Florida, click here.

Nicci Brown: Artificial intelligence is a part of so much of our day to day lives and it's spurring major societal and economic change. Because of this, the University of Florida is taking a unique approach to this technology. Instead of AI being a focus in only certain colleges or programs, UF is integrating artificial intelligence across the university, from instruction to research to university operations and in disciplines ranging from medicine to the arts.

I'm your host, Nicci Brown, and today on From Florida we are going to talk about the University of Florida's AI initiative and specifically the role of the Artificial Intelligence Academic Initiative Center in carrying this work forward. Our guest today is David Reed, the inaugural director of the center. Welcome, David.

David Reed: Thank you very much. It's great to be with you today.

Nicci Brown: David, as I mentioned in the introduction, you are the Inaugural Director of the Artificial Intelligence Academic Initiative Center, AI Squared, as we call it. First of all, congrats and, second, what is the purpose of the center?

David Reed: Well, thank you. So, the purpose of the center is really to support all things artificial intelligence at the University of Florida and that's everything from marketing about what we do to enhancing the courses that we offer our students, getting faculty up to speed on artificial intelligence, adding it to their research repertoire if they don't use those techniques already and really just everything and anything related to artificial intelligence.

Nicci Brown: Quite a large role.

David Reed: It is.

Nicci Brown: Could you tell us more about the reasons UF made artificial intelligence a focal point for our campus?

David Reed: Absolutely. So, first of all, artificial intelligence is a big catchall term and we use it for all kinds of things. It's a technique to mine large amounts of data. It's a way to help computers make decisions. And so, when we talk about AI, we really are talking about a broad set of different kinds of things. But what we're finding and what industry partners are telling us is that artificial intelligence is now being used in one way or another in disciplines from A to Z. Everything imaginable. Anywhere you can collect large amounts of data, AI has the potential to really help you understand your business or your art or anything that you're doing. And so, because of that, we feel like it's important for all of our students to have the opportunity to learn how AI is already being used in their current discipline.

Nicci Brown: And so what does that look like as far as courses that are available and student enrollment in those courses?

David Reed: Well, we have over 200 courses in AI and data science already on the books here at UF that students can take and at the moment we have over 6,000 students taking those courses. So, we know that our students are engaged. They already understand the importance of artificial intelligence. But we've also erected things like an undergraduate certificate where an undergraduate can take three courses in AI and come away with really good skills about applying artificial intelligence right in their discipline.

Nicci Brown: Also, there are opportunities for staff at the university as well to learn more about AI.

David Reed: Indeed. We have a whole suite of professional development courses. These are meant to upskill workers who are already employed or people who want to become employed with artificial intelligence skills. They can take these courses and little-by-little they learn the ins and outs of artificial intelligence, but, more importantly, and this is true for our students as well, they learn how artificial intelligence is used right in the specific discipline that they're working in.

Nicci Brown: And I'll fess up, I've signed up for the courses. I've yet to get started. But one of the ones that I was really fascinated in learning more about was the ethics course.

David Reed: Indeed. So artificial intelligence done without an ethical framework often goes awry very quickly and so we require an ethics course for the undergraduate certificate. We also require it for the undergraduate major that we have in data science. It's critically important to understand how artificial intelligence can either be misused in malevolent ways or just misunderstood and used poorly. And the ethics course really helps people understand that.

Nicci Brown: So, we're hiring faculty with specialized expertise in AI to achieve this across-the-curriculum activity and they truly do cross all disciplines. We've heard about some of the courses. Can you tell us a little more about the research that's happening at UF?

David Reed: Sure. So, we've hired over 100 new faculty in artificial intelligence and they're spread across all 16 of the colleges that we have here at UF, and so they really are all over campus. So, for instance, we hired David Grant in the Department of Philosophy within the College of Liberal Arts and Sciences and he actually studies the ethics of artificial intelligence. Specifically, he studies how organizations use AI to make really high impact decisions.

But we have people in architecture. For instance, Vernelle Noel uses AI to study incredible designs of costumes at Trinidad's Carnival. So, there's just these wide uses of artificial intelligence. Joel Davis in business studies how executives and consumers incorporate AI advice into their decision-making process about buying or selling products. Nicolas Gauthier, an anthropologist at the Florida Museum, uses AI to study human-caused changes in the environment, whether it's in the past or the present or predicting the future, and that's really where the AI comes in. And then, lastly, Mickey MacKie in geology uses artificial intelligence to study glaciers. I mean, the applicability of artificial intelligence really is so widespread.

Nicci Brown: Yeah, it's incredible when you think about, and we have had Mickey on the program before, this person who is studying at the University of Florida or researching at the University of Florida and also teaching and she's studying glaciers. It really is this broad range, for sure. What are some of the priority initiatives that you've developed for the center because this is an enormous task that you have and in an inaugural role you really have to set the playing field.

David Reed: Indeed. And because it's university-wide, the projects that we have really vary tremendously. We're trying to support faculty, for one, so we are inviting 40 faculty who study artificial intelligence to a communications workshop that lasts all year, with seven day-long sessions, starting this fall, where we can teach faculty how to talk about their research in artificial intelligence in new and basically concrete ways. Artificial intelligence can be hard to understand sometimes so we're helping them in their communications efforts. That's one thing.

We're also working with the Career Connection Center. If you're not familiar with them on campus, they are ranked No. 1 or No. 2 every year in career services helping our students get into meaningful jobs after they graduate. But we're working with them to better describe the skills that our students are learning in their courses so that it translates on their resume to jobs so that employers can really understand what it is that they've learned and how it's applicable in the jobs that they're applying for.

We're also trying to incentivize faculty to build out new artificial intelligence courses, and we're doing that in a number of different ways so that students have more opportunity to take courses in artificial intelligence.

And then, lastly, one of our projects coming up this fall is called AI Days and that's October 27 and 28. We're trying to get the whole campus engaged in artificial intelligence. And, for students, we have a pitch competition where they pitch a business idea. We also have a hackathon. And for those two events for students there's $50,000 in cash prizes for the winners of the pitch competition and the hackathon. So that event will be an opportunity for faculty, staff, and students to learn a whole lot more about artificial intelligence.

Nicci Brown: You mentioned a little bit earlier about industry and what you are hearing from partners and, certainly one of the things, particularly as a public institution, as a flagship for the state, we do talk about our service to the state of Florida and I think more broadly to the nation. How do you see that all intertwining? What are those kind of communications that you're having?

David Reed: Yeah, absolutely. So, the National Security Commission on Artificial Intelligence, a commission from the federal government, produced a final report last year that said that the United States is woefully behind in producing people who understand AI and can use it and that the United States is vulnerable both in terms of economic competitiveness but also in terms of defensive competitiveness. And so, they called for a better and larger AI workforce by 2025. And that's something that we've taken very seriously. That's why we're no longer teaching AI just in the College of Engineering but spreading that education across the full breadth of the university. So, what we're hearing from industry as well as federal partners and others is they need a skilled workforce immediately. And so, we've taken that to heart. We're the only university really doing this. We're really out in front of all of our competitors by trying to create an AI workforce, people who can apply AI specifically in their discipline, and we're going to be doing that within a year or so easily.

Nicci Brown: I've heard as well that some of the things that we're doing, particularly in the College of Ed, but also in the College of Engineering, is looking at K through 12 and how even if we have students who may not feel that university is for them they can become literate in what AI means and that will help them in their future as well.

David Reed: Yeah, absolutely. So those faculty that you've talked about here at UF are working with the Florida Department of Education to create the nation's first artificial intelligence curriculum for public schools. So, typically, in middle schools, but also in high schools, they're starting to teach the concepts of artificial intelligence and data science, and there are two reasons for that. That will prepare some students to come to university and be more prepared for what they experience here. But for those who don't, they're going to be much better citizens in a digital world if they understand the data that's being collected around them and how it's used and so forth. And so it really is important given the digital world that we live in, given how much artificial intelligence is being used around us all the time, the more literate we are about that, the better.

Nicci Brown: And I think there is something to be said in this range just in terms of democratization of information and access to knowledge and getting that available across all groups. What is the university doing as far as that's concerned?

David Reed: Yeah. That's a key component of what we're trying to do. There are many ways in which we're trying to democratize AI. One is we're teaching it across all disciplines here at UF. That's probably the most straightforward. It doesn't matter what your major is, we have courses designed for you to specifically learn artificial intelligence with no computer programming background required before you start or anything like that.

We're also working with public schools as we just talked about. We're also partnering with a number of other colleges and universities around the state to teach their faculty and their students about artificial intelligence. In particular, Miami-Dade College, which is a Hispanic-serving institution in Miami, we're helping their faculty learn about artificial intelligence so they can create new courses in AI. Also, getting their students to come to the University of Florida for graduate degrees.

In addition, we have FAMU in Tallahassee. We have a partnership with them where we're doing the exact same thing. One with Santa Fe College here in Alachua County and with Palm Beach State College in South Florida, where we're partnering with their faculty, learning together about how they can incorporate artificial intelligence into their courses and, by doing that, their students are also gaining this experience as well.

Nicci Brown: You mentioned those other organizations and other educational institutions. It sounds like what we are building here is a model that is transferable.

David Reed: Indeed. There's nothing special that we're doing here that no other college could do. Anyone could do this if they set their mind to it. We're really fortunate here at UF to have been gifted this incredibly large AI supercomputer and we use it in all kinds of incredible ways, but that's not absolutely necessary for teaching AI across-the-curriculum. This is something that any other college or any other university could do and we're trying to find as many partners who want to walk this road with us and do this with us as we can.

Nicci Brown: That sounds like it's intentional on your part.

David Reed: It is, very much so. When we think about it, we're trying to think of all of the potential ways that a learner might get on the path to learning AI. That includes K-12. It includes tech and vocational schools. It includes community colleges, universities even beyond the University of Florida, and the employees who are already working and need some professional development courses to learn how to use AI. And so, we really want to make this something that everybody can participate in.

Nicci Brown: When we think about AI, quite often the first thing that comes to mind for many people is this cold, dark, futuristic, very non-human approach to things. What would you say to people who have that in their mind?

David Reed: Yeah, I think it's a lot of fun reading science fiction and I like to, too, but the reality of artificial intelligence is it is around us all the time. It's there when you use facial recognition to turn on your iPhone, it's there when Amazon is recommending a product to you, and it isn't going to go away this time.

What we are doing with artificial intelligence, for example, it's not going to replace physicians, but what it can do is allow physicians as a tool to be able to find patients for clinical trials much faster than they would otherwise. It's not going to replace lawyers, for instance, but what it might do is help lawyers understand a wider array of potential case studies or precedents coming before that they can base approaches on in a legal system.

And so it really is the combination of experts in their field utilizing the tools of AI to try and do their work better or in some cases do their work faster. I don't think it's going to create autonomous robots that take over the world, but it is going to help you drive your car more safely and lots of other things, and that kind of work is happening right now. And so that's what's exciting about artificial intelligence.

Nicci Brown: And for people who may fear that this is going to take their job, what would you say to them?

David Reed: Yeah, I think the prognosticators who love to talk about this and who probably know vastly more than I do, they do say that there will be some jobs that are lost as a result of automation. And that's been true for a very long time, all the way back to the first industrial revolution. But it's also creating jobs at the same time where the skills and the decision making that the human possesses, think of creativity, for one, that's really required for a particular process, is always going to be necessary. So, if you're doing something that can be fully automated, then that may take those jobs. But I think for the vast majority of people who learn this technique or these skills, they're going to have opportunities to expand their employment opportunities quite greatly.

Nicci Brown: One of the areas that I've been particularly interested in learning more about is in the applications when it comes to agriculture. And, of course, with IFAS, we are so strong here at the University of Florida and it's such a large part of what we do. Could you share a little bit more about some of the ways it's being applied there?

David Reed: Oh, absolutely. Yeah, so precision agriculture is a way to use decision making as well as lots of data to try and be smarter about the ways in which you're trying to, say, grow plants. And so, for instance, you can send drones over agricultural fields and the drones can capture so much data, visual data, as they pass over, but it takes an enormous amount of human effort and human time to then download and look at those videos. And there's only so much information that a human could get from those images, but if you use artificial intelligence, it can mine through that data much faster and do things like find areas that are overwatered or underwatered. It can also find areas where there's crop damage due to pests.

And so, in thinking about precision agriculture, just the fact that you can fly drones over an agricultural field and pull from that massive amounts of data that can then be analyzed pretty quickly to make very specific changes to the agricultural process, those kinds of things are now getting to be widespread in their use in agriculture. And there are many more examples of how artificial intelligence is being used in agriculture alone.

Nicci Brown: And connected to that, of course, we're very mindful of our environment and preserving our environment and protecting our environment. I would imagine that AI also has some applications in that realm as well.

David Reed: Absolutely. Here at UF, we have the Center for Coastal Solutions where they monitor water quality and air quality. They have a monitoring station, for instance, in Charlotte Harbor in Southwest Florida and they collect massive amounts of data very, very quickly from these monitoring stations and from satellites and other things. And so with that, the company, SAS, it's a statistical analysis software company, they've partnered with the Center for Coastal Solutions to create a data model that we can then apply artificial intelligence to. Just how you store the data is critically important to the process of artificial intelligence. But what they'll be able to do is use that to monitor real time events like predicting red tides, for instance, and then also, in partnering with UF Health, be able to warn people who might be at risk of the effects of red tide, respiratory illness, for instance, in elderly populations before the red tide actually occurs. And so, whether it's environmental or health or agriculture, AI is really being applied in so many different domains.

Nicci Brown: You mentioned earlier about the courses that our students are signing up for. Could you give us a sampling of some of the names of these courses or what they're focused on?

David Reed: Yeah, absolutely. So, one of the things I've said a couple of times is you get to learn about artificial intelligence right in your discipline. So, for the undergraduate certificate, the students would start out with two required courses, one's called Fundamentals of AI, and it's the one that really allows you to wade into the AI pool from the shallow end.

You don't have to have any prior experience to take this course. And then there's the required ethics course, which is fantastic. But once you take those two, the third course in that series is something that's within your major. So, for instance, there's AI in Media and Society. If you care about how artificial intelligence is used in marketing and communications and media and so forth.

There's one for students who are interested in design and construction. It's called AI in the Built Environment. There's one for agriculture and life sciences called AI and Agriculture and Life Sciences. And there are many of these spread across the full breadth of the university, AI and the social sciences and on and on. So, there are lots of these different courses that are diving in and learning how artificial intelligence is applied right in your major.

Nicci Brown: And for those of us who are in the workforce and want to learn more, what are the options there?

David Reed: We have a series of seven different courses that you can take. There's a one-hour teaser, if you will, that you can listen to. It's free to go to that and you can find these on ai.ufl.edu. But these one-hour courses just give you a flavor of what you would learn. For a small amount of money, there's also a four-hour bite size chunk that you can take. Or you can actually sign up for a faculty led course that's a total of 15 contact hours where you do a much deeper dive. And you can learn about the fundamentals of AI, you can learn about AI ethics, but then you can also learn about AI in these different applications. Agriculture is one of them. Health and medicine is coming online soon. Business is already developed and a couple of others. And so it gives you the opportunity to really learn about AI, both the fundamentals, the ethics and how it applies in your area.

Nicci Brown: I can only imagine how busy you are and some of the things that you come in contact with. Is there anything about your work recently that has surprised you, where even you were like, "Wow, this is just beyond anything I imagined"?

David Reed: Well, yeah. The first thing that really surprised me was, we did a tally to see how many students were engaged in artificial intelligence courses, and I was really hoping it would be 1,000 or maybe 2,000 at the most. But to see that we had 6,000 students already taking AI and data science courses when we had really not started any direct marketing to students to tell them about what we were doing, I was very relieved. That was a wonderful sight and it just tells you the students here at UF are obviously in touch with what they're going to need in their professional lives and so they were already seeking out these courses. And that was just great to see.

Nicci Brown: Are there any other partners that you'd like to mention that you're working with right now that people might be interested in knowing about?

David Reed: Absolutely. We've talked about some of the other colleges that we're working with. We've talked about the fact that we're working with the Florida Department of Education on K-12. Those are really important partnerships.

But we also have partnerships with industry too. Our partnership with NVIDIA is one that has even predated our artificial intelligence initiative. They gifted us this incredible AI supercomputer, but they also put on campus an AI Technology Center where two of their engineers are embedded on our campus with our faculty to help them do their research better on HiPerGator AI, the AI supercomputer. We also have a great partnership with IBM where they made their full suite of artificial intelligence software, including Watson, available to our faculty and staff for free.

We also have partnerships with companies like L3Harris. We did professional development for them. Our faculty at the College of Engineering trained some of their trainers on how to train employees about artificial intelligence and data science, and then turned all of that material over to them. And so we've had a wonderful partnership with them. And we're looking for many other industry partners who might want to partner with us in terms of capstone courses for our seniors who have taken a deep dive into artificial intelligence already. That could give those students the ability to solve some real world problems with real world data and really prepare them for the workforce in a deep and meaningful way.

Nicci Brown: It sounds like this approach is inclusive in every sense of the word.

David Reed: It is in that it covers all of our students. It's graduate and undergraduate and professional and we really are trying to make sure that anybody who wants to be included in this can be.

Nicci Brown: David, could you tell us more about the partnership with the SEC?

David Reed: Oh, absolutely. So, in the work that we're doing trying to teach AI across the curriculum, we're trying to find as many partners who will do that alongside us as we possibly can. The Southeastern Conference, what we typically think of as an athletic conference, also partners on academic missions, too, and the latest one is artificial intelligence. And so we've had a working group that have met, all of the schools of the SEC have had a representative at this meeting over the last year where we've talked about what we're doing in the AI and data science space.

For instance, we've heard from faculty at other institutions about AI centers that they have. We've talked about our ability to teach AI across-the-curriculum here at UF. And at this point we're exchanging ideas and discussing best practices for how we can educate our students in artificial intelligence and create a regional center of excellence in the southeastern United States.

Nicci Brown: David, thank you so much for joining us today. It's been a real pleasure speaking with you.

David Reed: Oh, the pleasure was mine. Thank you very much.

Nicci Brown: Listeners, thank you for joining us. Our executive producer is Brooke Adams, our technical producer is James Sullivan and our editorial assistant is Emma Richards. I hope you'll tune in next week.


LifeMine Therapeutics Expands Management Team with Appointments of Martin Stahl, Ph.D., as Chief Scientific Officer and Louis Plamondon, Ph.D., as…

CAMBRIDGE, Mass.--(BUSINESS WIRE)--LifeMine Therapeutics Inc., a biopharmaceutical company reinventing drug discovery by mining genetically-encoded small molecules (GEMs) from the biosphere, today announced the appointments of Martin Stahl, Ph.D., as chief scientific officer and Louis Plamondon, Ph.D., as executive vice president and head of CMC. Dr. Stahl, former global head of lead discovery at Roche, will also lead LifeMine's operations at its European offices in Basel, Switzerland.

"Martin and Louis are stellar additions to the LifeMine executive team and share in our vision to reinvent small molecule drug discovery through genomic search and retrieval from the biosphere," said Gregory Verdine, Ph.D., co-founder, president and chief executive officer of LifeMine. "We continue to make substantial progress at LifeMine scaling our Avatar-Rx platform and advancing our multiple drug discovery programs towards the clinic with unparalleled speed, predictability, and scalability. Martin and Louis' experience and expertise will only help further accelerate our efforts."

Martin Stahl, Ph.D., appointed to chief scientific officer

Dr. Stahl joins LifeMine from Roche, where he held a variety of scientific leadership roles over 25 years, including positions in medicinal chemistry, immunology, portfolio management and research technologies. Most recently, he served as global head of lead discovery, an organization comprising biophysics, biostructure, biochemistry, cell engineering, assay development and screening capabilities. During his tenure at Roche, Dr. Stahl built program management for the company's small molecule research portfolio, and led a diverse array of global initiatives, shaping culture, infrastructure and data science.

A chemist by training, Dr. Stahl has published widely on molecular design and was the recipient of an ACS National Award for Computers in Chemistry and Pharmaceutical Sciences. He is an advisory board member of the Journal of Medicinal Chemistry, an editorial advisory board member of ChemMedChem, and he has been a trustee of the Cambridge Crystallographic Data Centre.

Dr. Stahl studied chemistry at The University of Freiburg and at The Julius Maximilians University in Würzburg, Germany, and obtained a Ph.D. in computational chemistry from The Philipp University of Marburg.

Louis Plamondon, Ph.D., appointed to executive vice president and head of CMC

Prior to joining LifeMine, Dr. Plamondon was senior vice president and head of CMC at Constellation Pharmaceuticals (a MorphoSys company), where he led all pre-development/development activities including production of drug substance and drug product for pre-clinical studies, toxicology studies, clinical studies and preparations for commercialization.

Dr. Plamondon's expertise ranges from enabling Phase 1 studies through global registrational application and commercial manufacturing, including establishing clinical and commercial supply chain strategies, vendor and alliance management, portfolio strategy implementation, developing regulatory strategies, negotiating with global health authorities, developing life-cycle management strategies and generating secondary patents to extend product life cycles.

Dr. Plamondon is a co-inventor and leader for Velcade (bortezomib), the first proteasome inhibitor approved for the treatment of patients with multiple myeloma, and co-inventor for Xerava (eravacycline), the first fully synthetic fluorocycline used for the treatment of complicated intra-abdominal infections (cIAI).

Dr. Plamondon holds a Ph.D. in organic chemistry from the Université de Montréal and completed his postdoctoral fellowship at Harvard University.

About LifeMine Therapeutics

LifeMine Therapeutics is reinventing drug discovery by mining genetically-encoded small molecules (GEMs) from the biosphere. Through its proprietary, evolutionarily-derived genomic drug discovery platform, LifeMine aims to bring unparalleled speed, predictability and scalability to small molecule drug discovery. LifeMine has discovered, in genomic space, hundreds of potentially high-impact drug candidates relevant to targets across all major disease areas, and has an initial focus on advancing highly impactful precision medicines in oncology and immune modulation. The company was founded in 2017 by renowned entrepreneur/scientists Gregory Verdine, Ph.D., and Richard Klausner, M.D., and entrepreneur/company-builder WeiQing Zhou. Headquartered in Cambridge, Mass., with offices in Gloucester Harbor, Mass. and Basel, Switzerland, LifeMine has raised more than $295 million from leading life science investors. For additional information, please visit http://www.lifeminetx.com.


Data Scientist I, School of Computer Science CeADAR job with UNIVERSITY COLLEGE DUBLIN (UCD) | 311221 – Times Higher Education

Applications are invited for the position of Data Scientist I in CeADAR - Ireland's Centre for Applied Data Analytics & Artificial Intelligence.

CeADAR is seeking an experienced individual who has a demonstrated successful track record in data science in research and industry settings (>2 years). Individuals in this role are expected to have proven experience applying artificial intelligence, machine learning, computational statistics, and statistics to real-world problems.

The ideal candidate will have a keen interest in contributing to the development of proof of concepts to allow companies to leverage the benefits of state-of-the-art AI algorithms. Relevant areas of interest include: deep learning, explainable AI, computer vision, privacy preserving machine learning, reinforcement learning, natural language processing, self and semi-supervised learning, active learning and the application of ML/AI approaches to earth observation and remote sensing data.

This is an opportunity to work on a number of diverse and exciting projects in the area of data science with real application to a variety of verticals in the industry sector, both at national and EU level, including the start-up ecosystem. The individual will be part of CeADAR's Applied Research Group, participating in projects demanding skills in applied data science for the development of machine learning solutions for industry. CeADAR is based in University College Dublin and is funded by the government to help companies embrace AI. CeADAR is also the designated European Digital Innovation Hub for AI in Ireland, and thus works extensively in Europe.

The applied research at CeADAR covers broad aspects of AI and data analytics using advanced machine learning to deal with structured and unstructured data coming from many different sectors. This is an exciting opportunity to be involved in this strategic and nationally important area of AI and Analytics where CeADAR is shaping the strategy at national and EU level.

Salary range: €43,000 - €53,000 per annum.

Appointment on the above range will be dependent upon qualifications and experience.

Closing date: 17:00hrs (local Irish time) on 17th October 2022.

Applications must be submitted by the closing date and time specified. Any applications which are still in progress at the closing time of 17:00hrs (Local Irish Time) on the specified closing date will be cancelled automatically by the system. UCD are unable to accept late applications. UCD do not require assistance from Recruitment Agencies. Any CVs submitted by Recruitment Agencies will be returned.

Prior to application, further information (including application procedure) should be obtained from the Work at UCD website: https://www.ucd.ie/workatucd/jobs/.


Feasibility and ethics of using data from the Scottish newborn blood spot archive for research | Communications Medicine – Nature.com

Citizens' Jury

In June 2017, we brought together a small, diverse group of citizens to address the question: Would research access to the Guthrie Card heel prick blood tests be in the public interest, and, if so, under what conditions? A Citizens' Jury is a well-established method to enable public participation in policy making, allowing informed deliberation on an issue and the provision of recommendations. Our Citizens' Jury followed best practice for such deliberative public engagement3. First, we convened a steering group to provide oversight of the materials prepared for the Jurors and to identify a range of expert witnesses to give evidence. Next, we consulted a Patient Participatory Involvement and Engagement (PPIE) panel to review these materials. The academic researchers did not involve themselves directly in the Jury process, other than to provide evidence or observe. The recruitment, facilitation and analysis were conducted by Ipsos MORI, Scotland, to preserve neutrality. Jurors met together for two day-long sittings to hear evidence (neutral, for and against), deliberate and reach conclusions.

Using a quota sample, a representative pool of the adult public was recruited in terms of sex, age, working status and social grade. Additional quotas were set to ensure sufficient representation of people with children under the age of five, and people with a family history of a medical condition. An attitudinal quota was set to ensure inclusion of people with varied levels of trust in public, private and third sector organisations, as previous research has found this to be a significant factor underpinning views of data sharing and use4. A total of 20 were recruited of whom 19 participated on day one, and 18 returned for the second day a week later. Jurors were given monetary recompense for taking part in each sitting.

Day One started with a warm-up session sharing their views on health research and health-relevant information, followed by evidence from various experts to stimulate discussion of issues around research use of the newborn blood spots. Day Two included further expert witnesses but more time for Jurors' own deliberations and to arrive at a conclusion on the key question. Facilitation tools, such as "speed dating" techniques, meant that all Jurors could express and reflect on their own views as well as those of others, building up to group and plenary deliberations. The first day focused on more general discussion and information sharing; the second day involved detailed deliberation of the key question and delivering of the verdict. Both days were audio recorded and transcribed for subsequent analysis. A short questionnaire was administered at the end of each day to gauge individual level views and thinking.

Background information was provided by authors SCB and DJP during the morning of Day One. Over the 2 days, six witnesses were called to give evidence and answer Jurors' questions. These comprised health care professionals and scientists, a Caldicott Guardian (a person responsible for protecting the confidentiality of people's health and care information and making sure it is used properly) and a Genewatch spokesperson (representing not-for-profit groups that monitor developments in DNA technologies). Jurors also had access to a short video and other written information about comparative deliberations and policy in California and Denmark. Having heard on Day One some of the health research opportunities uniquely possible if access were granted, an introduction to some of the social, ethical and legal issues, and how the NHS protects privacy, Jurors heard more opposing and critical views on Day Two, with one witness cautioning against allowing research access and another stressing the importance of not compromising the newborn screening programme itself.

The Scottish newborn blood spots archive is stored in around 900 boxes each containing circa 3000 cards in a single secure location under the authority of the Director of the Scottish Newborn Screening laboratory and custodianship of the NHS Research Scotland Greater Glasgow and Clyde Biorepository. A unified digital record is in place from 2000 onwards, but the content and consistency of information available from older cards was not known at the outset. The design and information content of the newborn blood spots was known to have varied over time, but not documented. We were aware that some cards had suffered water damage prior to assembling the nation-wide archive. We also know that for a period of time some cards had been autoclaved before storage. For the vast majority, the newborn blood spots have simply been stored at room temperature. The unified newborn blood spot archive management database has limited, high-level summary information on the contents of each box.

We obtained permission from the Caldicott Guardians of NHS Research Scotland Greater Glasgow and Clyde and of Tayside to retrieve representative boxes from the NHS Research Scotland secure archive. Permission was given for (a) examination and documentation of a sub-sample from each box to provide a snap-shot of the information attached that might be required from the purposes of linkage to other routine health records and (b) sampling of cards corresponding to consented members of Generation Scotland.

Thirty boxes representing each decade from 1965 to 1999 were retrieved and examined by NHS Scotland staff at the NHS Research Scotland Greater Glasgow and Clyde Biorepository. Only summary information was shared with the rest of the study team.

The organisation and information content by box varied over time (Supplementary Fig.1). In some boxes there were unlabelled bundles of cards (Supplementary Fig.1D), but most had 4 labelled foolscap sub-boxes (Supplementary Fig.1B) with multiple, date-labelled bundles of cards (Supplementary Fig.1C).

We drew up a list of potential information that the newborn blood spots might carry, from which to judge the feasibility of conducting epidemiological studies by linkage to NHS Scotland routine medical records and potentially additional consented data from research subjects.

For each box, we collected the following information: Box ID; Area Health board; Hospital; Type of card; Date of test; Child forename; Child surname; presence or absence of Community Health Index identifier. No personal information was recorded. Each box took 2 members of staff working in tandem ~2h to document.

Next, circa 1 in 100 cards from each box were examined in detail and the presence or absence of the following features documented: Child DOB; Additional comments on card; Number of blood spots; Size of spots; Mother's CHI; Mother's forename, surname and birth name; Mother's date of birth; Address; Postcode; Whether the cards had been autoclaved prior to archiving; Any other comments.

Cards from 1965 had very little information on them and in many cases did not even record the sex of the baby. Information content increased progressively over time. By the 1990s, the sex of the child and home address were generally recorded.

Over 24,000 Generation Scotland (GS) volunteers were recruited as adults between 2006 and 2011⁵. All were born before 1993, before the digital recording of newborn blood spots began. Consent for linkage to medical records was optional but was given by 98% of volunteers. They were asked to give information about their place of birth (country and council area). A total of 8703 volunteers with linkage consent were born in Glasgow or Tayside area health boards between 1965 and 1992. The set of 30 boxes retrieved and documented for epidemiological purposes were selected on the basis that (a) Greater Glasgow and Tayside were the regions for which we had Caldicott Guardian approval and (b) they were expected to include bundles from Generation Scotland participants, as the majority came from these regions. A list of names, birthplace and date of birth was extracted from the GS database and sent to the NHS Greater Glasgow and Clyde Biorepository to look for matching cards. Pseudonymous IDs were added to the list so that any samples from matching cards could be labelled and linked back to the GS database after genotyping. Ninety-two matching cards were identified amongst newborn blood spots from Tayside. Of these, 58 were usable for punching, having fulfilled the prerequisite of leaving one spot intact (Supplementary Fig. 2). Six to ten punches were taken from each card, placed in vials labelled with a pseudonymous ID for matching to the samples donated at baseline by each Generation Scotland volunteer, and couriered to the Edinburgh Clinical Research Laboratory Genetics Laboratory for DNA analysis.

Unlike the Danish Newborn Screening Biobank (DNSB), the Scottish newborn blood spot archive comprises a variety of paper types and storage conditions, particularly for older cards. To establish the effect this might have on the recovery of analysable DNA, a pilot study in 2012–2014 was undertaken on a de-identified set of 136 newborn blood spots dated from 1965 to 2012 (Table 1). The study was mandated by the Scottish Chief Scientist Office, following a favourable opinion from the Scottish Legal Office and North of Scotland Research Ethics Committee. De-identified cards were provided by the Scottish National Dried blood spot collection, Biochemical Genetics Laboratory, Duncan Guthrie Institute, Greater Glasgow Health Authority, Yorkhill, Glasgow. DNA was extracted from 3 mm punches using the Sigma ENA kit. Yields varied from sample to sample, but there was no significant effect of date of birth, and sufficient material was obtained for Sanger DNA or exome sequencing in 94% of samples (Table 1). Exome sequencing used the Ion AmpliSeq exome kit run on an Ion Torrent Proton sequencer. Data analysis and variant calling used the IonReporter IonExpress variant caller, with 42–45 million mapped reads, 94.2–94.7% on target, and a mean depth of 122–130 reads per sample. Thirty-one of 32 runs met standard QC criteria for variant analysis.

Of the 92 newborn blood spots matched to GS volunteers, 58 (63%) had sufficient dried blood spot material remaining to take 3 mm diameter punch samples while leaving at least one spot intact. These 58 were from Generation Scotland research volunteers born between 1983 and 1989 (i.e. 32–38 years between collection and profiling). DNA was extracted from between 6 and 8 blood spot punches using the QIAamp DNA Investigator Kit (Qiagen; cat. no. 56504), following the manufacturer's instructions. The concentration of the DNA samples was measured using a Qubit 2.0 fluorometer and the Qubit dsDNA HS assay (Thermo Fisher; cat. no. Q32854). Total yield isolated was between 196 and 1177 ng of DNA. Up to 500 ng DNA (range 160–500 ng) underwent bisulfite conversion (Zymo EZ-96). DNA methylation was profiled using the Infinium HumanMethylationEPIC v1.0 BeadChip (Illumina Inc.; cat. no. WG-317-1001), according to the manufacturer's protocol (in batches of 8 samples; 56 of the 58 samples processed were assayed). Arrays were scanned on an iScan and analysed using GenomeStudio v2011.1.

DNAm profiles were obtained from the 56 individuals using the Illumina MethylationEPIC beadchip, measuring ~850,000 CpGs across the genome. Quality control measures were performed, removing probes with high detection p-values (>0.05) in >5% of samples (N=52,375), or a beadcount <3 in more than 5% of samples (N=5038). Three samples were removed for having >5% of sites with a detection p-value >0.05. In addition to these standard quality control measures, additional checks were performed to ensure newborn blood spots and baseline samples matched with regard to predicted sex and genotype (Supplementary Information Methods, Supplementary Figs.3 and 4 and Supplementary Table1). Quality control and analysis code have been deposited in a public repository6. To access Generation Scotland data, including the data derived in the feasibility study described here, please go to http://www.ed.ac.uk/generation-scotland/for-researchers/access.
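The probe- and sample-level thresholds described above can be sketched as follows. This is a toy illustration on simulated arrays, not the deposited Generation Scotland QC code; all variable names (betas, detection_p, beadcount) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_probes, n_samples = 1000, 56

betas = rng.uniform(0.0, 1.0, size=(n_probes, n_samples))        # methylation beta values
detection_p = rng.exponential(0.01, size=(n_probes, n_samples))  # detection p-values
beadcount = rng.integers(2, 40, size=(n_probes, n_samples))      # beads per probe

# Drop probes with detection p > 0.05 in more than 5% of samples,
# or a beadcount < 3 in more than 5% of samples.
bad_p = (detection_p > 0.05).mean(axis=1) > 0.05
bad_bead = (beadcount < 3).mean(axis=1) > 0.05
probe_keep = ~(bad_p | bad_bead)

# Drop samples in which > 5% of retained probes fail the detection threshold.
sample_keep = (detection_p[probe_keep] > 0.05).mean(axis=0) <= 0.05
filtered = betas[np.ix_(probe_keep, sample_keep)]
```

In practice these filters are applied to the raw intensity data exported from the array platform; the thresholds here simply mirror the ones stated in the text.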

Confirmatory analyses were performed using newborn blood spot DNA methylation data to ensure predicted sex (using X-chromosome data) and genotype (using rs control probes on the EPIC array) were consistent with peripheral blood-based genotyping and DNA methylation data on samples collected at baseline recruitment (2006–2011) (Supplementary Table 1). Detailed information on sample checks is presented in Supplementary Information Methods and Supplementary Figs. 3 and 4.

An individual's smoking status can be reliably predicted using composite DNA methylation-derived smoking scores, and effects have also been observed in the offspring of mothers who smoked during pregnancy7. Moreover, information from a single probe in the aryl-hydrocarbon receptor repressor gene (AHRR; cg05575921) can serve as a robust marker of smoking, with lower DNA methylation levels associating with current smoker status. Maternal smoking status at the time of sample collection was derived from smoking status at GS baseline, and the "years stopped" variable for former smokers, where both mother and baby are in GS. DNA methylation-based estimates of smoking status were obtained using previously validated methods8. A composite score for smoking status (EpiSmokEr) was obtained using Guthrie sample DNAm data and, along with cg05575921 DNA methylation levels, was plotted against maternal smoking status at the time of sampling (Fig. 1). Consistent with previous literature, a higher overall value was observed for the EpiSmokEr score in the offspring of current smokers whereas a lower overall value was observed for the offspring of never smokers, supporting an association at the population level (Fig. 1a; ever smoker β = 0.78; sex-adjusted linear regression P = 0.026). DNA methylation levels at cg05575921 were also consistent with the literature, with lower overall levels in the offspring of current smokers relative to never smokers (Fig. 1b; ever smoker β = -0.72; sex-adjusted linear regression P = 0.05).

Methylation-derived smoking scores from newborn blood spot DNA (y-axis) plotted against maternal smoking status (current, former, never) at time of birth (N current smokers = 10; N former smokers = 5; N never smokers = 26). Results are shown for the EpiSmokEr score, a composite measure comprised of multiple CpG sites (a), and DNA methylation levels at a single CpG (cg05575921) in the AHRR gene (b). Upper and lower hinges correspond to the upper and lower quartiles, respectively. Whiskers extend to data points as far as 1.5 times the interquartile range. Outlying data points are defined as those beyond the whiskers. Thick horizontal lines represent the median.
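As a rough numeric illustration of this kind of sex-adjusted association test: regressing a methylation-derived smoking score on an ever-smoker indicator with sex as a covariate. The data and coefficients below are simulated for illustration only and do not reproduce the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 41  # offspring with known maternal smoking status, as in the figure

ever_smoker = (rng.random(n) < 0.37).astype(float)  # current + former mothers
sex = rng.integers(0, 2, size=n).astype(float)      # offspring sex covariate
# Simulated methylation-derived smoking score with a positive smoking effect.
score = 0.78 * ever_smoker + 0.1 * sex + rng.normal(scale=0.5, size=n)

# Ordinary least squares: intercept, ever-smoker indicator, sex.
X = np.column_stack([np.ones(n), ever_smoker, sex])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"ever-smoker coefficient: {beta[1]:.2f}")
```

The coefficient on the ever-smoker column plays the role of the reported β; a real analysis would also compute its standard error and P-value.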

The original Sanger DNA and exome sequencing study was mandated by the Scottish Chief Scientist Office, following a favourable opinion from the Scottish Legal Office and the North of Scotland Research Ethics Committee, REC ref. 11/ns.0014. A letter approving the inspection and documentation of newborn blood spots and selective sampling of GS cards for methylation analysis was provided by the Chief Medical Officer for Scotland on 4 September, 2019. The Caldicott Guardians of NHS Greater Glasgow and Clyde and NHS Tayside granted approval on 30 January 2020 and 3 March 2020, respectively. Volunteers for Generation Scotland gave informed consent at the time of recruitment for biological studies, including genetic studies, on their biological samples and for linkage to medical records. A substantial amendment to the Research Tissue Bank approval for Generation Scotland to cover the feasibility study was submitted to the East of Scotland Research Ethics Committee and approved on 13 March 2020.

The Citizens' Jury followed INVOLVE guidelines and was conducted by the polling organisation, Ipsos MORI, on behalf of the University of Edinburgh research team. This work was carried out in accordance with the requirements of the international quality standard for Market Research, ISO 20252:2012, and with the Ipsos MORI Terms and Conditions which can be found at http://www.ipsos-mori.com/terms. Ipsos MORI conducted their own internal ethical review through their ethical review team. The Ipsos MORI Project Director (CM) was then responsible for ensuring that the research materials (recruitment screener, participant information sheets, discussion guides) were clear and met the ethics principles on informed consent, right to refuse, principles of anonymity and confidentiality. No sensitive information was collected. Materials were saved in a secure folder with access restricted to the Ipsos MORI team (CM, SD). After completion, all personal information (participant names, contact details, recordings and transcripts) was securely destroyed using Ipsos MORI digital shredding software.

Further information on research design is available in theNature Research Reporting Summary linked to this article.


The Worldwide Industry for Machine Learning in the Life Sciences is Expected to Reach $20.7 Billion by 2027 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Global Markets for Machine Learning in the Life Sciences" report has been added to ResearchAndMarkets.com's offering.

This report highlights the current and future market potential for machine learning in life sciences and provides a detailed analysis of the competitive environment, regulatory scenario, drivers, restraints, opportunities and trends in the market. The report also covers market projections from 2022 through 2027 and profiles key market players.

Companies Mentioned

The publisher analyzes each technology in detail, determines major players and current market status, and presents forecasts of growth over the next five years. Scientific challenges and advances, including the latest trends, are highlighted. Government regulations, major collaborations, recent patents and factors affecting the industry from a global perspective are examined.

Key machine learning in life sciences technologies and products are analyzed to determine present and future market status, and growth is forecast from 2022 to 2027. An in-depth discussion of strategic alliances, industry structures, competitive dynamics, patents and market driving forces is also provided.

Artificial intelligence (AI) is a term used to identify a scientific field that covers the creation of machines (e.g., robots) as well as computer hardware and software aimed at reproducing wholly or in part the intelligent behavior of human beings. AI is considered a branch of cognitive computing, a term that refers to systems able to learn, reason and interact with humans. Cognitive computing is a combination of computer science and cognitive science.

ML algorithms are designed to perform tasks such as data browsing, extracting information that is relevant to the scope of the task, discovering rules that govern the data, making decisions and predictions, and accomplishing specific instructions. As an example, ML is used in image recognition to identify the content of an image after the machine has been instructed to learn the differences among many different categories of images.

There are several types of ML algorithms, the most common of which are nearest neighbor, naive Bayes, decision trees, Apriori, linear regression, case-based reasoning, hidden Markov models, support vector machines (SVMs), clustering, and artificial neural networks. Artificial neural networks (ANNs) have achieved great popularity in recent years for high-level computing.

They are modeled to act similarly to the human brain. The most basic type of ANN is the feedforward network, which is formed by an input layer, a hidden layer and an output layer, with data moving in one direction from the input layer to the output layer, while being transformed in the hidden layer.
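The feedforward structure described here, an input layer feeding a hidden layer feeding an output layer, with data moving in one direction, can be sketched in a few lines. This is an illustrative toy with random, untrained weights, not any vendor's implementation.

```python
import numpy as np

def relu(x):
    """A common hidden-layer nonlinearity."""
    return np.maximum(0.0, x)

def feedforward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input -> hidden transformation -> output."""
    h = relu(x @ w_hidden + b_hidden)  # data is transformed in the hidden layer
    return h @ w_out + b_out           # output layer

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 3))                    # 4 samples, 3 input features
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)  # input -> hidden (5 units)
w2, b2 = rng.normal(size=(5, 1)), np.zeros(1)  # hidden -> output (1 unit)

y = feedforward(x, w1, b1, w2, b2)  # shape (4, 1): one output per sample
```

Training (fitting w1, b1, w2, b2 to data) is what turns this skeleton into a useful model; the forward pass itself is just the layered matrix arithmetic shown.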

Report Includes

Key Topics Covered:

Chapter 1 Introduction

Chapter 2 Summary and Highlights

Chapter 3 Market Overview

3.1 Introduction

3.1.1 Understanding Artificial Intelligence in Healthcare

3.1.2 Artificial Intelligence in Healthcare Evolution and Transition

Chapter 4 Impact of the Covid-19 Pandemic

4.1 Introduction

4.1.1 Impact of Covid-19 on the Market

Chapter 5 Market Dynamics

5.1 Market Drivers

5.1.1 Investment in AI Health Sector

5.1.2 Rising Chronic Diseases

5.1.3 Advanced, Precise Results

5.1.4 Increasing Research and Development Budget

5.2 Market Restraints and Challenges

5.2.1 Reluctance Among Medical Practitioners to Adopt AI-Based Technologies

5.2.2 Privacy and Security of User Data

5.2.3 Hackers and Machine Learning

5.2.4 Ambiguous Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Untapped Potential in Emerging Markets

5.4 Value Chain Analysis

Chapter 6 Market Breakdown by Offering

Chapter 7 Market Breakdown by Deployment Mode

Chapter 8 Market Breakdown by Application

Chapter 9 Market Breakdown by Region

Chapter 10 Regulations and Finance

Chapter 11 Competitive Landscape

Chapter 12 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/oqwcnh


Think Fast! Using Machine Learning Approaches to Identify Predictors of Adverse Pregnancy Outcomes in SLE – Rheumatology Network

Unfortunately, a substantial portion of pregnancies in lupus patients are complicated by an adverse pregnancy outcome (APO). This can include preterm delivery, intrauterine growth restriction and foetal mortality. Given the high prevalence of APOs in this group, there has been considerable interest in predicting those at the greatest risk of negative outcomes, to permit enhanced observation and intervention in these patients. The EUREKA algorithm was developed to predict obstetric risk in patients with different subsets of antiphospholipid antibodies and generated significant discourse.1 More recently, machine learning (ML) methodology has been applied by Fazzari et al to a large observational cohort (PROMISSE) to identify additional predictors of APO.2

The PROMISSE cohort enrolled 385 pregnant women with mild to moderate SLE, both with and without antiphospholipid antibody positivity. They collected data on pregnancy outcomes between 2003 and 2013 from 9 North American sites. Exclusion criteria included a daily prednisone dose >20 mg, a urinary protein:creatinine ratio >1000, serum creatinine >1.2 mg/dL, type 1 or 2 diabetes mellitus, or systemic hypertension.

Previous work in this cohort has linked increased levels of the complement activation products Bb and sC5b-9 to higher rates of APOs.3 More recently, Fazzari et al have applied several ML approaches to the PROMISSE cohort and compared these to logistic regression modelling to identify predictors of APO in SLE patients.2

Approaches trialled included least absolute shrinkage and selection operator (LASSO), random forest, neural network, support vector machines with a radial basis function kernel (SVM-RBF), gradient boosting, and SuperLearner. These were compared via area under the receiver operating characteristic curve (AUROC). Forty-one predictors assessed during routine care of patients with SLE were used to build these models.

Fazzari et al identified several risk factors for APO including high disease activity, lupus anticoagulant positivity, thrombocytopenia, and antihypertensive use. When comparing AUROC, the SuperLearner package had the numerically superior area under the curve (AUC) (0.78). However, this was not significantly different to LASSO, SVM-RBF, or random forest (AUC 0.77 in all cases).
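The comparison metric used here, AUROC, is the probability that a randomly chosen patient with an APO is ranked above a randomly chosen patient without one. A minimal rank-based sketch on simulated data follows; this is not the PROMISSE code or its models, and the helper auroc() is a plain pairwise implementation.

```python
import numpy as np

def auroc(y_true, scores):
    """Probability a random positive case outranks a random negative case."""
    y_true = np.asarray(y_true, dtype=bool)
    pos, neg = scores[y_true], scores[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive ranked higher
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count as half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
y = rng.random(200) < 0.184                         # ~18.4% adverse outcomes
informative = y + rng.normal(scale=0.8, size=200)   # score correlated with outcome
random_score = rng.normal(size=200)                 # uninformative score

print(round(auroc(y, informative), 2), round(auroc(y, random_score), 2))
```

An uninformative score sits near 0.5 by this measure, a perfect one at 1.0, which is why the reported AUCs of 0.77-0.78 indicate moderate but genuine discrimination.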

Weaknesses of the PROMISSE cohort are its exclusion of high-disease-activity SLE patients and those with a systemic blood pressure of >140/90 mmHg. Additionally, the proportion of patients with APOs within the cohort was low (18.4%), likely in part related to the stringent exclusion criteria. A recent retrospective Portuguese study, which did not exclude high disease activity or lupus nephritis patients, identified a far higher rate of APO (41.4%) in its SLE cohort.4 Indeed, Ntali et al recently demonstrated reduced APOs in SLE patients with low disease activity in a prospective observational study.5 Application of these models in a higher disease burden cohort would therefore be desirable.

This work demonstrates the utility of ML in aiding clinical risk stratification within a complex patient cohort. The utilization of standard clinical variables and comparison of several ML techniques are substantial strengths of this work. However, further validation in external cohorts is desirable. The application of ML methodology in risk stratification within SLE may provide better clarity in a heterogeneous patient cohort. Additionally, in the future, similar methodological approaches could be trialled across the autoimmune connective tissue disease spectrum to provide better prognostic information to patients at diagnosis, irrespective of their diagnostic label.

References:


Machine learning tool could help people in rough situations make sure their water is good to drink – ZME Science

Imagine for a moment that you don't know if your water is safe to drink. It may be, it may not be; just trying to visualize that situation brings a great deal of discomfort, doesn't it? That's the situation 2.2 billion people find themselves in on a regular basis.

Chlorine can help with that. Chlorine kills pathogens in drinking water and, at an optimum level, can make water safe to drink. But it's not always easy to estimate the optimum amount of chlorine. For instance, if you put chlorine into a piped water distribution system, that's one thing. But if you chlorinate water in a tank, and then people come and take that water home in containers, it's a different thing, because this water is more prone to recontamination, so you need more chlorine in this type of water. But how much? The problem gets even more complicated because if water stays in place too long, chlorine can also decay.

This is particularly a problem in refugee camps, many of which suffer from a severe water crisis.

"Ensuring sufficient free residual chlorine (FRC) up to the time and place water is consumed in refugee settlements is essential for preventing the spread of waterborne illnesses," write the authors of the new study. "Water system operators need accurate forecasts of FRC during the household storage period. However, factors that drive FRC decay after the water leaves the piped distribution system vary substantially, introducing significant uncertainty when modeling point-of-consumption FRC."

To get the right amount of FRC, a team of researchers from York University's Lassonde School of Engineering used a machine learning algorithm to estimate chlorine decay.

They focused on refugee camps, which often face problems regarding drinking water, and collected 2,130 water samples from Bangladesh from June to December 2019, noting the level of chlorine and how it decayed. Then, the algorithm was used to develop probabilistic forecasting of how safe the water is to drink.

AI is particularly good at this type of problem: when it has to derive statistical likelihoods of events from a known data set. In fact, the team combined AI with methods routinely used for weather forecasting. So, you input parameters such as the local temperature, water quality, and the condition of the pipes, and then it can make a forecast of how safe the water is to drink at a certain moment. The model estimates how likely it is for the chlorine to be at a certain level and outputs a range of probabilities, which researchers say is better because it allows water operators to plan better.
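A toy version of this kind of probabilistic forecast can be built from simple first-order chlorine decay with an uncertain decay constant, sampled to produce a range of outcomes rather than a point estimate. All parameter values below are invented for illustration; the study's actual model is richer and fitted to field data.

```python
import numpy as np

rng = np.random.default_rng(7)

c0 = 1.0              # FRC leaving the tap, mg/L (assumed)
storage_hours = 15.0  # household storage time before consumption (assumed)

# Uncertain first-order decay constant (per hour), sampled from a
# lognormal to reflect variable temperature, water quality, containers.
k_samples = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=10_000)

# First-order decay: C(t) = C0 * exp(-k * t), one trajectory per sample.
frc = c0 * np.exp(-k_samples * storage_hours)

lo, med, hi = np.percentile(frc, [5, 50, 95])     # forecast range
p_safe = (frc >= 0.2).mean()  # probability of meeting a 0.2 mg/L target (assumed)
print(f"FRC 5-95% range: {lo:.2f}-{hi:.2f} mg/L, P(FRC >= 0.2) = {p_safe:.2f}")
```

Reporting the range and the probability of meeting the target, rather than a single number, is what lets water operators plan dosing with the uncertainty in view.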

"These techniques can enable humanitarian responders to ensure sufficient FRC more reliably at the point-of-consumption, thereby preventing the spread of waterborne illnesses," the authors write.

It's not the first time AI has been used to try and help the world's less fortunate. In fact, many in the field believe that's where AI can make the most difference. Raj Reddy, one of the pioneers of AI, recently spoke at the Heidelberg Laureate Forum, explaining that he's most interested in AI being used for the world's least fortunate people, noting that this type of technology can "move the plateau" and improve the lives of the people that need it most.

According to a World Bank analysis, machine learning can be useful in helping developing countries rebuild after the pandemic, noting that software solutions such as AI can help countries overcome more quickly and efficiently existing infrastructure gaps. However, other studies suggest that without policy intervention, AI risks exacerbating economic inequality instead of bridging it.

No doubt, the technology has the ability to solve real problems where it's needed most. But more research like this is needed to establish how AI can address specific challenges.

The study has been published in PLoS Water.

More here:
Machine learning tool could help people in rough situations make sure their water is good to drink - ZME Science


RBI plans to extensively use artificial intelligence, machine learning to improve regulatory supervision – ETCIO

The Reserve Bank is planning to make extensive use of advanced analytics, artificial intelligence and machine learning to analyse its huge database and improve regulatory supervision of banks and non-banking financial companies (NBFCs).

For this purpose, the central bank is also looking to hire external experts.

While the RBI is already using AI and ML in supervisory processes, it now intends to scale them up to ensure that the benefits of advanced analytics accrue to the central bank's Department of Supervision.

The supervisory jurisdiction of the RBI extends over banks, urban cooperative banks (UCB), NBFCs, payment banks, small finance banks, local area banks, credit information companies and select all India financial institutions.

It undertakes continuous supervision of such entities with the help of on-site inspections and off-site monitoring.

The central bank has floated an expression of interest (EoI) for engaging consultants in the use of Advanced Analytics, Artificial Intelligence and Machine Learning for generating supervisory inputs.

"Taking note of the global supervisory applications of AI & ML applications, this Project has been conceived for use of Advance Analytics and AI/ML to expand analysis of huge data repository with RBI and externally, through the engagement of external experts, which is expected to greatly enhance the effectiveness and sharpness of supervision," it said.

Among other things, the selected consultant will be required to explore and profile data with a supervisory focus.

The objective is to enhance the data-driven surveillance capabilities of the Reserve Bank, the EoI said.

Most of these techniques are still exploratory; however, they are rapidly gaining popularity and scale.

On the data collection side, AI and ML technologies are used for real-time data reporting, effective data management and dissemination.

For data analytics, these are being used for monitoring supervised firm-specific risks, including liquidity risks, market risks, credit exposures and concentration risks; misconduct analysis; and mis-selling of products.


Artificial intelligence may improve suicide prevention in the future – EurekAlert

The loss of any life can be devastating, but the loss of a life from suicide is especially tragic.

Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15 to 44. Suicide attempts are more common, with some estimates suggesting they occur up to 30 times as often as deaths.

"Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends and communities," says Karen Kusuma, a UNSW Sydney PhD candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.

Ms Kusuma and a team of researchers from the Black Dog Institute and the Centre for Big Data Research in Health recently investigated the evidence base of machine learning models and their ability to predict future suicidal behaviours and thoughts. They evaluated the performance of 54 machine learning algorithms previously developed by researchers to predict suicide-related outcomes of ideation, attempt and death.

The meta-analysis, published in the Journal of Psychiatric Research, found that machine learning models outperformed traditional risk prediction models, which have historically performed poorly, in predicting suicide-related outcomes.

"Overall, the findings show there is a preliminary but compelling evidence base that machine learning can be used to predict future suicide-related outcomes with very good performance," Ms Kusuma says.

Identifying individuals at risk of suicide is essential for preventing and managing suicidal behaviours. However, risk prediction is difficult.

In emergency departments (EDs), risk assessment tools such as questionnaires and rating scales are commonly used by clinicians to identify patients at elevated risk of suicide. However, evidence suggests they are ineffective in accurately predicting suicide risk in practice.

"While there are some common factors shown to be associated with suicide attempts, what the risks look like for one person may look very different in another," Ms Kusuma says. "But suicide is complex, with many dynamic factors that make it difficult to assess a risk profile using this assessment process."

A post-mortem analysis of people who died by suicide in Queensland found that, of those who received a formal suicide risk assessment, 75 per cent were classified as low risk, and none was classified as high risk. Previous research examining the past 50 years of quantitative suicide risk prediction models also found they were only slightly better than chance in predicting future suicide risk.

"Suicide is a leading cause of years of life lost in many parts of the world, including Australia. But the way suicide risk assessment is done hasn't developed recently, and we haven't seen substantial decreases in suicide deaths. In some years, we've seen increases," Ms Kusuma says.

Despite the shortage of evidence in favour of traditional suicide risk assessments, their administration remains a standard practice in healthcare settings to determine a patients level of care and support. Those identified as having a high risk typically receive the highest level of care, while those identified as low risk are discharged.

"Using this approach, unfortunately, the high-level interventions aren't being given to the people who really need help. So we must look to reform the process and explore ways we can improve suicide prevention," Ms Kusuma says.

Ms Kusuma says there is a need for more innovation in suicidology and a re-evaluation of standard suicide risk prediction models. Efforts to improve risk prediction have led to her research using artificial intelligence (AI) to develop suicide risk algorithms.

"Having AI that could take in a lot more data than a clinician would be able to better recognise which patterns are associated with suicide risk," Ms Kusuma says.

In the meta-analysis study, machine learning models outperformed the benchmarks set previously by traditional clinical, theoretical and statistical suicide risk prediction models. They correctly predicted 66 per cent of people who would experience a suicide outcome and correctly predicted 87 per cent of people who would not experience a suicide outcome.
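Those two figures are, in effect, the pooled sensitivity and specificity of the models. A minimal illustration of how they are computed from a confusion matrix (the counts below are invented to reproduce the reported percentages, not taken from the study):

```python
def confusion_metrics(tp, fn, tn, fp):
    """Sensitivity: share of people who went on to experience a
    suicide-related outcome that the model correctly flagged.
    Specificity: share of people with no outcome that the model
    correctly cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical cohort of 200 people: 100 with an outcome, 100 without.
sens, spec = confusion_metrics(tp=66, fn=34, tn=87, fp=13)
print(sens, spec)  # 0.66 and 0.87, matching the reported 66% and 87%
```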

"Machine learning models can predict suicide deaths well relative to traditional prediction models and could become an efficient and effective alternative to conventional risk assessments," Ms Kusuma says.

The strict assumptions of traditional statistical models do not bind machine learning models. Instead, they can be flexibly applied to large datasets to model complex relationships between many risk factors and suicidal outcomes. They can also incorporate responsive data sources, including social media, to identify peaks of suicide risk and flag times where interventions are most needed.
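As an illustrative sketch of that flexibility (not one of the models from the meta-analysis), even a small logistic regression trained by gradient descent maps an arbitrary number of risk factors to an outcome probability rather than applying fixed cut-off rules; the data here are synthetic and purely illustrative.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit w, b for p(y=1 | x) = sigmoid(w.x + b) by stochastic gradient
    descent: a minimal stand-in for models that learn relationships
    between many risk factors and an outcome directly from data."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log-loss with respect to z
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Probability of the outcome for one feature vector."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic one-factor data: higher values co-occur with the outcome.
rows = [[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]]
labels = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(rows, labels)
```

Real models of this kind would take many features at once and be validated on held-out data; the point here is only that the mapping is learned, not hand-specified.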

"Over time, machine learning models could be configured to take in more complex and larger data to better identify patterns associated with suicide risk," Ms Kusuma says.

The use of machine learning algorithms to predict suicide-related outcomes is still an emerging research area, with 80 per cent of the identified studies published in the past five years. Ms Kusuma says future research will also help address the risk of aggregation bias found in algorithmic models to date.

"More research is necessary to improve and validate these algorithms, which will then help progress the application of machine learning in suicidology," Ms Kusuma says. "While we're still a way off implementation in a clinical setting, research suggests this is a promising avenue for improving suicide risk screening accuracy in the future."

Journal: Journal of Psychiatric Research

Method of Research: Meta-analysis

Subject of Research: People

Article Title: The performance of machine learning models in predicting suicidal ideation, attempts, and deaths: A meta-analysis and systematic review

Article Publication Date: 29-Sep-2022

COI Statement: The authors declare no conflict of interest.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.


Machine vision breakthrough: This device can see ‘millions of colors’ – Northeastern University

An interdisciplinary team of researchers at Northeastern has built a device that can recognize millions of colors using new artificial intelligence techniques. They describe it as a massive step in the field of machine vision, a highly specialized space with broad applications across a range of technologies.

The machine, which the researchers call A-Eye, is capable of analyzing and processing color far more accurately than existing machines, according to a paper detailing the research published in Materials Today. The ability of machines to detect, or "see," color is an increasingly important feature as industry and society more broadly become more automated, says Swastik Kar, associate professor of physics at Northeastern and co-author of the research.

"In the world of automation, shapes and colors are the most commonly used items by which a machine can recognize objects," Kar says.

The breakthrough is twofold. Researchers were able to engineer a two-dimensional material whose special quantum properties, when the material is built into an optical window used to let light into the machine, allow it to process a rich diversity of color with very high accuracy, something practitioners in the field haven't been able to achieve before.

Additionally, A-Eye is able to accurately recognize and reproduce "seen" colors with zero deviation from their original spectra thanks, also, to machine-learning algorithms developed by a team of AI researchers helmed by Sarah Ostadabbas, an assistant professor of electrical and computer engineering at Northeastern. The project is the result of a unique collaboration between Northeastern's quantum materials and Augmented Cognition labs.

The essence of the technological discovery centers on the quantum and optical properties of the class of material, called transition metal dichalcogenides. Researchers have long hailed the unique materials as having virtually unlimited potential, with many electronic, optoelectronic, sensing and energy storage applications.

"This is about what happens to light when it passes through quantum matter," Kar says. "When we grow these materials on a certain surface, and then allow light to pass through that, what comes out of this other end, when it falls on a sensor, is an electrical signal, which then [Ostadabbas's] group can treat as data."

As it relates to machine vision, there are numerous industrial applications for this research tied to, among other things, autonomous vehicles, agricultural sorting and remote satellite imaging, Kar says.

"Color is used as one of the principal components in recognizing good from bad, go from no-go, so there's a huge implication here for a variety of industrial uses," Kar says.

Machines typically recognize color by breaking it down, using conventional RGB (red, green, blue) filters, into its constituent components, then use that information to essentially guess at, and reproduce, the original color. When you point a digital camera at a colored object and take a photo, the light from that object flows through a set of detectors with filters in front of them that differentiate the light into those primary RGB colors.

"You can think about these color filters as funnels that channel the visual information or data into separate boxes, which then assign artificial numbers to natural colors," Kar says.

"So if you're just breaking it down into three components [red, green, blue], there are some limitations," Kar says.

Instead of using filters, Kar and his team used transmissive windows made of the unique two-dimensional material.

"We are making a machine recognize color in a very different way," Kar says. "Instead of breaking it down into its principal red, green and blue components, when a colored light appears, say, on a detector, instead of just seeking those components, we are using the entire spectral information. And on top of that, we are using some techniques to modify and encode them, and store them in different ways. So it provides us with a set of numbers that help us recognize the original color much more uniquely than the conventional way."
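The limitation Kar describes can be shown with a toy example: two different spectra ("metamers") can collapse to identical readings once funneled into three broad bands, while the full spectral signal still separates them. The six-sample spectra and band boundaries below are invented purely for illustration.

```python
def to_rgb(spectrum):
    """Collapse a 6-sample spectrum into three band sums -- a crude
    stand-in for the RGB filter 'funnels' Kar describes."""
    return (sum(spectrum[0:2]), sum(spectrum[2:4]), sum(spectrum[4:6]))

def spectral_distance(a, b):
    """Euclidean distance between two full spectra."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Two made-up spectra (relative intensities per wavelength sample).
spectrum_a = [2, 4, 3, 3, 1, 5]
spectrum_b = [4, 2, 1, 5, 3, 3]

print(to_rgb(spectrum_a) == to_rgb(spectrum_b))        # True: identical after RGB filtering
print(spectral_distance(spectrum_a, spectrum_b) > 0)   # True: the full spectra differ
```

Keeping the whole spectrum, as A-Eye does, preserves exactly the information those three band sums throw away.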

As the light passes through these windows, the machine processes the color as data; built into it are machine learning models that look for patterns in order to better identify the corresponding colors the device analyzes, Ostadabbas says.

A-Eye can continuously improve color estimation by adding any corrected guesses to its training database, the researchers wrote.
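That feedback loop can be sketched as a nearest-neighbour estimator whose training database grows with every corrected guess; this is a hypothetical toy, not the team's actual algorithm.

```python
class ColorEstimator:
    """Toy sketch of a growing training database: each corrected guess
    is stored, and later estimates return the label of the closest
    stored feature vector."""

    def __init__(self):
        self.db = []  # list of (feature_vector, color_label) pairs

    def add_example(self, features, label):
        """Record a corrected guess so future estimates can use it."""
        self.db.append((list(features), label))

    def estimate(self, features):
        """Return the label of the nearest stored example, or None
        when nothing has been learned yet."""
        if not self.db:
            return None
        def dist(example):
            vec, _ = example
            return sum((f - g) ** 2 for f, g in zip(vec, features))
        return min(self.db, key=dist)[1]
```

Each correction added via `add_example` immediately sharpens subsequent estimates, which is the sense in which the system "continuously improves."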

Davoud Hejazi, a Northeastern physics Ph.D. student, contributed to the research.

For media inquiries, please contact media@northeastern.edu.
