
AIMe – A standard for artificial intelligence in biomedicine – Innovation Origins

An international research team from several universities, including Maastricht University (UM), has proposed a standardized registry for artificial intelligence (AI) work in biomedicine. The aim is to improve the reproducibility of results and create trust in the use of AI algorithms in biomedical research and, in the future, in everyday clinical practice. The scientists presented their proposal in the scientific journal Nature Methods.

In recent decades, new technologies have made it possible to develop a wide variety of systems that can generate huge amounts of biomedical data, for example in cancer research. At the same time, completely new possibilities have emerged for examining and evaluating these data using artificial intelligence methods. AI algorithms in intensive care units, for example, can predict circulatory failure at an early stage, based on large amounts of data from several monitoring systems and the simultaneous processing of complex information from different sources.

Read the complete press release here.


This great potential of AI systems leads to an unmanageable number of biomedical AI applications. Unfortunately, the corresponding reports and publications do not always adhere to best practices, or provide only incomplete information about the algorithms used or the origin of the data. This makes assessment and comprehensive comparison of AI models difficult. The decisions of AIs are not always comprehensible to humans, and results are seldom fully reproducible. This situation is untenable, especially in clinical research, where trust in AI models and transparent research reports are crucial to increase the acceptance of AI algorithms and to develop improved AI methods for basic biomedical research.

To address this problem, an international research team including the UM has proposed the AIMe registry for artificial intelligence in biomedical research, a community-driven registry that enables users of new biomedical AI to create easily accessible, searchable and citable reports that can be studied and reviewed by the scientific community.

The freely accessible registry is available at https://aime-registry.org and consists of a user-friendly web service that guides users through the AIMe standard and enables them to generate complete and standardised reports on the AI models used. A unique AIMe identifier is automatically created, which ensures that the report remains persistent and can be specified in publications. Hence, authors do not have to cope with the time-consuming description of all facets of the AI used in articles for scientific journals and can simply refer to the report in the AIMe registry.

Read next: More focus on the social impact of AI

Original post:
AIMe A standard for artificial intelligence in biomedicine - Innovation Origins

Read More..

New Artificial Intelligence Technology Poised to Transform Heart Imaging – University of Virginia

A new artificial-intelligence technology for heart imaging can potentially improve care for patients, allowing doctors to examine their hearts for scar tissue while eliminating the need for contrast injections required for traditional cardiovascular magnetic resonance imaging.

A team of researchers who developed the technology, including doctors at UVA Health, reports the success of the approach in a new article in the scientific journal Circulation. The team compared its AI approach, known as virtual native enhancement, with contrast-enhanced cardiovascular magnetic resonance scans now used to monitor hypertrophic cardiomyopathy, the most common genetic heart condition. The researchers found that virtual native enhancement produced higher-quality images and better captured evidence of scar in the heart, all without the need for injecting the standard contrast agent required for cardiovascular magnetic resonance scans.

"This is a potentially important advance, especially if it can be expanded to other patient groups," said researcher Dr. Christopher Kramer, the chief of the Division of Cardiovascular Medicine at UVA Health, Virginia's only Center of Excellence designated by the Hypertrophic Cardiomyopathy Association. "Being able to identify scar in the heart, an important contributor to progression to heart failure and sudden cardiac death, without contrast, would be highly significant. Cardiovascular magnetic resonance scans would be done without contrast, saving cost and any risk, albeit low, from the contrast agent."

Hypertrophic cardiomyopathy is the most common inheritable heart disease, and the most common cause of sudden cardiac death in young athletes. It causes the heart muscle to thicken and stiffen, reducing its ability to pump blood and requiring close monitoring by doctors.

The new virtual native enhancement technology will allow doctors to image the heart more often and more quickly, the researchers say. It also may help doctors detect subtle changes in the heart earlier, though more testing is needed to confirm that.

The technology also would benefit patients who are allergic to the contrast agent injected for cardiovascular magnetic resonance scans, as well as patients with severely failing kidneys, a group that avoids the use of the agent.

The new approach works by using artificial intelligence to enhance T1-maps of the heart tissue created by magnetic resonance imaging. These maps are combined with enhanced MRI cines, which are like movies of moving tissue; in this case, the beating heart. Overlaying the two types of images creates the artificial virtual native enhancement image.

Based on these inputs, the technology can produce something virtually identical to the traditional contrast-enhanced cardiovascular magnetic resonance heart scans doctors are accustomed to reading, only better, the researchers conclude. "Avoiding the use of contrast and improving image quality in [cardiovascular magnetic resonance] would only help both patients and physicians down the line," Kramer said.

While the new research examined virtual native enhancement's potential in patients with hypertrophic cardiomyopathy, the technology's creators envision it being used for many other heart conditions as well.

"While currently validated in the [hypertrophic cardiomyopathy] population, there is a clear pathway to extend the technology to a wider range of myocardial pathologies," they write. "[Virtual native enhancement] has enormous potential to significantly improve clinical practice, reduce scan time and costs, and expand the reach of [cardiovascular magnetic resonance] in the near future."

The research team consisted of Qiang Zhang, Matthew K. Burrage, Elena Lukaschuk, Mayooran Shanmuganathan, Iulia A. Popescu, Chrysovalantou Nikolaidou, Rebecca Mills, Konrad Werys, Evan Hann, Ahmet Barutcu, Suleyman D. Polat, HCMR investigators, Michael Salerno, Michael Jerosch-Herold, Raymond Y. Kwong, Hugh C. Watkins, Christopher M. Kramer, Stefan Neubauer, Vanessa M. Ferreira and Stefan K. Piechnik.

Kramer has no financial interests in the research, but some of his collaborators are seeking a patent related to the imaging approach. A full list of disclosures is included in the paper.

The research was made possible by work funded by the British Heart Foundation, grant PG/15/71/31731; the National Institutes of Health's National Heart, Lung and Blood Institute, grant U01HL117006-01A1; the John Fell Oxford University Press Research Fund; and the Oxford BHF Centre of Research Excellence, grant RE/18/3/34214. The research was also supported by British Heart Foundation Clinical Research Training Fellowship FS/19/65/34692, the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre at The Oxford University Hospitals NHS Foundation Trust, and the National Institutes of Health.

To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.

Follow this link:
New Artificial Intelligence Technology Poised to Transform Heart Imaging - University of Virginia

Read More..

Here’s how AI will accelerate the energy transition – World Economic Forum

The new IPCC report is unequivocal: more action is urgently needed to avert catastrophic long-term climate impacts. With fossil fuels still supplying more than 80% of global energy, the energy sector needs to be at the heart of this action.

Fortunately, the energy system is already in transition: renewable energy generation is growing rapidly, driven by falling costs and growing investor interest. But the scale and cost of decarbonizing the global energy system remain gigantic, and time is running out.

To date, most of the energy sector's transition efforts have focused on hardware: new low-carbon infrastructure to replace legacy carbon-intensive systems. Relatively little effort and investment has focused on another critical tool for the transition: next-generation digital technologies, in particular artificial intelligence (AI). These powerful technologies can be adopted more quickly and at larger scales than new hardware solutions, and can become an essential enabler of the energy transition.

Three key trends are driving AI's potential to accelerate the energy transition:

1. Energy-intensive sectors, including power, transport, heavy industry and buildings, are at the beginning of historic decarbonization processes, driven by growing government and consumer demand for rapid reductions in CO2 emissions. The scale of these transitions is huge: BloombergNEF estimates that in the energy sector alone, achieving net-zero emissions will require between $92 trillion and $173 trillion of infrastructure investment by 2050. Even small gains in flexibility, efficiency or capacity in clean energy and low-carbon industry can therefore lead to trillions in value and savings.

2. As electricity supplies more sectors and applications, the power sector is becoming the core pillar of the global energy supply. Ramping up renewable energy deployment to decarbonize the globally expanding power sector will mean more power is supplied by intermittent sources (such as solar and wind), creating new demand for forecasting, coordination, and flexible consumption to ensure that power grids can be operated safely and reliably.

3. The transition to low-carbon energy systems is driving the rapid growth of distributed power generation, distributed storage and advanced demand-response capabilities, which need to be orchestrated and integrated through more networked, transactional power grids.

Navigating these trends presents huge strategic and operational challenges to the energy system and to energy-intensive industries. This is where AI comes in: by creating an intelligent coordination layer across the generation, transmission and use of energy, AI can help energy-system stakeholders identify patterns and insights in data, learn from experience and improve system performance over time, and predict and model possible outcomes of complex, multivariate situations.

AI is already proving its value to the energy transition in multiple domains, driving measurable improvements in renewable energy forecasting, grid operations and optimization, coordination of distributed energy assets and demand-side management, and materials innovation and discovery. But while AI's application in the energy sector has proven promising so far, innovation and adoption remain limited. That presents a tremendous opportunity to accelerate the transition towards the zero-emission, highly efficient and interconnected energy system we need tomorrow.

AI holds far greater potential to accelerate the global energy transition, but that potential will only be realized if there is greater AI innovation, adoption and collaboration across the industry. That is why the World Economic Forum has today released Harnessing AI to Accelerate the Energy Transition, a new report aimed at defining and catalysing the actions that are needed.

The report, written in collaboration with BloombergNEF and Dena, establishes nine 'AI for the energy transition principles' aimed at the energy industry, technology developers and policy-makers. If adopted, these principles would accelerate the uptake of AI solutions that serve the energy transition by creating a common understanding of what is needed to unlock AI's potential and how to safely and responsibly adopt AI in the energy sector.

The principles define the actions needed to unlock AI's potential in the energy sector across three critical domains:

1. Governing the use of AI:

2. Designing AI that's fit for purpose:

3. Enabling the deployment of AI at scale:

AI is not a silver bullet, and no technology can replace aggressive political and corporate commitments to reducing emissions. But given the urgency, scale, and complexity of the global energy transition, we can't afford to leave any tools in the toolbox. Used well, AI will accelerate the energy transition while expanding access to energy services, encouraging innovation, and ensuring a safe, resilient, and affordable clean energy system. It is time for industry players and policy-makers to lay the foundations for this AI-enabled energy future, and to build a trusted and collaborative ecosystem around AI for the energy transition.

Written by

Espen Mehlum, Head of Energy, Materials & Infrastructure Program - Benchmarking & Regional Action, World Economic Forum

Dominique Hischier, Program Analyst - Energy, Materials & Infrastructure Platform, World Economic Forum

Mark Caine, Project Lead, Artificial Intelligence and Machine Learning, World Economic Forum

The views expressed in this article are those of the author alone and not the World Economic Forum.

Visit link:
Here's how AI will accelerate the energy transition - World Economic Forum

Read More..

An artificial intelligence approach for selecting effective teacher communication strategies in autism education | npj Science of Learning -…

Data collection

A data set was formed through structured classroom observations in 20 full-day sessions over 5 months in 2019 at a special school in East London where a diagnosis of ASC is a criterion for admission. Participants included three teachers (one male, two female), their teaching assistants (all female), and seven children (four males, three females) aged from 6 to 12 years across 3 classes. The children's P-scales range from P3 to P6; the P-scale commonly ranges from P1 to P8, with P1-P3 being developmental non-subject-specific levels, and with P4-P8 corresponding to expected levels for typical development at ages 5-6 [48]. In addition, the children are also described as social or language partners on the SCERTS scale used by the school. In our study, none of the participating students were classified as conversational partners. The attributes of the student cohort are presented in Supplementary Table 3.

A coding protocol was developed through an iterative process with the participating teachers, and a grid was used for recording teacher-student interaction observations. Comments and suggestions from the teachers were taken into consideration and reflected throughout the multiple revised drafts and the final versions of the coding protocol and recording grid. For each observation instance, we recorded the student identifier, time stamp, teaching objective, teaching type, the context for this teaching type, the student's observed emotional state, the teacher's communication strategy, and the corresponding student response (outcome). Where applicable we also recorded additional notes and the type of activity (e.g. yoga). Although notes were used for context and interpretation in the data analysis as a whole, they were not included in our machine learning function experiments given their free-form inconsistency. Table 1 details all the subcategories that were considered as inputs to the machine learning models. Up to two teaching types and teacher communications could be attributed to a single observation; the rest of the categories can only be represented by one subtype. For example, an observation coded as "3, academic, giving instruction/modelling, whole class, positive, verbal/gesture, full response" (the time stamp is omitted) represents that student no. 3, being in a positive emotional state, fully responded to a teacher's verbal and gesture instruction, when teaching was taking place in a whole-class environment, its type was modelling, and it had an overall academic objective. This may refer to an interaction instance where the teacher is delivering a yoga lesson to the whole class: the teacher is demonstrating a yoga move by gesturing while verbally explaining it and asking the students to do the same; the student then responds by doing the move with an observably happy expression.

All observed adult-student interactions during the school day, permitted by the teachers, were recorded. The aim was to rapidly record situation-strategy-outcome data points "in vivo" inside and outside the classroom. Locations of the observations outside the classroom include the playground, library, music room, main hall, canteen, therapy rooms, and garden. Overall, these resources were regularly used throughout the observational sessions. The number of instances recorded for each student varies slightly, from 753 to 880 (μ = 780, σ = 45), and in total a sample of 5460 full observations was collected.

From the 5460 observations we collected, only 5001 are distinct. If we ignore the student's response, the number of unique observations is reduced to 4880, and if we also ignore the teacher's communication strategy, this number becomes 4357. Hence, there are instances in our data that overlap, but this is expected given that teachers and students may perform similarly throughout a specific teaching session. The level of support for each teacher communication strategy is equal to 3128 (709) times for a verbal communication, 1717 (357) for using an object, 1642 (181) for a gesture, 1465 (575) for a physical prompt, and 981 (165) for a picture, where in parentheses we report the number of times the underpinned communication was the only one performed (from a maximum of two communications). Although the small student and teacher sample does not allow for generalisations, we see that teachers tend to engage verbally with students quite frequently (57.29%), either in combination with another communication or as the sole means of communication. The full student response rate for each communication strategy (irrespective of co-occurrence with another one) is equal to 64.02% (64.90%, 60.68%) for a picture, 60.92% (62.48%, 57.73%) for an object, 60.61% (64.34%, 53.56%) for a physical prompt, 57.67% (59.67%, 51.80%) for a gesture, and 53.20% (55.21%, 46.45%) for a verbal communication; the rates in the parentheses are breakdowns for the language and social partner SCERTS classifications, respectively, reaffirming that language partners are in general more responsive, with a more pronounced relative difference when verbal or physical prompts are deployed. In addition, performing two communications versus one is more effective in producing a full student response. In particular, the full, partial, and no response breakdowns for single communications are 50.58%, 21.84%, and 27.58%, compared to 60.01%, 21.82%, and 18.17% for two teacher communications. Although the presence of two communications naturally increases the probability of choosing a correct means of interaction, this outcome reaffirms the hypothesis that an incorrect communication strategy does not greatly affect the student when a desirable one co-occurs. The observed features with the greatest bivariate correlation with the student response are the negative emotional state of the student (r = 0.184, p < 0.001), the encouragement/praise teaching type (r = 0.124, p < 0.001), and the redirection teaching type (r = 0.124, p < 0.001).
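
As an illustration, the support counts and full-response rates above could be computed along the following lines. This is a minimal sketch, not the authors' code; the column names ("strategies", stored comma-separated with at most two entries, and "response") are assumptions for illustration.

```python
# A sketch of computing per-strategy support and full-response rates from
# the coded observation grid; column names are hypothetical.
import pandas as pd

df = pd.read_csv("observations.csv")  # hypothetical export of the coding grid

for s in ["verbal", "object", "gesture", "physical prompt", "picture"]:
    used = df["strategies"].str.contains(s)             # strategy present at all
    alone = used & ~df["strategies"].str.contains(",")  # sole communication used
    full_rate = (df.loc[used, "response"] == "full").mean()
    print(f"{s}: support={used.sum()} (sole={alone.sum()}), "
          f"full-response rate={full_rate:.2%}")
```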

A machine learning classification task aims to learn a function \(f: \mathbf{X} \to \mathbf{y}\), where \(\mathbf{X} \in \mathbb{R}^{m \times n}\) and \(\mathbf{y} \in \{1, \ldots, k\}^m\) denote the observations (inputs) and the response variable (outcomes), respectively; m, n, k represent the numbers of observations and outcomes, observation categories (features), and outcome classes, respectively. Here, in the most feature-inclusive case, we define X as an aggregation of six feature categories, namely student attributes (age, sex, P-level, SCERTS classification), teaching objective, teaching type, context for teaching type, the student's observed emotional state, and the teacher's communication strategy. All feature categories, apart from age, were coded as c-dimensional tuples of 1s and 0s, where c is the respective number of different subtypes for each category (Table 1), and ones are used to denote the activated subtype(s). Student age was coded as a real number from 0 to 1, using a linear mapping scheme, where 0 and 1 represent 5 and 12 years of age, respectively. The response variable y takes a binary definition representing two classes, a full response output versus otherwise. The rationale behind this merging was to generate a more balanced classification task (56.59% full student response labels) as well as to alleviate any issues arising from a miscategorisation of partial (21.86%) or no response (21.55%) outcomes.
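
The encoding just described could be realised as in the following sketch. It is an assumed implementation under hypothetical column names, not the authors' code: one-hot (or two-hot, for paired subtypes) tuples per category, age mapped linearly from [5, 12] to [0, 1], and a binary target.

```python
# A sketch of the input encoding: subtype indicator tuples, scaled age,
# and a binary "full response vs. otherwise" target.
import numpy as np
import pandas as pd

def encode(df: pd.DataFrame) -> tuple[np.ndarray, np.ndarray]:
    single = ["teaching_objective", "context", "emotional_state",
              "sex", "p_level", "scerts"]                # one active subtype
    paired = ["teaching_type", "communication"]          # up to two subtypes
    blocks = [pd.get_dummies(df[c].astype(str)) for c in single]
    blocks += [df[c].str.get_dummies(sep=",") for c in paired]
    age = (df["age"] - 5) / (12 - 5)                     # 5 -> 0, 12 -> 1
    X = np.hstack([b.to_numpy(dtype=float) for b in blocks]
                  + [age.to_numpy()[:, None]])
    y = (df["response"] == "full").to_numpy(dtype=int)   # full vs. otherwise
    return X, y
```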

We train and evaluate the performance of various machine learning functions in predicting the student's type of response. We deploy three classifiers broadly used in the literature: (a) a variant of logistic regression (LR) [55] that uses elastic net regularisation [56] for feature selection, (b) a random forest (RF) [57] with 2000 decision trees, and (c) a Gaussian process (GP) [58] with a composite covariance function (or kernel) that we describe below. We devise three problem formulations, where we incrementally add more elements to the observed data (input). In the first instance, we consider all observed categories apart from student attributes. Then, we include student attributes as part of the feature space and, to represent this change, augment the method abbreviations with "-". Finally, in both previous setups, we explore autoregression by including the observed data and student responses for up to the previous 5 teacher-student interactions. While performing autoregression, we maintain all three types of recorded student responses in the input data.
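
For concreteness, the first two classifier families could be instantiated as below with scikit-learn. This is a sketch: the 2000 trees come from the text, while the elastic net mixing grid and iteration budget are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the LR and RF baselines named above; the GP uses the composite
# additive kernel of Eqs. (1)-(4), sketched after those equations.
from sklearn.linear_model import LogisticRegressionCV
from sklearn.ensemble import RandomForestClassifier

# (a) Logistic regression with elastic net regularisation; hyperparameters
# chosen by cross-validation on the training folds, as described below.
lr = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                          l1_ratios=[0.1, 0.5, 0.9], max_iter=10000)

# (b) Random forest with 2000 decision trees.
rf = RandomForestClassifier(n_estimators=2000)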

Although logistic regression and random forests treat the increased input space without any particular intrinsic additive modelling, the modularity of the GP allows us to specify more customised covariance functions on these different inputs. GP models assume that \(f\) is a probability distribution over functions, denoted as \(f(\mathbf{x}) \sim \mathrm{GP}(\mu(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))\), where \(\mathbf{x}, \mathbf{x}'\) are rows of X, \(\mu(\cdot)\) is the mean function of the process, and \(k(\cdot, \cdot)\) is the covariance function (or kernel) that captures statistical relationships in the input space. We assume that \(\mu(\mathbf{x}) = 0\), a common setting for various downstream applications [59,60,61,62], and use the following incremental (through summation) covariance functions:

$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c')\,, \qquad (1)$$

$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{a}, \mathbf{a}') + k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c')\,, \qquad (2)$$

$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c') + k_{\mathrm{SE}}(\mathbf{x}_p, \mathbf{x}_p') + k_{\mathrm{SE}}(\mathbf{y}_p, \mathbf{y}_p')\,, \quad \text{and} \qquad (3)$$

$$k(\mathbf{x}, \mathbf{x}') = k_{\mathrm{SE}}(\mathbf{a}, \mathbf{a}') + k_{\mathrm{SE}}(\mathbf{x}_c, \mathbf{x}_c') + k_{\mathrm{SE}}(\mathbf{x}_p, \mathbf{x}_p') + k_{\mathrm{SE}}(\mathbf{y}_p, \mathbf{y}_p')\,, \qquad (4)$$

where \(k_{\mathrm{SE}}(\cdot, \cdot)\) denotes the squared exponential covariance function, \(\mathbf{x}_c\) denotes the current observation including the teacher's communication strategy, \(\mathbf{a}\) is the vector containing the student attributes, and \(\mathbf{x}_p\), \(\mathbf{y}_p\) denote the past observations and student response outcomes, respectively. Therefore, Eq. (1) refers to the kernel in the simplest task formulation where only currently observed data are used, Eq. (2) expands on Eq. (1) by adding a kernel for student attributes, and Eqs. (3) and (4) add kernels for including previous observations and student responses (autoregression). Using an additive problem formulation, where each kernel focuses on a part of the feature space, generates a simpler optimisation task and tends to provide better accuracy [63]. This is also confirmed by our empirical results.
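
A minimal sketch of the additive kernel in Eq. (4), written with GPy so that each squared-exponential (RBF) term acts on one block of columns via active_dims. This is an assumed implementation, not the authors' code; the column index ranges and dummy data are placeholders.

```python
# Sketch of the Eq. (4) kernel: one RBF term per feature block, summed.
import numpy as np
import GPy

rng = np.random.default_rng(0)
X = rng.random((200, 60))                        # placeholder design matrix
y = rng.integers(0, 2, (200, 1)).astype(float)   # placeholder binary response

a_idx  = list(range(0, 8))     # student attributes a
c_idx  = list(range(8, 30))    # current observation x_c (incl. strategy)
p_idx  = list(range(30, 52))   # past observations x_p
yp_idx = list(range(52, 60))   # past responses y_p

kernel = (GPy.kern.RBF(len(a_idx), active_dims=a_idx)
          + GPy.kern.RBF(len(c_idx), active_dims=c_idx)
          + GPy.kern.RBF(len(p_idx), active_dims=p_idx)
          + GPy.kern.RBF(len(yp_idx), active_dims=yp_idx))

# Bernoulli likelihood with Laplace inference, matching the approximation
# the authors cite for learning the GP hyperparameters.
model = GPy.core.GP(X, y, kernel=kernel,
                    likelihood=GPy.likelihoods.Bernoulli(),
                    inference_method=GPy.inference.latent_function_inference.Laplace())
model.optimize()
```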

We apply 10-fold cross-validation as follows. We randomly shuffle the observed samples (5460 in total) and then generate 10 equally sized folds. We use 9 of these folds to train a model and 1 to test, repeating this training-testing process 10 times so that all folds are used as test sets. By doing this we are solving a task whereby observations from the same student can exist in both the training and the test sets (although these observations are strictly distinct). This was an essential compromise given the limited number of different students (7). The exact same training and testing process (and identical data splits) is used for all classification models and problem formulations. We learn the regularisation hyperparameters of logistic regression by cross-validating on the training data; this may result in different choices for each fold. The hyperparameters of the GP models are learned using the Laplace approximation [58,64]. Performance is assessed using standard classification metrics, in particular accuracy, precision, recall, and their harmonic mean known as the F1 score. For completeness, we also assess the best-performing model by testing on data from a single student that is not included in the training set, repeating the same process for all students in our cohort (leave-one-student-out, 7-fold cross-validation; see SI for more details).
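
The two evaluation protocols could be expressed as in the following sketch; clf, X, y, and student_ids are assumed to exist (e.g. from the encoding sketch earlier), and the F1 score is one of the metrics named in the text.

```python
# Sketch of shuffled 10-fold CV and the leave-one-student-out check.
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

# Shuffled 10-fold CV: observations from the same student may appear in both
# training and test folds, though never the same observation.
tenfold = KFold(n_splits=10, shuffle=True, random_state=0)
f1_tenfold = cross_val_score(clf, X, y, cv=tenfold, scoring="f1")

# Leave-one-student-out: 7 folds, one per child in the cohort.
f1_loso = cross_val_score(clf, X, y, groups=student_ids,
                          cv=LeaveOneGroupOut(), scoring="f1")
```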

Ethical approval was granted by the Research Ethics Committee at the Institute of Education, University College London (United Kingdom), where the research was conducted. The parents/guardians of the participating children, the school management, and their teachers gave their written informed consent. All participant information has been anonymised. Raw data and derived data sets were securely stored on the researchers' encrypted computer systems with password protection.

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Link:
An artificial intelligence approach for selecting effective teacher communication strategies in autism education | npj Science of Learning -...

Read More..

New Traffic Sensor Uses Artificial Intelligence to Detect Any Vehicle on the Road – autoevolution

And naturally, the closer we get to smart intersections becoming more mainstream, the more technologies to power them go live, some of them with insanely advanced capabilities that nobody would have imagined some 10 years ago.

Iteris, for example, a company providing smart mobility infrastructure management, has come up with the world's first 1080p high-definition (HD) video and four-dimensional (4D) radar sensor with integrated artificial intelligence (AI) algorithms.

In plain English, this is a traffic monitoring sensor that authorities across the world can install in their systems to get 1080p (that's HD resolution) video as well as 4D radar data, using a technology bundling AI algorithms.

This means the new sensor is capable of offering insanely accurate detection, and just as expected, it can spot not only cars, but also trucks, bikes, and many other vehicle types. The parent company says the sensor has been optimized to also detect vulnerable road users, such as pedestrians.

In case you're wondering why a traffic management center (TMC) needs such advanced data, the benefits of this sensor go way beyond the simple approach where someone keeps an eye on the traffic in a certain intersection.

TMCs can be linked to connected cars, so the information collected by the sensor can be transmitted right back to the road, where new-generation vehicles can act accordingly. And this is why AI-powered detection is so important, as it offers extra accuracy, preventing errors and wrong information from being sent to connected cars.

In other words, it can help avoid collisions, reduce speed when pedestrians are detected, and overall optimize the traffic flow because, after all, everybody wants to get rid of traffic jams in the first place.

We're probably still many years away from the moment such complex sensors become more mainstream, but Iteris' new idea is living proof that the future is already here. Fingers crossed, however, that authorities across the world notice how much potential is hiding in this new-gen technology.

The rest is here:
New Traffic Sensor Uses Artificial Intelligence to Detect Any Vehicle on the Road - autoevolution

Read More..

New study will use artificial intelligence to improve treatments for people with multiple long-term conditions – University of Birmingham

The NIHR has awarded £2.5 million for new research led by the University of Birmingham that will use artificial intelligence (AI) to produce computer programmes and tools to help doctors improve the choice of drugs for patients with clusters of multiple long-term conditions.

Called the OPTIMAL study (OPTIMising therapies, discovering therapeutic targets and AI assisted clinical management for patients Living with complex multimorbidity), the research aims to understand how different combinations of long-term conditions and the medicines taken for these diseases interact over time to worsen or improve a patients health.

The study will be led by Dr Thomas Jackson and Professor Krish Nirantharakumar at the University of Birmingham and carried out in collaboration with the University of Manchester, University Hospitals Birmingham NHS Foundation Trust, NHS Greater Glasgow & Clyde, University of St Andrews, and the Medicines and Healthcare Products Regulatory Agency.

An estimated 14 million people in England are living with two or more long-term conditions, with two-thirds of adults aged over 65 expected to be living with multiple long-term conditions by 2035.

Dr Thomas Jackson, Associate Professor in Geriatric Medicine at the University of Birmingham, said: "Currently, when people have multiple long-term conditions, we treat each disease separately. This means we prescribe a different drug for each condition, which may not help people with complex multimorbidity, which is the term we use when patients have four or more long-term health problems.

"A drug for one disease can make another disease worse or better; however, at present we do not have information on the effect of one drug on a second disease. This means doctors do not have enough information to know which drug to prescribe to people with complex multimorbidity."

Krish Nirantharakumar, Professor in Health Data Science and Public Health at the University of Birmingham, added: "Through our research, we can group such people based on their mixes of disease. Then we can study the effects of a drug on each disease mix. This should help doctors prescribe better and reduce the number of drugs patients need. This will lead to changes in healthcare policy which would benefit most people with complex multimorbidity."

The research is one of a number of studies being funded by the NIHR's Artificial Intelligence for Multiple Long-Term Conditions (AIM) call, which is aligned to the aims of the NHSX AI Lab and combines data science and AI methods with health, care and social science expertise to identify new clusters of disease and understand how multiple long-term conditions develop over the life course.

The call will fund up to £23 million of research in two waves, supporting a pipeline of research and capacity building in multiple long-term conditions research. The first wave has invested nearly £12 million into three Research Collaborations, nine Development Awards and a Research Support Facility, including the University of Birmingham-led study.

Improving the lives of people with multiple long-term conditions and their carers through research is an area of strategic focus for the NIHR, with its ambitions set out in its NIHR Strategic Framework for Multiple Long-Term Conditions Research.

Professor Lucy Chappell, NIHR Chief Executive and chair of the AIM funding committee, said: "This large-scale investment in research will improve our understanding of clusters of multiple long-term conditions, including how they develop over a person's lifetime.

"Over time, findings from this new research will point to solutions that might prevent or slow down the development of further conditions. We will also look at how we shape treatment and care to meet the needs of people with multiple long-term conditions and their carers."

To date, the NIHR has invested £11 million into research on multiple long-term conditions through two calls in partnership with the Medical Research Council, offering both pump-priming funds and funding to tackle multimorbidity at scale.

See the original post here:
New study will use artificial intelligence to improve treatments for people with multiple long-term conditions - University of Birmingham

Read More..

AI Art: Kolkata Exhibition to Showcase Artworks Created With Assistance of Artificial Intelligence – Gadgets 360

With artificial intelligence (AI) and machine learning (ML) making inroads into what were hitherto exclusively human domains, like writing and driving, it was only a matter of time before artists too began experimenting with them. Many exhibition centres and auction houses around the world have begun taking an interest in art pieces created with AI. The latest on that list is an exhibition set to be held in Kolkata later this month. It will be India's first solo exhibition of AI art and will feature works of the pioneering artist Harshit Agarwal.

Emami Art, the Kolkata gallery hosting the exhibition, posed serious questions on its website about how AI will shape the artistic landscape. It started by asking whether AI art is truly the future of contemporary art and whether AI is a competitor or a collaborator. The exhibition, titled "EXO-stential - AI Musings on the Posthuman", will try to address these issues, the gallery said.

Usually, to create a piece of AI art, artists write algorithms keeping in mind a desired visual outcome. These algorithms give broad directions and allow the machine to learn a specific aesthetic by analysing thousands of images. The machine then creates an image based on what it has learned.

After the AI art form came into existence in 2015, the initial years were turbulent and led only to the creation of hauntingly familiar yet alien forms. The field has developed considerably in the last five years. Emami Art said it is trying to present the enlarged practice and diversity of AI art through this solo exhibition.

The exhibition will begin September 11 and will last till the end of the month. Emami Art described Harshit Agrawal as a pioneer in the developing genre of AI art who has worked with it since 2015. His work has been nominated twice for the top tech art prize, the Lumen.

In an Instagram post, Agarwal spoke about the exhibition: "Bringing together my #AI art practice of over 6 years since the inception of this field. Spanning themes beyond the novelty hype to explore themes of authorship, gender perceptions, deep rooted social inequities and biases, identity, seemingly universal notions of the everyday - all through this new lens of AI with its unique capabilities of complex data understanding and estrangement. Let's engage consciously with this beast we're increasingly being immersed in, journeying into the #posthuman, instead of being simply sucked into it!"

See more here:
AI Art: Kolkata Exhibition to Showcase Artworks Created With Assistance of Artificial Intelligence - Gadgets 360

Read More..

Artificial Intelligence in Medical Diagnostics Market by Component, Application, End-user and Region – Global Forecast to 2025 -…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) in Medical Diagnostics Market by Component (Software, Service), Application (In Vivo, Radiology, OBGY, MRI, CT, Ultrasound, IVD), End User (Hospital, Diagnostic Laboratory, Diagnostic Imaging Center) - Global Forecast to 2025" report has been added to ResearchAndMarkets.com's offering.

The global AI in medical diagnostics market is projected to reach USD 3,868 million by 2025 from USD 505 million in 2020, at a CAGR of 50.2% during the forecast period.

Growth in this market is primarily driven by government initiatives to increase the adoption of AI-based technologies, increasing demand for AI tools in the medical field, growing focus on reducing the workload of radiologists, influx of large and complex datasets, growth in funding for AI-based start-ups, and the growing number of cross-industry partnerships and collaborations.

Software segment is expected to grow at the highest CAGR

On the basis of component, the AI in medical diagnostics market is segmented into software and services. The services segment dominated this market in 2020, while the software segment is estimated to grow at a higher CAGR during the forecast period. Software solutions help healthcare providers gain a competitive edge despite the challenges of being short-staffed and facing increasing imaging scan volumes. This is a key factor driving the growth of the software segment.

Hospitals segment to account for the largest market size in the AI in medical diagnostics market

Based on end user, the AI in medical diagnostics market is segmented into hospitals, diagnostic imaging centers, diagnostic laboratories, and other end users. The hospitals segment commanded the largest share of 64.1% of this market in 2019. The large share of this segment can be attributed to the rising number of diagnostic imaging procedures performed in hospitals, the growing inclination of hospitals toward the automation and digitization of radiology patient workflow, increasing adoption of minimally invasive procedures in hospitals to improve the quality of patient care, and the rising adoption of advanced imaging modalities to improve workflow efficiency.

North America To Witness Significant Growth From 2020 to 2025

The AI in medical diagnostics market has been segmented into four main regional segments, namely, North America, Europe, the Asia Pacific, and the Rest of the World. In 2019, North America accounted for the largest market share of 37.6%. However, the APAC market is projected to register the highest CAGR of 53.2% during the forecast period, primarily due to the growth strategies adopted by companies in emerging markets, improved medical diagnostic infrastructure, increasing geriatric population, rising prevalence of cancer, and the implementation of favorable government initiatives.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/4bmwui

Visit link:
Artificial Intelligence in Medical Diagnostics Market by Component, Application, End-user and Region - Global Forecast to 2025 -...

Read More..

Instagram will soon ask for your age and use artificial intelligence to detect when you're lying – KTLA

Instagram might soon ask for your birthday.

Follow Rich DeMuro on Instagram for more tech news, tips and tricks.

Facebook says the new question is meant to create a safer, more private experience for young users. They'll use the information to weed out content and advertising that might not be appropriate for them.

Starting now, Instagram will show a notification asking for your date of birth. You can say no a handful of times, but it might impact your ability to continue using the app.

You might also see a warning screen on a post that's sensitive or graphic; if you haven't already confirmed your birthday, you'll have to enter the information to see the post.

Facebook says they know some people will fib about their date of birth, but they have a solution for that, too. The company has already explained how they're using artificial intelligence to estimate a user's age, including data scraped from posts that mention "Happy Birthday".

Keep in mind, Instagram will only show the new birthday prompt to users that haven't previously given their age. If you're curious whether you've already shared the information (including through a linked Facebook account), you can go to Instagram > Settings > Account > Personal Information.

Listen to the Rich on Tech podcast for answers to your tech questions.

Here is the original post:
Instagram will soon ask for your age and use artificial intelligence to detect when you're lying - KTLA

Read More..

Quarky AI learning companion lets kids play with artificial intelligence and robotics – Gadget Flow


Made for children from 7 to 14 years old, the Quarky AI learning companion teaches STEM skills in a fun way. Your child can learn about artificial intelligence and robotics with this gadget. In fact, this futuristic companion does so many things: it can be a gesture-controlled robot, follow commands, recognize objects, plan paths, and more. It helps children learn advanced concepts in a fun, hands-on, and engaging way. Use it with the connected and interactive online courses and live sessions that'll help kids learn to code. With a very portable size, it's easy to take Quarky with you anywhere and pair it up with your smartphone, tablet, or laptop on the go. Whether you're new to coding or an expert at it, you'll love Quarky and can use Blocks or Python with it. Moreover, the plug-and-play interface offers a hassle-free setup so you can get going.

Read more from the original source:
Quarky AI learning companion lets kids play with artificial intelligence and robotics - Gadget Flow

Read More..