
Tour shows off renovations in UWM engineering building – UWM … – University of Wisconsin-Milwaukee

A student in the lab of Assistant Professor Jerald Thomas (left) tries to navigate a virtual world while wearing virtual reality goggles. (UWM Photo/Troye Fox)

Alyssa Schnorenberg (left), a scientist in the lab of Professor Brooke Slavens, and student Anthony Nguyen demonstrate how they visualize the muscle activation necessary for normal human movement. (UWM Photo/Troye Fox)

Professor Habib Rahman (center) and his lab members use a robotic arm they designed to take a water bottle from Brian Thompson (left), UWM chief innovation and partnership officer. (UWM Photo/Troye Fox)

Associate Professor Ben Church (right) and graduate students Ian Smith (left) and Elmer Prenzlow are researching how to increase the tensile strength of a metal alloy often used in industry when it is exposed to high temperatures. Tensile strength refers to the maximum stress that a material can bear before breaking when it is stretched or pulled. (UWM Photo/Troye Fox)

Student Patrick Severson explains how internal components of a tire, made of various carbon fiber composites developed in Associate Professor Rani El Hajjar's lab, affect tire performance. (UWM Photo/Troye Fox)

With Assistant Professor Chanyeop Park (left) and UWM Chancellor Mark Mone looking on, Hassan Abdallah (right) describes projects in the Center for Sustainable Electrical Energy Systems to Universities of Wisconsin President Jay Rothman. The event Nov. 2 celebrated new research labs in the College of Engineering & Applied Science. (UWM Photo/Troye Fox)

Jay Rothman, Universities of Wisconsin president, holds up a gift from the engineering college: a 3D-printed miniature of the UWM Pounce statue. (UWM Photo/Troye Fox)

Some of the newer faculty in biomedical engineering spoke with guests about the research in their respective labs, including (from left) Assistant Professors Mahsa Dabagh, Jacob Rammer, Qingsu Cheng and Priya Premnath. (UWM Photo/Troye Fox)

Speakers at the event included (from left) CEAS Associate Dean Andrew Graettinger; CEAS Dean Brett Peters; Universities of Wisconsin President Jay Rothman; UWM Chancellor Mark Mone; Craig Rigby, vice president of technology at Clarios; UWM Provost Andrew Daire; and CEAS Associate Dean Prasenjit Guptasarma. (UWM Photo/Troye Fox)

A two-year renovation has turned the ninth and 10th floors of UWM's Engineering & Mathematical Sciences building into a bright and modern research area and workspace that encourages collaboration.

The College of Engineering & Applied Science showed off the new facilities last week during an open house that included Universities of Wisconsin President Jay Rothman.

"We wanted to show off the incredible transformation. But we're also highlighting the research and the work that our faculty and students are doing in many different areas," Dean Brett Peters said. "It's all about fostering collaboration."

Original post:

Tour shows off renovations in UWM engineering building - UWM ... - University of Wisconsin-Milwaukee

Read More..

U.S. Army Corps of Engineers Great Lakes and Ohio River Division … – lrd.usace.army.mil

The U.S. Army Corps of Engineers' (USACE) Great Lakes and Ohio River Division (LRD) is scheduled to migrate the seven LRD district websites to http://www.lrd.usace.army.mil by Jan. 15, 2024. The goal is to create a digital platform that's easy to manage and simple to navigate for both first-time and repeat visitors.

The Great Lakes and Ohio River Division, www.lrd.usace.army.mil, is scheduled to migrate the sites below.

By centralizing, we can quickly identify gaps, streamline compliance, reduce operational costs, decrease error, eliminate task duplication, and above all, build an enjoyable online experience for everyone. It's also important for each district to maintain their unique identity and mission sets. It's a tough challenge but we're ready for the task!

If you find documents missing after the migration, please contact us and we'll help you find what you're looking for.

Learn more about the president's directive to ensure all federal agencies deliver a digital-first public experience through the 21st Century Integrated Digital Experience Act at digital.gov.

Excerpt from:

U.S. Army Corps of Engineers Great Lakes and Ohio River Division ... - lrd.usace.army.mil

Read More..

Privacy Engineering Domains – International Association of Privacy Professionals

Last Updated: November 2023

Privacy engineering is a critically important discipline in the privacy community. It involves applying systematic, scientific or methodological approaches to include privacy requirements in the design, development and operations of systems and services in various domains, including software development, system design, data science, physical architecture, process design, information technology infrastructure and human-computer interaction/user experience design.

These resources are intended to facilitate a deeper understanding of and collaboration within the increasingly important field of privacy engineering.

The first chart in the series defines privacy engineering and each subsequent chart gives an illustrative overview of some privacy engineering domains, highlighting key responsibilities, skills and organizational governance. The upcoming charts in the series are listed here.

Privacy Engineering Domains

Published charts in series

Defining Privacy Engineering

This chart provides a broad definition of privacy engineering and highlights various domains in which privacy engineers can significantly impact the protection of privacy.

View Chart

IT infrastructure architect

This chart focuses on IT infrastructure architects, whose responsibilities include developing IT infrastructure to ensure data flows between systems have data-use controls in place.

View Chart

Coming soon

Listed below are the upcoming charts to be published in this series.

See original here:

Privacy Engineering Domains - International Association of Privacy Professionals

Read More..

Machine Learning and Artificial Intelligence Tools: The Benefits … – Quality Magazine


The rest is here:
Machine Learning and Artificial Intelligence Tools: The Benefits ... - Quality Magazine

Read More..

Revolutionizing Food Analysis: The Synergy of Artificial Instruments … – Food Safety Magazine


More:
Revolutionizing Food Analysis: The Synergy of Artificial Instruments ... - Food Safety Magazine

Read More..

Ironscales targets image-based email security threats with new … – SiliconANGLE News

Israeli phishing protection startup Ironscales Ltd. today announced an update to its platform capabilities aimed at bolstering defenses against the surge in image-based phishing attacks, including those using QR codes.

The Ironscales Fall 2023 release introduces sophisticated machine learning protections tailored to counter image-based threats, as well as automation features for phishing simulation testing. The update covers protection from quishing, or QR code phishing, business email compromise, and image-based attacks that bypass conventional language processing defenses.

The updates are aimed at addressing the rapid advancement in generative artificial intelligence technology, which has significantly expanded the tools available to cyber criminals. Ironscales data analysts observed an alarming 215% increase in image-centric phishing emails in the third quarter of 2023, with the use of malicious QR codes a particular standout.

Ironscales' platform now employs optical character recognition and deep-text and image processing to identify and thwart such attacks before they reach end-users. The new features integrate enhanced image recognition and analysis into the company's behavioral analysis framework.
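To make the idea concrete, here is a minimal, hypothetical sketch of screening an email image with off-the-shelf OCR and QR decoding. It is not Ironscales' pipeline; the library choices (pytesseract, pyzbar) and the keyword list are assumptions used purely for illustration.

```python
# Illustrative sketch only: flag an email image that contains phishing-style
# text or an embedded QR code. A production system would score and analyze
# behavior rather than hard-flag on keywords.
from PIL import Image
import pytesseract                      # OCR engine wrapper
from pyzbar.pyzbar import decode        # QR/barcode decoder

SUSPICIOUS_PHRASES = {"verify your account", "password expired", "urgent payment"}

def scan_email_image(path: str) -> dict:
    img = Image.open(path)
    text = pytesseract.image_to_string(img).lower()    # pull text out of the image
    hits = [p for p in SUSPICIOUS_PHRASES if p in text]
    qr_payloads = [d.data.decode("utf-8", "ignore") for d in decode(img)]  # embedded QR links
    return {
        "suspicious_phrases": hits,
        "qr_urls": qr_payloads,
        "flag": bool(hits or qr_payloads),
    }
```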

A new autonomous phishing simulation testing functionality reduces processing time for information technology and security teams by creating timely and relevant simulation campaigns. Ironscales customers can skip the manual setup process and put their phishing simulation testing on autopilot, ensuring they deliver phishing simulations based on real-world examples of email attacks.

The release also delivers enhanced reporting for organizational visibility and improved employee awareness. According to the company, the reports include metrics and a comprehensive summary of simulation testing campaign results that teams can compare against industry benchmarks to identify training gaps, measure effectiveness and improve future campaign strategy.

"Phishing threats are rapidly evolving in sophistication and it's more crucial than ever for organizations to ensure their employees are trained and prepared so they can be a vital layer of defense against these attacks," Chief Executive Eyal Benishti said. "Our job is to take the burden off security teams for threat detection and training of their employees. We think that our new Fall '23 release is going to do just that."

Ironscales was last in the news in June when it launched an artificial intelligence tool for Microsoft Outlook designed to empower users in threat detection and reporting. Called Themis Co-pilot, the service gives users the necessary tools to detect and report emerging threats, regardless of their role or cybersecurity expertise.


Continued here:
Ironscales targets image-based email security threats with new ... - SiliconANGLE News

Read More..

How AI could lead to a better understanding of the brain – Nature.com

Can a computer be programmed to simulate a brain? It's a question mathematicians, theoreticians and experimentalists have long been asking, whether spurred by a desire to create artificial intelligence (AI) or by the idea that a complex system such as the brain can be understood only when mathematics or a computer can reproduce its behaviour. To try to answer it, investigators have been developing simplified models of brain neural networks since the 1940s [1]. In fact, today's explosion in machine learning can be traced back to early work inspired by biological systems.

However, the fruits of these efforts are now enabling investigators to ask a slightly different question: could machine learning be used to build computational models that simulate the activity of brains?

At the heart of these developments is a growing body of data on brains. Starting in the 1970s, but more intensively since the mid-2000s, neuroscientists have been producing connectomes: maps of the connectivity and morphology of neurons that capture a static representation of a brain at a particular moment. Alongside such advances have been improvements in researchers' abilities to make functional recordings, which measure neural activity over time at the resolution of a single cell. Meanwhile, the field of transcriptomics is enabling investigators to measure the gene activity in a tissue sample, and even to map when and where that activity is occurring.

So far, few efforts have been made to connect these different data sources or collect them simultaneously from the whole brain of the same specimen. But as the level of detail, size and number of data sets increases, particularly for the brains of relatively simple model organisms, machine-learning systems are making a new approach to brain modelling feasible. This involves training AI programs on connectomes and other data to reproduce the neural activity you would expect to find in biological systems.

Several challenges will need to be addressed for computational neuroscientists and others to start using machine learning to build simulations of entire brains. But a hybrid approach that combines information from conventional brain-modelling techniques with machine-learning systems that are trained on diverse data sets could make the whole endeavour both more rigorous and more informative.

The quest to map a brain began nearly half a century ago, with a painstaking 15-year effort in the roundworm Caenorhabditis elegans [2]. Over the past two decades, developments in automated tissue sectioning and imaging have made it much easier for researchers to obtain anatomical data, while advances in computing and automated image analysis have transformed the analysis of these data sets [2].

Connectomes have now been produced for the entire brain of C. elegans [3], larval [4] and adult [5] Drosophila melanogaster flies, and for tiny portions of the mouse and human brain (one thousandth and one millionth, respectively) [2].


The anatomical maps produced so far have major holes. Imaging methods are not yet able to map electrical connections at scale alongside the chemical synaptic ones. Researchers have focused mainly on neurons, even though non-neuronal glial cells, which provide support to neurons, seem to play a crucial part in the flow of information through nervous systems [6]. And much remains unknown about what genes are expressed and what proteins are present in the neurons and other cells being mapped.

Still, such maps are already yielding insights. In D. melanogaster, for example, connectomics has enabled investigators to identify the mechanisms behind the neural circuits responsible for behaviours such as aggression [7]. Brain mapping has also revealed how information is computed within the circuits responsible for the flies' knowing where they are and how they can get from one place to another [8]. In zebrafish (Danio rerio) larvae, connectomics has helped to uncover the workings of the synaptic circuitry underlying the classification of odours [9], the control of the position and movement of the eyeball [10] and navigation [11].

Efforts that might ultimately lead to a whole mouse brain connectome are under way, although with current approaches this would probably take a decade or more. A mouse brain is almost 1,000 times bigger than the brain of D. melanogaster, which consists of roughly 150,000 neurons.

Alongside all this progress in connectomics, investigators have been capturing patterns of gene expression with increasing levels of accuracy and specificity using single-cell and spatial transcriptomics. Various technologies are also allowing researchers to make recordings of neural activity across entire brains in vertebrates for hours at a time. In the case of the larval zebrafish brain, that means making recordings across nearly 100,000 neurons [12]. These technologies include proteins with fluorescent properties that change in response to shifts in voltage or calcium levels, and microscopy techniques that can image living brains in 3D at the resolution of a single cell. (Recordings of neural activity made in this way provide a less accurate picture than electrophysiology recordings, but a much better one than non-invasive methods such as functional magnetic resonance imaging.)

When trying to model patterns of brain activity, scientists have mainly used a physics-based approach. This entails generating simulations of nervous systems or portions of nervous systems using mathematical descriptions of the behaviour of real neurons, or of parts of real nervous systems. It also entails making informed guesses about aspects of the circuit, such as the network connectivity, that have not yet been verified by observations.

In some cases, the guesswork has been extensive (see 'Mystery models'). But in others, anatomical maps at the resolution of single cells and individual synapses have helped researchers to refute and generate hypotheses [4].

A lack of data makes it difficult to evaluate whether some neural-network models capture what happens in real systems.

The original aim of the controversial European Human Brain Project, which wrapped up in September, was to computationally simulate the entire human brain. Although that goal was abandoned, the project did produce simulations of portions of rodent and human brains (including tens of thousands of neurons in a model of a rodent hippocampus), on the basis of limited biological measures and various synthetic data-generation procedures.

A major problem with such approaches is that in the absence of detailed anatomical or functional maps, it is hard to assess to what degree the resulting simulations accurately capture what is happening in biological systems [20].

Neuroscientists have been refining theoretical descriptions of the circuit that enables D. melanogaster to compute motion for around seven decades. Since it was completed in 2013 [13], the motion-detection-circuit connectome, along with subsequent larger fly connectomes, has provided a detailed circuit diagram that has favoured some hypotheses about how the circuit works over others.

Yet data collected from real neural networks have also highlighted the limits of an anatomy-driven approach.


A neural-circuit model completed in the 1990s, for example, contained a detailed analysis of the connectivity and physiology of the roughly 30 neurons comprising the crab (Cancer borealis) stomatogastric ganglion, a structure that controls the animal's stomach movements [14]. By measuring the activity of the neurons in various situations, researchers discovered that even for a relatively small collection of neurons, seemingly subtle changes, such as the introduction of a neuromodulator, a substance that alters properties of neurons and synapses, completely change the circuit's behaviour. This suggests that even when connectomes and other rich data sets are used to guide and constrain hypotheses about neural circuits, today's data might be insufficiently detailed for modellers to be able to capture what is going on in biological systems [15].

This is an area in which machine learning could provide a way forward.

Guided by connectomic and other data to optimize thousands or even billions of parameters, machine-learning models could be trained to produce neural-network behaviour that is consistent with the behaviour of real neural networks measured using cellular-resolution functional recordings.

Such machine-learning models could combine information from conventional brain-modelling techniques, such as the Hodgkin-Huxley model, which describes how action potentials (a change in voltage across a membrane) in neurons are initiated and propagated, with parameters that are optimized using connectivity maps, functional-activity recordings or other data sets obtained for entire brains. Or machine-learning models could comprise black-box architectures that contain little explicitly specified biological knowledge but billions or hundreds of billions of parameters, all empirically optimized.
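For reference, the Hodgkin-Huxley equations mentioned above can be integrated for a single neuron in a few lines. The sketch below uses standard textbook parameter values and forward Euler integration; a brain-scale model would instead fit such parameters to connectomic and functional data.

```python
# Minimal single-compartment Hodgkin-Huxley simulation (standard parameters).
import numpy as np

def gating_rates(v):
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387               # reversal potentials, mV
    v, m, h, n = -65.0, 0.05, 0.6, 0.32                 # resting-state initial values
    trace = []
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = gating_rates(v)
        m += dt * (am * (1 - m) - bm * m)               # gating-variable kinetics
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m                 # membrane equation
        trace.append(v)
    return np.array(trace)                              # membrane voltage in mV over time
```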

Researchers could evaluate such models, for instance, by comparing their predictions about the neural activity of a system with recordings from the actual biological system. Crucially, they would assess how the model's predictions compare when the machine-learning program is given data that it wasn't trained on, as is standard practice in the evaluation of machine-learning systems.
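A minimal sketch of that evaluation step might look like the following, assuming activity is stored as time-by-neuron arrays; the per-neuron correlation metric is an illustrative choice, not one prescribed in the article.

```python
# Compare predicted activity against held-out recordings the model never saw.
import numpy as np

def per_neuron_correlation(predicted: np.ndarray, recorded: np.ndarray) -> np.ndarray:
    """predicted, recorded: arrays of shape (time, neurons) from the held-out set."""
    p = predicted - predicted.mean(axis=0)
    r = recorded - recorded.mean(axis=0)
    denom = np.sqrt((p**2).sum(axis=0) * (r**2).sum(axis=0)) + 1e-12
    return (p * r).sum(axis=0) / denom      # one correlation value per neuron

# Example usage (hypothetical arrays):
# score = np.median(per_neuron_correlation(model_output, heldout_recording))
```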

Axonal projections of neurons in a mouse brain. Credit: Adam Glaser, Jayaram Chandrashekar, Karel Svoboda, Allen Institute for Neural Dynamics

This approach would make brain modelling that encompasses thousands or more neurons more rigorous. Investigators would be able to assess, for instance, whether simpler models that are easier to compute do a better job of simulating neural networks than do more complex ones that are fed more detailed biophysical information, or vice versa.

Machine learning is already being harnessed in this way to improve understanding of other hugely complex systems. Since the 1950s, for example, weather-prediction systems have generally relied on carefully constructed mathematical models of meteorological phenomena, with modern systems resulting from iterative refinements of such models by hundreds of researchers. Yet, over the past five years or so, researchers have developed several weather-prediction systems using machine learning. These contain fewer assumptions in relation to how pressure gradients drive changes in wind velocity, for example, and how that in turn moves moisture through the atmosphere. Instead, millions of parameters are optimized by machine learning to produce simulated weather behaviour that is consistent with databases of past weather patterns [16].

This way of doing things does present some challenges. Even if a model makes accurate predictions, it can be difficult to explain how it does so. Also, models are often unable to make predictions about scenarios that were not included in the data they were trained on. A weather model trained to make predictions for the days ahead has trouble extrapolating that forecast weeks or months into the future. But in some cases, for predictions of rainfall over the next several hours, machine-learning approaches are already outperforming classical ones [17]. Machine-learning models offer practical advantages, too; they use simpler underlying code, and scientists with less specialist meteorological knowledge can use them.

On the one hand, for brain modelling, this kind of approach could help to fill in some of the gaps in current data sets and reduce the need for ever-more detailed measurements of individual biological components, such as single neurons. On the other hand, as more comprehensive data sets become available, it would be straightforward to incorporate the data into the models.

To pursue this idea, several challenges will need to be addressed.

Machine-learning programs will only ever be as good as the data used to train and evaluate them. Neuroscientists should therefore aim to acquire data sets from the whole brain of specimens even from the entire body, should that become more feasible. Although it is easier to collect data from portions of brains, modelling a highly interconnected system such as a neural network using machine learning is much less likely to generate useful information if many parts of the system are absent from the underlying data.

Researchers should also strive to obtain anatomical maps of neural connections and functional recordings (and perhaps, in the future, maps of gene expression) from whole brains of the same specimen. Currently, any one group tends to focus on obtaining only one of these not on acquiring both simultaneously.


With only 302 neurons, the C. elegans nervous system might be sufficiently hard-wired for researchers to be able to assume that a connectivity map obtained from one specimen would be the same for any other, although some studies suggest otherwise [18]. But for larger nervous systems, such as those of D. melanogaster and zebrafish larvae, connectome variability between specimens is significant enough that brain models should be trained on structure and function data acquired from the same specimen.

Currently, this can be achieved only in two common model organisms. The bodies of C. elegans and larval zebrafish are transparent, which means researchers can make functional recordings across the organisms' entire brains and pinpoint activity to individual neurons. Immediately after such recordings are made, the animal can be killed, embedded in resin and sectioned, and anatomical measurements of the neural connections mapped. In the future, however, researchers could expand the set of organisms for which such combined data acquisitions are possible, for instance by developing new non-invasive ways to record neural activity at high resolution, perhaps using ultrasound.

Obtaining such multimodal data sets in the same specimen will require extensive collaboration between researchers, investment in big-team science and increased funding-agency support for more holistic endeavours [19]. But there are precedents for this type of approach, such as the US Intelligence Advanced Research Projects Activity's MICrONS project, which between 2016 and 2021 obtained functional and anatomical data for one cubic millimetre of mouse brain.

Besides acquiring these data, neuroscientists would need to agree on the key modelling targets and the quantitative metrics by which to measure progress. Should a model aim to predict the behaviour of a single neuron on the basis of a past state or of an entire brain? Should the activity of an individual neuron be the key metric, or should it be the percentage of hundreds of thousands of neurons that are active? Likewise, what constitutes an accurate reproduction of the neural activity seen in a biological system? Formal, agreed benchmarks will be crucial to comparing modelling approaches and tracking progress over time.

Lastly, to open up brain-modelling challenges to diverse communities, including computational neuroscientists and specialists in machine learning, investigators would need to articulate to the broader scientific community what modelling tasks are the highest priority and which metrics should be used to evaluate a model's performance. WeatherBench, an online platform that provides a framework for evaluating and comparing weather forecasting models, provides a useful template [16].

Some will question, and rightly so, whether a machine-learning approach to brain modelling will be scientifically useful. Could the problem of trying to understand how brains work simply be traded for the problem of trying to understand how a large artificial network works?

Yet, the use of a similar approach in a branch of neuroscience concerned with establishing how sensory stimuli (for example, sights and smells) are processed and encoded by the brain is encouraging. Researchers are increasingly using classically modelled neural networks, in which some of the biological details are specified, in combination with machine-learning systems. The latter are trained on massive visual or audio data sets to reproduce the visual or auditory capabilities of nervous systems, such as image recognition. The resulting networks demonstrate surprising similarities to their biological counterparts, but are easier to analyse and interrogate than the real neural networks.

For now, perhaps it's enough to ask whether the data from current brain mapping and other efforts can train machine-learning models to reproduce neural activity that corresponds to what would be seen in biological systems. Here, even failure would be interesting: a signal that mapping efforts must go even deeper.

See original here:
How AI could lead to a better understanding of the brain - Nature.com

Read More..

Machine Learning Could Be Used to Better Predict Floods – IEEE Spectrum

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

As the frequency of extreme weather events has risen in recent years, there is a growing need for accurate and precise hydrological knowledge in order to anticipate catastrophic flooding. Hydrology, the study of the Earth's water cycle, has played a big role in human civilization for thousands of years. However, in a recent paper, a team of researchers argue that hydrology's outdated methodology is holding the field back, and that it is time for the field to move on from complex theoretical models to predictive models built using machine learning algorithms.

Hydrologists and computer network researchers collaborated on a proof-of-concept machine learning model that can make hydrological predictions. Hydrology models already exist, said Andrea Zanella, professor of information engineering at the University of Padova in Italy, but those traditional models are mathematically complex and require too many input parameters to be feasible.

Using machine learning techniques, researchers were able to train a model that could, using the first 30 minutes of a storm, predict occurrences of water runoff or flooding up to an hour before they might happen. Zanella, who is also coauthor on the study, said that the study was only the first step towards building a model that would ideally predict the occurrence of water runoff with a few hours of lead time, which would give people more time to prepare or evacuate an area if necessary.


"The work towards reaching that goal is not simple at all," Zanella said. "But the methodology that we propose seems to be a first step towards that."

Researchers trained their machine learning model with input data parameters like rainfall and atmospheric pressure obtained from sensors at weather stations. Their output data parameters, like soil absorption and runoff volume, were a combination of data they collected and synthetic data generated using traditional theoretical models. Synthetic data was necessary, Zanella said, because there is a lack of the kind of data necessary to build dependable machine learning models for hydrology.
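As a rough illustration of that setup (not the authors' actual model), one could train a standard regressor to map the first 30 minutes of rainfall and pressure readings to runoff an hour later; the variable names, feature layout and synthetic data below are all assumptions made for the sketch.

```python
# Illustrative regression from early-storm sensor readings to later runoff.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_storms = 500
rain_30min = rng.gamma(2.0, 2.0, size=(n_storms, 6))     # six 5-minute rainfall readings (mm)
pressure = rng.normal(1000.0, 8.0, size=(n_storms, 1))    # atmospheric pressure at onset (hPa)
X = np.hstack([rain_30min, pressure])
runoff_1h = 0.6 * rain_30min.sum(axis=1) + rng.normal(0, 1.0, n_storms)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, runoff_1h, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))           # evaluate on storms not seen in training
```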

The lack of data is the result of current data collection practices. Currently, hydrological data is collected using sensors at predetermined time intervals, usually every few hours, or even days. This method of data collection is inefficient because only a small proportion of the collected data is useful for modeling. Precipitation like rain or snow happens relatively infrequently, so sensors may not record any data at all during a downpour. And when they do, they usually won't have enough data points to capture a storm's progression in much detail.

In their study, researchers suggest that more sensors and a variable rate of data collection may help solve the problem. Ideally, sensors would significantly ramp up data collection when there's precipitation and slow down collection when conditions are fair.
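A variable sampling rate could be as simple as the following sketch, where the thresholds and intervals are made-up illustrative values rather than figures from the study.

```python
# Shorten the sensor's sampling interval during rain, relax it in fair weather.
def next_sampling_interval(rain_rate_mm_per_h: float) -> int:
    """Return the wait time in seconds before the next sensor reading."""
    if rain_rate_mm_per_h >= 10.0:      # heavy rain: sample every minute
        return 60
    if rain_rate_mm_per_h > 0.0:        # light rain: every 10 minutes
        return 600
    return 6 * 3600                     # dry conditions: every 6 hours
```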

Output data like the absorption of water by the soil is especially difficult to come by, even though it is important for building machine learning models by matching observations with predictions about runoff effects. The difficulty is in the need to take soil samples and analyze those samples, which is both labor intensive and time consuming.

Zanella said that weather sensors should also incorporate some form of data preprocessing. Currently, researchers downloading data from sensors must sift through a large amount of data to find useful precipitation data. That's not only time-consuming, but it also uses space that could instead store more relevant data. If data processing were to occur automatically at weather stations, it could help clean up the data and make data storage more efficient.

The study also stressed the importance of improving data visualization tools. Because hydrology has important practical applications, hydrological information should be easy to understand for a wide audience from diverse technical backgrounds, but that isn't the case today. For example, graphs that show the intensity of rainfall over time, called hyetographs, are especially notorious for being difficult to understand.

"In most cases, when you look at the management of water resources, these people who are in charge are not [technical] experts," Zanella said. "So we need to also develop some visualization tools that help these people to understand."

Zanella said researchers from different disciplines will need to collaborate to significantly advance the field of hydrology. He hoped more researchers with wireless communications and networking backgrounds would work in the field to help tackle its challenges.

The researchers published their work on 25 September in IEEE Access.


More here:
Machine Learning Could Be Used to Better Predict Floods - IEEE Spectrum

Read More..

Ethical Machine Learning with Explainable AI and Impact Analysis – InfoQ.com

As more decisions are made or influenced by machines, there's a growing need for a code of ethics for artificial intelligence. The main question is, "I can build it, but should I?" Explainable AI can provide checks and balances for fairness and explainability, and engineers can analyze the system's impact on people's lives and mental health.

Kesha Williams spoke about ethical machine learning at NDC Oslo 2023.

In the pre-machine learning world, humans made hiring, advertising, lending, and criminal sentencing decisions, and these decisions were often governed by laws that regulated the decision-making processes regarding fairness, transparency, and equity, Williams said. But now machines make or heavily influence a lot of these decisions.

A code of ethics is needed because machines can not only imitate and enhance human decision-making, but they can also amplify human prejudices, Williams said.

When people discuss ethical AI, you'll hear several terms: fairness, transparency, responsibility, and human rights, Williams mentioned. The overall goals are to avoid perpetuating bias, to consider the potential consequences, and to mitigate negative impacts.

According to Williams, ethical AI boils down to one question:

I can build it, but should I? And if I do build it, what guardrails are in place to protect the person that's the subject of the AI?

This is at the heart of ethics in AI, Williams said.

According to Williams, ethics and risks can be incorporated using explainable AI, which would help us understand how the models make decisions:

Explainable AI seeks to bake in checks and balances for fairness and explainability during each stage of the machine learning lifecycle: problem formation, dataset construction, algorithm selection, training, testing, deployment, monitoring, and feedback.
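One concrete check of this kind, sketched below with placeholder data, is permutation feature importance on a held-out set. It is an example of an explainability probe at the testing and monitoring stages, not a technique Williams specifically prescribes.

```python
# Which inputs does the model actually rely on? Shuffle each feature on a
# held-out set and measure the drop in score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: score drop = {imp:.3f}")   # large drops mark heavily-used features
```

If a sensitive attribute (or a proxy for one) shows a large score drop, that is a prompt to revisit the dataset and the problem formulation.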

We all have a duty as engineers to look at the AI/ML systems we're developing from a moral and ethical standpoint, Williams said. Given the broad societal impact, mindlessly implementing these systems is no longer acceptable.

As engineers, we must first analyze these systems' impact on people's lives and mental health and incorporate bias checks and balances at every stage of the machine learning lifecycle, Williams concluded.

InfoQ interviewed Kesha Williams about ethical machine learning.

InfoQ: How does machine learning differ from traditional software development?

Kesha Williams: In traditional software development, developers write code to tell the machine what to do, line-by-line, using programming languages like Java, C#, JavaScript, Python, etc. The software spits out the data, which we use to solve a problem.

Machine learning differs from traditional software development in that we give the machine the data first, and it writes the code (i.e., the model) to solve the problem we need to solve. It's the complete reverse to start with the data, which is very cool!

InfoQ: How does bias in AI surface?

Williams: Bias shows up in your data if your dataset is imbalanced or doesn't accurately represent the environment the model will be deployed in.

Bias can also be introduced by the ML algorithm itself; even with a well-balanced training dataset, the outcomes might favor certain subsets of the data compared to others.

Bias can show up in your model (once it's deployed to production) because of drift. Drift indicates that the relationship between the target variable and the other variables changes over time and degrades the predictive power of the model.

Bias can also show up in your people, strategy, and the action taken based on model predictions.
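A simple way to watch for the drift described above is to compare the training-time distribution of a feature against recent production data; the two-sample Kolmogorov-Smirnov test and the threshold below are illustrative assumptions, not part of Williams' talk.

```python
# Flag a feature whose live distribution no longer matches the training data.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray, live_values: np.ndarray, alpha: float = 0.05) -> bool:
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha    # small p-value: distributions differ, consider retraining

# Example with synthetic data: the live feature has shifted upward.
rng = np.random.default_rng(1)
print(feature_drifted(rng.normal(0, 1, 5000), rng.normal(0.5, 1, 5000)))   # True
```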

InfoQ: What can we do to mitigate bias?

Williams: There are several ways to mitigate bias:

See the original post:
Ethical Machine Learning with Explainable AI and Impact Analysis - InfoQ.com

Read More..

UW scientists and NFL player create new MRI machine-learning … – Spectrum News 1

MADISON, Wis. – University of Wisconsin-Madison researchers said they were proud to publish a groundbreaking paper on a new MRI machine-learning network.

They determined how brightly colored scans can help surgeons recognize, and accurately remove, an intracerebral hemorrhage (ICH), or bleeding in the brain.

Walter Block, a professor of medical physics and biomedical engineering, leads the research team that developed a special algorithm to support doctors who must act quickly and with precision to extract a brain bleed.

"The trick is to visualize it and quantify it so that the surgeon has the information they need," Block said.

Tom Lilieholm, a PhD candidate and lead author of the research, created the specific algorithm for the new color-coded MRI machine-learning network.

"We got pretty high accurate segmentations out of the machine here, 96% accurate clot, 81% accurate edema," he said, showing off one of the study's MRI slides.

Lilieholm said it can show a surgeon in less than a minute just how much of the hemorrhage they can safely remove.
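Accuracy figures like the ones quoted above are typically computed by overlapping a predicted mask against a manually labeled one. The article does not say which metric the team used, so the Dice score below is only an assumed, common choice for illustration.

```python
# Dice overlap between a predicted segmentation mask and a reference mask.
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Both masks are boolean arrays defined over the same MRI volume."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    total = pred_mask.sum() + true_mask.sum()
    return 2.0 * intersection / total if total else 1.0

# Example usage (hypothetical arrays): dice_score(model_output > 0.5, clinician_labels)
```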

"It's really kind of useful to have that, and to have robust data to compare against," Lilieholm said. "That's where Matt kind of came in."

The Matt Lilieholm was referring to is NFL player Matt Henningsen.

Henningsen is from Menomonee Falls. Before becoming a Denver Bronco, he attended UW-Madison, where he excelled on the football field and in the classroom. He earned a bachelor's and a master's degree from the university.

"My task would be to identify the location of the intracerebral hemorrhage and segment both the clot and the edema surrounding the clot, and then move on to every single layer of that image," Henningsen said.

Henningsen spent more than 100 hours gathering data for this new research on brain bleeds. He said he was excited and grateful for the opportunity to be part of this collaboration.

The UW-trained bioengineer and football player said he hopes this project can eventually help address something his football profession fears: traumatic brain injury.

"You can't diagnose concussion with an MRI currently," he said. "But I mean, maybe in the future, if you're able to, you can use machine-learning to potentially detect certain abnormalities that the human eye couldn't necessarily detect or things of that sort. Maybe we could get somewhere."

Originally posted here:
UW scientists and NFL player create new MRI machine-learning ... - Spectrum News 1

Read More..