5 Ways to Use AI You May Have Never Even Considered – InformationWeek

It's widely believed that AI has only reached its Commodore 64 stage. In other words, the best is yet to come.

As the technology gains momentum, innovation is flourishing, with new applications seemingly limited to only its users' imagination. Consider the following five examples that show how AI will continue to surprise and transform both personal and business activities.

AI can augment and accelerate the way individuals acquire knowledge and skills by fine-tuning educational experiences to the specific needs, learning styles, and multiple intelligences of each learner, says Paul McDonagh-Smith, senior lecturer of IT at MIT's Sloan School of Management, via an email interview. "AI can employ advanced algorithms to customize educational content and feedback based on a student's unique profile and progress, leading to better educational outcomes," he explains. "This innovative application of AI has the potential to revolutionize education by making it more tailored, engaging, and accessible to learners of all backgrounds and abilities."

AI systems like GPT-3 can generate novel concepts and suggestions by analyzing large amounts of text data. "This can help spark new ideas for products, services, and business models that humans may not have thought of on their own," says Scott Lard, general manager and partner at IS&T, an information systems technology search and contingency staffing firm, via email.

What makes this approach useful is that AI systems can consider far more possibilities and variations than a single human mind, Lard explains. "By analyzing thousands of existing ideas, it can provide fresh perspectives and out-of-the-box thinking that helps organizations innovate."

Lard suggests the best way to get started with AI-enabled idea generation is to simply ask an AI model open-ended questions about potential new ideas and concepts within a specific industry or focus area. Give the AI system as much relevant context as possible to narrow the results, he advises. Then review the generated ideas to see which offer the most potential to explore further. "You can then iterate the process by refining your questions and context to produce even better results over time."
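
As a rough illustration of the iterative prompting loop Lard describes, the sketch below assumes the OpenAI Python client (v1.x); the model name, prompts, and refinement step are all illustrative placeholders rather than a specific recommendation.

```python
# A minimal sketch of iterative AI idea generation, assuming the OpenAI Python
# client (v1.x). Model name, prompts, and the refinement step are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = (
    "We are a mid-sized logistics company looking for new service ideas "
    "that reduce last-mile delivery costs."
)
question = "Suggest five unconventional service or business-model ideas."

for round_num in range(3):  # iterate: review results, then refine the prompt
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a brainstorming assistant."},
            {"role": "user", "content": f"{context}\n\n{question}"},
        ],
    )
    ideas = response.choices[0].message.content
    print(f"--- Round {round_num + 1} ---\n{ideas}\n")
    # In practice a human reviews the output and narrows the context or question
    # before the next round; here we simply append a refinement instruction.
    question += " Focus on the most promising idea from the previous round."
```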

By providing a continuous and non-judgmental presence, AI can help address the escalating demand for mental health support, says Siraj M A, director of data and analytics at project engineering firm Experion Technologies, in an email interview.

Virtual companions, powered by AI, can deliver personalized interventions tailored to individual needs and preferences, M A explains. "These AI entities go beyond mere assistance; they can collect and analyze data over time, unraveling patterns crucial for a deeper comprehension of ... mental health."

When used appropriately, AI-driven virtual therapists could surmount geographical constraints, democratizing mental health care globally, M A says. "Such solutions could ensure timely support, especially for those facing barriers due to location or a limited mental health infrastructure."

M A recommends that new adopters should start by bringing together a team of mental health and AI experts to review potential opportunities. "The team should focus on identifying the right use cases in their industry, and then identify solutions that could help with early intervention, therapy support, or diagnostic assistance," he advises. "These objectives should then be evaluated alongside the data, technology and infrastructure available to come up with a list of prioritized use cases that can be pursued."

With the assistance of AI-powered team recommendation engines, we can help our hiring managers pinpoint the best candidates for a specific job, says Juan Nassiff, technical manager and solutions architect at custom software development firm BairesDev, via email.

BairesDev gets more than a million job applications every year, which is virtually impossible to sort through manually, Nassiff says via email. "We leverage complex machine-learning algorithms to match talent in our database with unique project requirements that we have a need for," he states. "Our method goes beyond the traditional use of AI in customer service or data analysis, focusing on optimizing team assembly for software development projects."

Nassiff says the approach is "incredibly useful," since it ensures an equitable and skill-focused hiring process, eliminating the biases that can occur in traditional recruitment practices. "By focusing solely on skills, professional experience per skill, and project requirements, our Team Recommendation Engine enables the assembly of highly effective teams that are tailored to specific client needs," he explains. "This not only improves project outcomes, but also significantly reduces the time and resources typically spent on recruiting and team formation."

By analyzing multiple factors, such as weather and geography, AI can help exterminators build and optimize pest control measures. "This approach is particularly useful because it allows for proactive pest management, reducing the reliance on reactive and potentially harmful chemical interventions," says Rubens Tavares Basso, CTO at pest control software provider Field Routes, via email.

Basso advises potential adopters to consider data privacy and security concerns before implementing AI pest control technology. "Additionally, businesses should be mindful of potential biases in the AI algorithms and regularly update their system to adapt to changing environmental conditions," he says. "AI in pest control software provides a forward-thinking, eco-friendly solution to managing pest issues, promoting sustainable and effective practices in agriculture and other industries."

Read the rest here:
5 Ways to Use AI You May Have Never Even Considered - InformationWeek

Read More..

Groundbreaking Study Questions AUC Metric in Link Prediction Performance – Medriva

In a groundbreaking study led by UC Santa Cruz's Professor of Computer Science and Engineering, C. Sesh Seshadhri, and co-author Nicolas Menand, the effectiveness of the widely used AUC metric in measuring link prediction performance is being questioned. The researchers propose a new metric, VCMPR, which they claim offers a more accurate measure of performance in machine learning (ML) algorithms.

The Area Under the Curve (AUC) metric has been a standard tool for evaluating the performance of machine learning algorithms in link prediction tasks. However, the new research suggests that AUC fails to address the fundamental mathematical limitations of low-dimensional embeddings for link predictions. This inadequacy leads to inaccurate performance measurements, thereby affecting the reliability of decisions made based on these measurements.
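
For readers unfamiliar with the metric under debate, the sketch below shows how AUC is commonly computed for link prediction: score held-out true edges against sampled non-edges and compare their rankings. The dot-product-of-embeddings scorer and the random graph are illustrative assumptions, not the setup used in the study.

```python
# A minimal sketch of AUC for link prediction: rank true edges against sampled
# non-edges by a score. The embedding-dot-product scorer is an assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_nodes, dim = 100, 16
emb = rng.normal(size=(n_nodes, dim))          # stand-in low-dimensional embeddings

def edge_score(u, v):
    return float(emb[u] @ emb[v])              # higher score = more likely an edge

pos_edges = [(rng.integers(n_nodes), rng.integers(n_nodes)) for _ in range(200)]
neg_edges = [(rng.integers(n_nodes), rng.integers(n_nodes)) for _ in range(200)]

y_true = [1] * len(pos_edges) + [0] * len(neg_edges)
y_score = [edge_score(u, v) for u, v in pos_edges + neg_edges]

# With random embeddings and random edges this hovers near 0.5; the point here
# is the mechanics of the metric, not the value.
print("AUC:", roc_auc_score(y_true, y_score))
```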

The study introduces a novel metric known as VCMPR, which promises to better capture the limitations of machine learning algorithms. Upon testing leading ML algorithms using VCMPR, the researchers found that these methods performed significantly worse than what is generally indicated in popular literature. This revelation has serious implications for the credibility of decision-making in ML, as it suggests that a flawed system used to measure performance could lead to incorrect decisions about which algorithms to use in practical applications.

The findings of this research have considerable consequences for the field of machine learning. The introduction of VCMPR throws a spanner in the works, challenging the status quo and pushing ML researchers to rethink their performance measurement practices. By highlighting the shortcomings of the AUC metric, the study underscores the importance of accurate and comprehensive performance measurement tools for making trustworthy decisions in machine learning.

While the research is undoubtedly groundbreaking, its recommendations are yet to be universally accepted. The machine learning community is currently grappling with the implications of this study, with some experts supporting the switch to VCMPR, while others are apprehensive about abandoning the traditional AUC metric. However, the conversation sparked by this research is crucial, as it pushes the field towards more accurate and reliable performance measurement practices.

This research by UC Santa Cruz signifies a potential paradigm shift in the field of machine learning. By challenging the effectiveness of the AUC metric and proposing a more accurate alternative, it highlights the need for constant innovation and scrutiny in the pursuit of more reliable and trustworthy machine learning practices. Whether or not VCMPR will replace AUC as the standard performance measurement tool is yet to be seen. However, one thing is certain: this research opens up a new chapter in the ongoing endeavor to enhance the accuracy, reliability, and practicality of machine learning applications.

Read the rest here:
Groundbreaking Study Questions AUC Metric in Link Prediction Performance - Medriva

Read More..

Leveraging Artificial Intelligence to Mitigate Ransomware Attacks – AiThority

Swift Strategies to Combat Ransomware Attacks and Emerge Triumphant

The famous MGM hack in Las Vegas is a prime example; the perpetrators got administrative passwords over the phone. Cybercriminals were able to take advantage of recent MOVEit vulnerabilities, which affected government agencies such as the Pentagon and the DOJ. Finding, evaluating, and deciding upon the quickest route to recovery has always been difficult. Artificial intelligence has the potential to greatly impact this area.

AI can help organizations get back to normal operations quickly by making sense of the patterns of data corruption an attack leaves behind. Recognizing which files require restoration is the first step in a successful recovery. Which files have been corrupted? Which servers experienced trouble? Is it possible that important datasets have been altered? In what ways did the malware alter the files? Where can clean copies be found in the backups? In the aftermath of an attack, answering these questions while trying to restore from backups is an enormous and laborious undertaking.

Organizations hit by ransomware attacks must prioritize reducing the damage those attacks cause. The daily impact on the bottom line of organizations like hospitals, government agencies, and manufacturers when their systems are down because of ransomware is enormous. Examples abound, such as the recent attacks on MGM and Clorox, which resulted in damages amounting to hundreds of millions of dollars.

The organization's reputation takes a hit, and the recovery process takes weeks and costs a pretty penny. It is crucial for intelligent recovery to validate data integrity before an attack happens. To keep the content clean and secure, data validation should be an ongoing process that is integrated with existing data protection procedures. Even with highly complex and hard-to-detect ransomware variations, data validation sheds light on the criminal actions that accompany these attacks.

This model is not trustworthy. The only trustworthy methodology for cybersecurity data integrity inspection is a combination of large data sets with artificial intelligence and machine learning. The bad guys have brains and are leveraging AI to their advantage more and more. When used maliciously, AI can be a potent weapon. It can identify ransomware corruption just as effectively as it can facilitate intelligent and speedy recovery. When it comes to cyberattacks, enterprises will still have a hard time recovering without AI. The ability to reduce unavailability and data loss is a benefit they reap from AI. Fortunes and company names are on the line, and the stakes couldn't be higher.

The key is to understand the distinctions between cyber recovery and disaster recovery. While natural disasters like floods and fires do not alter data, hackers can damage and alter entire databases, files, or even the underlying infrastructure. Relying on older backup programs for recovery frequently results in unexpected and expensive problems. In several attacks, backup images have been encrypted or corrupted, or connections to cloud-based backups have been severed. Cybercriminals are experts at corrupting data and backups undetected, making recovery a daunting task. Complex ransomware assaults necessitate cutting-edge methods for evaluating data integrity.

This necessitates the continual observation of millions of data points. You can learn a lot about the evolution of file and database content from these data points because they go into great detail. This kind of forensic investigation can only be handled by advanced analytics paired with AI-based machine learning.
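
As a hedged sketch of what such an analysis might look like, the example below trains an anomaly detector (scikit-learn's IsolationForest) on historical per-file change statistics and flags a snapshot whose change pattern resembles ransomware corruption. The feature names and values are hypothetical, not drawn from any particular product.

```python
# A hedged sketch of ML-based data-integrity inspection: train an anomaly
# detector on historical per-snapshot change statistics, then flag snapshots
# whose change patterns look like corruption. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per backup snapshot: [fraction of files changed,
# mean change in file entropy, fraction of renamed/extension-changed files]
normal_history = np.column_stack([
    rng.uniform(0.00, 0.05, 500),   # small daily change rates
    rng.normal(0.0, 0.02, 500),     # entropy roughly stable
    rng.uniform(0.00, 0.01, 500),   # few renames
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# A snapshot during an attack: mass changes, entropy jump (encryption), renames.
suspect = np.array([[0.80, 0.45, 0.60]])
print("suspect flagged as anomaly:", detector.predict(suspect)[0] == -1)
```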

Machine learning algorithms that have been trained to identify corrupt patterns can analyze these data points and draw informed conclusions about the integrity of the data. Artificial intelligence (AI) automation of this inspection process allows for the study of massive data sets that would be almost impossible for humans to handle. Securely unlocking devices and controlling access to bank accounts and medical information are just a few examples of the many everyday applications that use data points and AI-based machine learning. Guaranteeing safety depends on collecting a large number of data points.

Security flaws could easily be introduced in the absence of sufficient data points. Machine learning will unlock your phone when you hold it up to your face because it captures a lot of visual data points and has been trained to recognize your face, not your doppelganger's. For instance, the training can take into account your current and future facial appearance, including any glasses you may wear. If this procedure did not incorporate a large number of data points, the security would be readily compromised, allowing anyone with comparable facial features to unlock the phone with ease.

More:
Leveraging Artificial Intelligence to Mitigate Ransomware Attacks - AiThority

Read More..

The world’s coral reefs are even bigger than we previously thought – BGR

Our world's coral reefs are much larger than we previously believed. According to a report shared on The Conversation by Mitchell Lyons, a postdoctoral research fellow at The University of Queensland, and Stuart Phinn, a Professor of Geography at The University of Queensland, researchers found 64,000 square kilometers of coral reef we didn't know existed.

The ground-breaking discovery brings the total size of our planet's shallow reefs to roughly 348,000 square kilometers. That's roughly the size of Germany, the two researchers note in their report. This new figure fully represents the world's coral reef ecosystems, including coral rubble flats as well as living walls of coral.

What's even more astounding about this discovery is that it was made possible largely thanks to machine learning. The researchers say they relied on snorkels, satellites, and machine learning to help them discover the hidden coral reefs. These high-resolution satellites made it possible to view reefs as deep as 30 meters down.

When coupled with direct observations and records of the world's coral reefs, the researchers say they were able to ascertain that a large amount of the world's coral reefs had not been identified or noted down anywhere. And that had to be changed.

They used machine learning techniques to help create new maps of the coral reefs found around the world, relying on satellite imagery and data to make the predictions behind those maps as accurate as possible. Of course, without direct observational data, it's hard to fully confirm the existence of these reefs.

But it is still a huge step forward for the study of the world's coral reefs, which are constantly in danger due to the ongoing climate change issues plaguing our planet. Perhaps with more useful studies like this, we can get a better understanding of how much is truly at stake.
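
To give a flavor of the general approach described above, here is a hedged sketch of a supervised classifier over per-pixel satellite features trained on field observations. The classifier choice, band features, and labels are assumptions for illustration, not the researchers' actual mapping pipeline.

```python
# A hedged sketch of satellite-based reef mapping: a supervised classifier over
# per-pixel spectral features, trained on field observations. The classifier,
# features, and labels are illustrative assumptions, not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_pixels = 2000
# Hypothetical per-pixel features: blue, green, red reflectance and a depth estimate.
X = rng.uniform(0, 1, size=(n_pixels, 4))
# Hypothetical field labels: 1 = coral reef, 0 = other benthic cover.
y = (0.6 * X[:, 1] - 0.4 * X[:, 3] + rng.normal(0, 0.1, n_pixels) > 0.1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# Predicted labels for every pixel would then be stitched back into a reef map.
```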

See the rest here:
The world's coral reefs are even bigger than we previously thought - BGR

Read More..

Vapor IO connects with Comcast on AI-as-a-service offering – Light Reading

Comcast is one of the early edge network partners for a new AI- and 5G-as-a-service offering operated by Vapor IO that will eventually become available nationwide.

Vapor IO's new micro-cloud offering, called Zero Gap AI, is underpinned by the Nvidia MGX platform with the Nvidia GH200 Grace Hopper Superchip and Supermicro's AI-optimized servers.

In use cases focused on enterprise and smart city applications, Zero Gap AI aims to deliver private 5G and GPU-based "micro-clouds" to locations such as retail storefronts, factory floors and city intersections, the company said.

Zero Gap AI initially will be available as a pilot offering in two markets, Atlanta and Chicago, and will tie into Comcast's network infrastructure there. That activity builds on integration work that Vapor IO and Comcast announced last year focused on tests of low-latency edge services and applications.

"Our low-latency, high-bandwidth network and connectivity services unlock a world of applications for large and small businesses, residences and mobile customers," Comcast Chief Network Officer Elad Nafshi said in a statement. "We're continuously innovating and collaborating with partners like Vapor IO to identify new ways to leverage our network, and Zero Gap AI is a unique opportunity to expand the limits of what we can do together with edge computing services."

Plans to expand

Zero Gap AI will also expand into dozens of other US markets that have access to Vapor IO's "Kinetic Grid" infrastructure, including Dallas, Las Vegas and Seattle.

With the potential to connect more cable headends to the Vapor IO fabric, Vapor IO believes its cable ambitions will extend well beyond Comcast.

"We think this is going to be a big benefit to the cable operators," Vapor IO CEO and founder Cole Crawford said, noting that the company also has "enjoyed a good relationship with CableLabs for several years."

Vapor IO's launch of Zero Gap AI follows the company's buildout of a footprint of network backbone and individual points of presence across 36 US markets (via Vapor IO's own facilities or those of its colocation partners) to put that capability alongside the radio access network (RAN).

Bringing AI to the edge

The broader idea is to give companies the kind of as-a-service capability they expect from a cloud provider, without the complexity of building a wireless network and standing up the elements for AI and machine learning themselves. The inferencing capabilities of AI can then be deployed at the edge instead of on-premises to support enterprises operating in multiple markets or wide-scale smart city deployments. Once an AI model is trained and set up, the inferencing component is used to make predictions and solve tasks.

"Generative AI costs you money. Inferencing makes you money," Crawford said. "And the inferencing action, I think, is where the industry will make a lot of money."

Vapor IO didn't announce any early, specific deployments of Zero Gap AI, but the company did spell out several potential use cases. A retailer, for instance, could use it for AI-assisted automated checkouts without having to deploy expensive AI gear at each store.

'Computer vision' a driving force

In another example, a city could use Zero Gap AI for "computer vision" services to support a pedestrian safety system across hundreds of intersections without having to deploy AI equipment at every corner. Additionally, construction sites could use computer vision with AI inferencing to determine if everyone working there is wearing a hardhat.

In a more specific example, the City of Las Vegas is working on computer vision inferencing capabilities that would take advantage of thousands of cameras deployed around the city for use in areas such as public safety, law enforcement and traffic management.

"All applications we are seeing demand for today use computer vision in some form or another. Computer vision certainly is the biggest driver," Matt Trifiro, Vapor IO's chief marketing officer, said.

"We think large vision models as a basis for how to do inferencing are going to generate more revenue for the industry than large language models, I think, over the next five years," added Crawford.

Read the original:
Vapor IO connects with Comcast on AI-as-a-service offering - Light Reading

Read More..

Machine learning driven methodology for enhanced nylon microplastic detection and characterization | Scientific Reports – Nature.com

Contamination level and representativeness of subsampled areas

Analysis of the procedural blank sample indicated that contamination from the experiment environment was low. The detailed results are summarized in Table S1.

The positive control sample was used to examine the representativeness of the nine subsampled regions, determined by the ratio of the estimated mass of particles on the filter to the initial 0.05 mg of nylon microspheres. It was found that the method slightly overestimated the mass of nylon microspheres, with an obtained ratio of 1.27 ± 0.06 (detailed information is displayed in Fig. S1). In an endeavour to optimize the number of regions for our spectral imaging model, we systematically analyzed the ratio between estimated and actual values across varying numbers of regions.

Generally, for identification of MP using the O-PTIR technique, there are three commonly used methods, i.e., DFIR imaging, point spectra measurements, and HSI. However, each of these methods brings some challenges: for example, DFIR imaging is fast yet provides unreliable results, while HSI and point spectra measurements allow for accurate results but are time-consuming for data collection. With the QCL system integrated within the O-PTIR microscope, the microscope can generate a single-frequency IR image of a 480 µm × 640 µm area of a filter (spatial resolution: 2 µm) in approximately 3 min and 20 s. When an appropriate wavenumber and a threshold value are selected, the generated image shows the majority of MP particles while ruling out most non-MP particles. With this method, however, careful selection of a suitable wavenumber and a threshold value for MP particles is necessary; multiple threshold values might be needed in case of interference from complex non-MP particles. In our study, the discrimination between MPs and non-MP particles based on single-wavenumber images proved to be unfeasible, as illustrated in Fig. S2.
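
For readers unfamiliar with single-wavenumber thresholding, the sketch below shows the generic idea described above (choose one wavenumber, keep pixels above an intensity threshold as candidate MP), the approach the authors found insufficient for their samples. The image and threshold are synthetic placeholders.

```python
# A minimal sketch of single-wavenumber thresholding: take the IR intensity
# image acquired at one chosen wavenumber and keep only pixels above a
# threshold as candidate MP. The image and threshold are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
intensity = rng.uniform(0.0, 0.2, size=(240, 320))       # background pixels
intensity[100:120, 150:170] += 0.7                        # a bright particle-like blob

threshold = 0.5                                           # chosen per sample and wavenumber
candidate_mask = intensity > threshold                    # True where likely MP

print("candidate MP pixels:", int(candidate_mask.sum()))
```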

The second method commonly used for MP identification is point spectra measurements. After particles are observed in the mIRage microscope, point spectra can be collected for each particle and compared against the parent plastic to achieve chemical identification of the particles. This method presented two challenges: (1) when using visible light for particle location under the microscope, non-MP particles were inevitably included as spectral acquisition targets, adding to the analysis time; (2) for an individual particle, the O-PTIR spectra could vary significantly across different spots of the particle (see example in Fig. 1). This necessitates the collection of spectra from multiple areas of the particle to enhance the reliability of identification results. Consequently, the analysis time is multiplied. For example, it takes 25 s to obtain a spectrum (a total of 5 scans acquired for each single spectrum), so if there are 100 particles in the regions of interest on the filter and three spectra are required for each particle, the total analysis time needed is at least 2 h. This estimation only accounts for raw data acquisition, excluding additional time for manual adjustments such as repositioning the objective or refocusing. In light of this, such an approach becomes exceedingly time-intensive, especially when a vast number of particles are in play.

Two spots of the particle encircled in a red dashed line (A) selected for point spectra collection and (B) the corresponding O-PTIR spectra of the two spots. a.u. is arbitrary units. The scale bar is 20 µm.

HSI was the third method employed for MP identification. HSI generates an image where each pixel contains a full spectrum; hence, it is a reliable method for MP identification. However, this reliability comes at the cost of drastically longer data collection time, which makes HSI impractical for routine MP analysis. For example, capturing a hyperspectral image for a 480 µm × 640 µm area (spatial resolution of 2 µm and spectral resolution of 2 cm⁻¹, over a spectral range of 769–1801 cm⁻¹) requires almost two weeks.

In response to the challenges mentioned above, we have developed a reliable MP detection framework with improved speed that is suitable for detecting a large quantity of nylon MPs. It can collect spectral data from nine areas (each 480 µm × 640 µm) of a filter at a spatial resolution of 2 µm within approximately 2 h. Powered by machine learning, the reliability of this framework is not compromised by the reduced data collection time.

In order to effectively utilize DFIR imaging for high-throughput analysis of MPs, it is crucial to carefully select specific wavenumbers that provide the greatest discriminatory power between MP and non-MP particles. Making incorrect choices in wavenumber selection can directly impact the accuracy of identification. Acquiring too many wavenumbers increases measurement time, resulting in decreased throughput. For instance, adding just one more wavenumber can lead to an approximate 30-min increase in the time required for our proposed MP detection framework to collect data from a single filter. To identify the important wavenumbers and determine the optimal number of such wavenumbers, a database collected from bulk nylon plastic was assembled, containing 1038 spectra of MP and 1052 spectra of non-MP.

We found several types of non-MP particles in our dataset. Figure 2 displays the spectra of two non-MP classes (type I non-MP and type II non-MP), along with the mean spectrum of MP, enabling a comparison. Upon initial inspection, type I non-MP exhibits a prominent sharp peak in the 1700–1800 cm⁻¹ spectral range, while type II non-MP displays a broad peak in the 1000–1200 cm⁻¹ spectral range. In contrast, the apparent characteristic peaks of MPs are two consecutive sharp peaks in the 1500–1650 cm⁻¹ range.

Mean spectra for nylon MP class and two non-MP types from the database constructed, following standard normal variate (SNV) to minimize the multiplicative effects.

Two thirds of the spectra from each class were randomly selected as the training dataset for model development, and the remaining samples formed the test dataset. Based on the obtained results, the model utilizing the full wavenumber spectrum yields a correct classification rate (CCR) of 85.31% (see Table 2). The confusion matrix of the SVM-Full wavenumber model (Fig. 3A) shows that 8 point spectra of MPs were wrongly classified as non-MPs and 97 spectra of non-MPs were mistakenly assigned as MP.

Confusion matrix showing classification accuracy for the test set of SVM-Full model using full spectral variables (A) and SVM-Four model (B).

Subsequently, the coefficient-based feature importance for the full wavenumber model (Fig. 4) was plotted to visualize the contribution of individual spectral variables. According to Fig. 4, we could choose the wavenumbers important to our dataset based on the feature importance; higher feature importance signifies stronger discriminative capability. Based on the analysis of the coefficients of the SVM-Full wavenumber model, wavenumbers 1711 cm⁻¹, 1635 cm⁻¹, 1541 cm⁻¹, and 1077 cm⁻¹ (indicated in Fig. 4) showed the highest feature importance and were therefore selected as important wavenumbers for distinguishing between MPs and non-MPs. As seen from Table 2, the model optimized with these four wavenumbers demonstrates an improved correct classification rate of 91.33%. Meanwhile, the SVM-Four wavenumbers model (Fig. 3B) resulted in 34 point spectra of MPs wrongly classified as non-MPs and 28 spectra of non-MPs mistakenly assigned as MP, which shows that it is a balanced model for classification tasks. The SVM-Four wavenumbers model appears to outperform the SVM-Full wavenumber model in terms of specificity, CCR, and MCC, suggesting that it is the better model for this classification task. However, the SVM-Full wavenumber model has higher sensitivity, making it better at identifying true positive cases.

The coefficients (or weights) of the SVM model, which indicate the importance of each feature (wavenumber), are plotted. The four wavenumbers with relatively higher feature importance than the others are marked above the curves (i.e., 1711 cm⁻¹, 1635 cm⁻¹, 1541 cm⁻¹, and 1077 cm⁻¹).
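
The sketch below illustrates the general coefficient-based selection step described above: fit a linear SVM on full spectra, rank wavenumbers by the absolute value of their coefficients, and retrain on the top four. The data are synthetic, and the preprocessing is simplified relative to the study.

```python
# A hedged sketch of coefficient-based wavenumber selection with a linear SVM.
# Spectra and labels are synthetic; preprocessing is simplified for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavenumbers = np.arange(769, 1802, 2)              # 769-1801 cm^-1 at 2 cm^-1 steps
n_spectra = 600
X = rng.normal(0, 1, size=(n_spectra, wavenumbers.size))
y = rng.integers(0, 2, n_spectra)                  # 1 = MP, 0 = non-MP (synthetic)

# Make two bands genuinely informative so the selection has something to find.
informative = np.searchsorted(wavenumbers, [1541, 1635])
X[np.ix_(y == 1, informative)] += 1.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

full_model = SVC(kernel="linear").fit(X_train, y_train)
importance = np.abs(full_model.coef_).ravel()      # one weight per wavenumber
top4 = sorted(wavenumbers[np.argsort(importance)[-4:]])
print("selected wavenumbers (cm^-1):", top4)

# Retrain a compact model on just the selected wavenumbers.
cols = np.searchsorted(wavenumbers, top4)
four_model = SVC(kernel="linear").fit(X_train[:, cols], y_train)
print("four-wavenumber test accuracy:", four_model.score(X_test[:, cols], y_test))
```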

After the selection of the four important wavenumbers, DFIR images were obtained at these wavenumbers from the nine subsampled regions of the filter. Particle identification could be performed through visual inspection of these DFIR images. For instance, Fig. 5A shows an optical image of a small region of a filter with a particle in the centre, and Fig. 5B shows chemical images of that region based on the intensity of the 1711 cm⁻¹, 1635 cm⁻¹, 1541 cm⁻¹, and 1077 cm⁻¹ bands. The absorbance intensity of each chemical image was normalized to the same range. The particle in this region exhibits high signal intensity at 1635 cm⁻¹ and 1541 cm⁻¹, while showing weak signal intensity at 1711 cm⁻¹ and 1077 cm⁻¹, indicating that it is an MP particle. On the other hand, non-MP particles show weak signal intensity at 1635 cm⁻¹ and 1541 cm⁻¹, while showing strong signal intensity at 1711 cm⁻¹ and/or 1077 cm⁻¹ (see Fig. 6A,B for an example of a non-MP particle).

An optical image of an area of a prepared filter, with an MP particle in the center of the image (A), single-frequency images of that area using the 1711 cm⁻¹, 1635 cm⁻¹, 1541 cm⁻¹, and 1077 cm⁻¹ band intensities, with the absorbance intensity of each chemical image normalized to the same range (B), support vector machine (SVM) prediction results of the particles in this area (C), and normalized O-PTIR spectra of the particle and the bulk plastic (D). The +1 in (C) indicates where the spectrum of the particle in (D) was collected. The scale bar is 20 µm.

An optical image of an area of a prepared filter, with a non-MP particle in the center of the image (A), single-frequency images of that area using the 1711 cm⁻¹, 1635 cm⁻¹, 1541 cm⁻¹, and 1077 cm⁻¹ band intensities, with the absorbance intensity of each chemical image normalized to the same range (B), support vector machine (SVM) prediction results of the particles in this area (C), and normalized O-PTIR spectra of the particle and the bulk plastic (D). The +1 in (C) indicates where the spectrum of the particle in (D) was collected. The scale bar is 20 µm.

However, visual inspection is not advisable for accurate particle identification due to its low accuracy. Meanwhile, applying the SVM-Full model requires a huge amount of time to collect point spectra from all particles. Therefore, an SVM-Four wavenumbers model was trained on the four important wavenumbers to predict each particle accurately. Spectral data at the four important wavenumbers were extracted from the same database used for the SVM-Full wavenumber model. The trained SVM model on the selected four wavenumbers demonstrated good performance, evidenced by high CCR, MCC, sensitivity, and specificity (Table 2).

After applying the SVM classifier to the particle in Fig. 5A, each pixel of the particle was labelled as either MP (red) or non-MP (blue), providing an intuitive and accurate identification result. Figure 5C displays the SVM prediction results for one example area. As can be seen, most pixels in the particle have been labelled as MP, with a small portion labelled as non-MP. The result for a particle was determined by the majority vote of the labels of all pixels within the particle; this particle was thereby identified as an MP particle. This was further confirmed by the full spectrum of this particle (Fig. 5D). Likewise, by applying the SVM classifier to the particle in Fig. 6A, the particle was predicted to be a non-MP particle (Fig. 6C). Figure 6D presents a spectrum of this particle, which validates the predicted outcome.
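
A minimal sketch of the per-particle decision rule just described is shown below: classify every pixel inside a particle mask as MP or non-MP, then assign the particle the majority label. The mask and pixel predictions are synthetic stand-ins for the SVM output.

```python
# A minimal sketch of the majority-vote rule: classify each pixel inside a
# particle mask as MP (1) or non-MP (0), then take the majority label.
# The mask and per-pixel predictions here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
particle_mask = np.zeros((50, 50), dtype=bool)
particle_mask[20:35, 18:30] = True                 # pixels belonging to one particle

# Stand-in for the classifier's per-pixel output over the image (1 = MP, 0 = non-MP).
pixel_labels = rng.integers(0, 2, size=particle_mask.shape)
pixel_labels[20:35, 18:30] = (rng.random((15, 12)) < 0.85).astype(int)  # mostly MP

votes = pixel_labels[particle_mask]
particle_is_mp = votes.sum() > votes.size / 2      # simple majority vote
print("particle classified as MP:", bool(particle_is_mp))
```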

Our developed SVM model offers several distinct advantages over the traditional correlation-based method for MP identification. Firstly, the SVM model only requires four wavenumbers as input, significantly reducing the complexity of data collection compared to the correlation-based approach, which involves obtaining spectra from each particle and calculating correlation coefficients. This efficiency translates into a substantial time-saving advantage. Therefore, the developed method is particularly useful when dealing with a large number of particles on the filter. Secondly, the correlation-based method often relies on establishing a threshold for identification, introducing a subjective element into the process. In contrast, the SVM model automates the assignment of particles to MP or non-MP categories, contributing to a more consistent and reliable MP identification process. Last but not least, once essential wavenumbers are identified and a simplified model is developed, the SVM approach can be extended to identify a range of polymers. This versatility is a significant advantage, enabling the model to adapt to various MP compositions beyond the scope of the original correlation-based method.

Using the novel identification procedure developed, it was possible to investigate the effectiveness of several sample pre-processing steps in a more representative, less biased, and more efficient way. To this end, high-temperature filtration and alcohol pretreatment were chosen as methods for reducing non-MP particles. The performance of these two treatments was evaluated separately, including analysis of the spectra and of the DFIR images at the four selected wavenumbers. The evaluation included an assessment of their impact on the spectra of MP and their effectiveness in removing non-MP particles. To assess the effectiveness of particle removal, the ratio of MP particles to all particles (MP/All) detected by the four-wavenumber SVM model was used. A treatment was considered effective if it significantly increased this ratio.

By boiling the nylon bulk, MP particles were released. The released particles were subsequently enriched on the filters through high-temperature filtration and room-temperature filtration, respectively. The mean spectrum of MP from high-temperature filtration, the mean spectrum of MP from room-temperature filtration, and the mean spectrum of nylon bulk were plotted together for comparison (Fig. S3). Results showed that when the mean spectrum of nylon bulk was compared to the mean spectra of MP (regardless of the filtration temperature), no consistent peak shift was found. When the mean spectrum of MP from high-temperature filtration and the mean spectrum of MP from room-temperature filtration were compared, no consistent peak shift was found either. These findings demonstrate that exposure to high temperatures reaching water boiling point will not impact the spectral profiles of MPs when compared to the original bulk plastic.

After the thermal degradation of nylon bulk, the particles released were captured on filters through high-temperature filtration and room-temperature filtration, respectively. Using our developed SVM classifier, particles in the nine subsampled regions of the filter were counted and subsequently the MP/All ratio was calculated. The MP/All ratio from the room-temperature filtration was 0.090 ± 0.012, and from the high-temperature filtration it was 0.08 ± 0.012. The normal t-test results indicated that the effectiveness of high-temperature filtration in removing non-MP was not evident.

Gerhard et al.18 reported that slip agents (such as fatty acids and fatty acid esters) of plastic products are released concomitantly with the release of MP particles, and these slip agents might be dissolved in hot water and washed away during the filtration process. In light of this, our results suggest that the nylon bulk used in our study might contain only a small amount of fatty acids or their esters. Indeed, Hansen et al.19 reported that, as additives in plastics, the amount of slip agents could be as low as 0.1%, and the removal of such a small amount of additives from MP samples might not be statistically significant. Furthermore, based on observations of the prepared filters, we did not see a thin residue on the room-temperature filter, which was observed by Gerhard et al.18, who confirmed that most of the thin residue in their experiment was identified as additives. This supports the conclusion that the amounts of hot-water-rinseable additives in our samples were low; however, this would generally be sample specific.

After the degradation of nylon bulk in boiling water, the particles released were retained on filters. An alcohol treatment was subsequently applied to the filters to reduce non-MP particles. The mean spectra of MP before and after an alcohol treatment and the mean spectrum of nylon bulk were plotted together and compared (Fig. S4). Results revealed that when the mean spectrum of nylon bulk was compared to the mean spectra of MP (regardless of the alcohol treatment), no consistent peak shift was observed. When the mean spectra of MP before and after the alcohol treatment were compared, no consistent peak shift was observed either.

To further explore the effects of alcohol treatment on released particles, this paragraph focuses on spectral changes of individual particles. The spectral data of individual particles were baseline corrected, smoothed, and normalized to between 0 and 1 prior to comparison. Figure 7 shows spectra as well as optical images of 4 MP particles before and after an alcohol rinse. For all four particles presented, peak shifts for signature bands of MP in the range of 769 cm⁻¹ to 1801 cm⁻¹ were not observed. Particle 1 has a peak at 1741 cm⁻¹ before the alcohol treatment; this peak has been assigned to the formation of carbonyl groups during polyamide 66 photo-20 and thermal-oxidation21, which implicates a pathway of oxidation in hot water for the particles during high-temperature treatment (filtration at 70 °C). However, the reduction in signal intensity of this peak after the alcohol treatment might indicate that the alcohol treatment could remove some of the oxidized substances. The spectrum of particle 2 has two new peaks, at 1007 cm⁻¹ and 1029 cm⁻¹ respectively, after exposure to alcohol, which was possibly due to alcohol residue, as these two new peaks correspond to C–O stretching bonds of alcohol22. No introduction or disappearance of peaks was observed in the spectra of particle 3 and particle 4. By observing the optical images of these MP particles, it can be concluded that the alcohol treatment did not have an effect on their morphology.

Optical images of nylon MP particles 1, 2, 3, 4, with the particles circled and marked with numbers. The scale bar is 10 µm.

Figure 8 shows spectra as well as optical images of 4 non-MP particles before and after an alcohol rinse. Particle 1 and particle 2 appear yellowish to brownish. These types of non-MPs are easy to discriminate from MPs based on visual observation of the optical images, as most of the MP particles in our experiments are whitish, similar to the color of their bulk plastic samples. However, judgement based on color is not always correct. Subsequent spectral analyses confirmed that particle 1 and particle 2 are not MP. After the alcohol treatment, most parts of these two particles were washed away, leaving black remnants on the filter. Though the elimination was not complete, it proved that alcohol could remove non-MP particles. Particle 3 is whitish with a glossy surface, and it is a chlorinated polyethylene particle. After the alcohol treatment, a particle with a spectrum similar to chlorinated polyethylene (we do not have any appliances containing polyethylene) remained where it had been, and the spectrum was not changed substantially. The glossiness of the particle was reduced; however, this indicates that alcohol treatment could not remove this type of contaminant. Particle 4 is a white particle, and it is covered by a brown, lumpy object on the upper left. Its noisy spectrum could not be identified against the database with high certainty. After the alcohol treatment, it appeared dull grey, and its spectrum looked like that of nylon, showing five signature peaks (1633 cm⁻¹, 1533 cm⁻¹, 1464 cm⁻¹, 1416 cm⁻¹, 1370 cm⁻¹). This implies that the alcohol might be able to remove some contaminants, such as additives, which cover the surface of the MP particle. Li et al.23 reported the same finding, that alcohol could wash away some additives attached to the surface of MP particles. The above experiments show that an alcohol treatment can remove some particle contaminants and wash away some impurities covering MP particles.

Optical images of non-MP particles 1, 2, 3, 4, with the particles circled and marked with numbers. The scale bar is 10 µm.

To further explore the significance of alcohol treatment, the developed SVM classifier was used to count the particles in the nine subsampled regions of the filter, based on which the MP/All ratio was calculated. The MP/All ratio before the alcohol treatment was 0.129 ± 0.129, and after the alcohol treatment it was 0.286 ± 0.207. The paired t-test of the data indicates that an alcohol treatment of the same areas of the filter significantly increases the MP/All ratio (p < 0.05). In summary, alcohol treatment was significantly effective in reducing non-MP contaminants.
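
A hedged sketch of the paired t-test used here is shown below, comparing MP/All ratios measured on the same filter regions before and after the alcohol treatment with SciPy. The nine values per condition are illustrative numbers, not the study's raw data.

```python
# A hedged sketch of the paired t-test: compare MP/All ratios measured on the
# same filter regions before and after alcohol treatment. Values are illustrative.
import numpy as np
from scipy import stats

before = np.array([0.05, 0.10, 0.02, 0.30, 0.12, 0.08, 0.25, 0.15, 0.09])
after  = np.array([0.18, 0.25, 0.10, 0.60, 0.30, 0.22, 0.55, 0.35, 0.20])

t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05 and after.mean() > before.mean():
    print("Alcohol treatment significantly increased MP/All.")
```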

The proposed MP detection framework was specifically adapted for application to detect MPs released from nylon teabags. However, it's important to note that not the entire framework was employed in this context. Rather, a selective application was implemented, excluding the components based on DFIR imaging and the SVM-Four wavenumber model. After steeping teabags in hot water, MPs were released and collected on a filter through filtration at room temperature. This filter was rinsed with alcohol and air-dried in the fume hood prior to O-PTIR data collection. The contaminants from the teabag are not the same as those originating from reference nylon bulk plastics. For example, teabags might have some contaminants from tea residuals, as noted by Xu et al.13. Particles released from nylon teabags were identified through point spectra measurements due to the relatively low particle count (i.e., <5 particles) observed in the subsampled regions of the filter (see Conventional MP identification).

Characterization of MP particles released from teabags was carried out using the MATLAB image processing toolbox function regionprops, which calculates properties of each particle, including area, length (length of the major axis of the fitted ellipse), width (length of the minor axis of the fitted ellipse), and circularity. In Fig. 9, we present four optical images to show nylon MP particles, which have been released from three nylon teabags; they are circled and marked with numbers. To provide a comprehensive analytical context, the spectra of three key references are plotted alongside: a nylon reference sphere, a sample of nylon in bulk form, and the material of the nylon teabag itself. This juxtaposition allows for a direct comparison between the spectra of the isolated particles and these standard nylon references, which contributes to a more detailed understanding of the appearance as well as the spectral properties of the particles.

Optical images of nylon MP particles 1, 2, 3, 4 released from nylon teabag, with the particles circled and marked with numbers.

The average quantity of MP in the nine subsampled regions of the filter was 8.7 ± 1.2. Extrapolating to the whole filter, we would estimate 319 ± 43.7 MP particles released from steeping three teabags, or approximately 106.3 ± 14.6 MP particles released from one teabag. The particle counts/quantities of MPs released from teabags previously reported are listed in Table 1. Our reported count is comparable to that reported by Ouyang et al.9, who found 393 MPs using FTIR-based particle-based analysis, although their brewing time was much longer than ours (1 h vs 5 min). Regarding Hernandez et al.7, their brewing temperature and time are very similar to ours. Nevertheless, as they did not conduct particle-based analysis, their results were overestimated8. The detection limits of O-PTIR spectroscopy and Raman spectroscopy are similar, with O-PTIR spectroscopy being around 500 nm and Raman spectroscopy being around 1 µm. Based on this, we were surprised to find that the number of MPs we detected was one to two orders of magnitude lower than the 5,800–20,400 per teabag (brewed at 95 °C for 5 min) reported by Busse et al.8 using Raman spectroscopy. Busse et al.8 conducted particle-based analysis, indicating that their results should be considered reliable. However, it is important to note that their use of Raman spectroscopy may have led to misidentification of non-MP particles as MPs in an unexpected way. To illustrate, Busse et al.8 identified and counted polyethylene (PE) particles in the teabag leachate. However, these PE particles could also be behenamide (CH3(CH2)20CONH2), which is a typical slip additive widely used in PE plastic. Behenamide exhibits a high level of spectral similarity with PE in Raman spectroscopy, up to 90%, mainly due to the strong Raman signal associated with its saturated alkyl chains (i.e., ν(CH)) and relatively weak Raman signals from carbonyl and amine groups23. The observed disparities between our results and those of Busse et al.8 could also potentially be attributed to the use of different types of teabags. The counts/quantities reported by other studies listed are expressed in the mass of MPs released per teabag11,12, or the number of MP particles per kg of teabags10. Therefore, direct comparisons with these studies are not possible in our paper.

Subsequently, the length, width, area, and circularity of each particle were measured and calculated using the MATLAB function regionprops. Figure 10A shows the surface area of the MP particles. Except for the two MP particles with the smallest (100 µm²) and largest (680 µm²) surface area, the majority of the remaining particles have surface areas ranging from 150 to 550 µm². Figure 10B shows the distribution of the length of MP particles. As can be seen, the maximum length is 40 µm and the minimum length is 16 µm, while most MP particles have a length ranging from 18 to 28 µm. Figure 10C displays the width of the MP particles. As seen from the graph, the smallest width is 9 µm, while the largest width is 30 µm. The majority of the MPs have a width range between 12 and 24 µm. Figure 10D shows the circularity of the MP particles. Among all the MP particles, only 4 have a low circularity (0.1–0.4), while most of the MP particles have circularity ranging from 0.65 to 0.95. Circularity is a measure of how closely a shape resembles a perfect circle. Circularity values near 1 represent perfect circles, while values close to 0 indicate shapes that deviate significantly from circularity. Based on the literature, particles that are more circular in shape are found to be less toxic, while those that deviate from a circular shape, manifesting more stretched or fiber-like forms, are associated with a higher level of toxicity24.

Length (A), width (B), area (C) and circularity (D) of MP particles released from steeping a single teabag.
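
The study performs this characterization with MATLAB's regionprops; as a rough analogue, the sketch below uses scikit-image's regionprops to extract area, length (major axis), width (minor axis), and circularity (4·π·area / perimeter²) from a labeled binary particle mask. The mask and pixel size are synthetic placeholders, not the study's data.

```python
# A comparable sketch of particle characterization with scikit-image's
# regionprops (the study uses MATLAB's regionprops). Mask and pixel size are
# synthetic placeholders; circularity is computed as 4*pi*area / perimeter^2.
import numpy as np
from skimage import measure

mask = np.zeros((100, 100), dtype=bool)
rr, cc = np.ogrid[:100, :100]
mask[(rr - 50) ** 2 + (cc - 50) ** 2 < 12 ** 2] = True    # one roughly circular particle

pixel_size_um = 2.0                                       # microns per pixel (assumed)
labels = measure.label(mask)
for region in measure.regionprops(labels):
    area_um2 = region.area * pixel_size_um ** 2
    length_um = region.major_axis_length * pixel_size_um
    width_um = region.minor_axis_length * pixel_size_um
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"area={area_um2:.0f} um^2, length={length_um:.1f} um, "
          f"width={width_um:.1f} um, circularity={circularity:.2f}")
```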

View original post here:
Machine learning driven methodology for enhanced nylon microplastic detection and characterization | Scientific Reports - Nature.com

Read More..

Revolutionizing Healthcare: The Impact of Machine Learning | by NIST Cloud Computing Club | Feb, 2024 – Medium

Thanks to technology breakthroughs, the healthcare business has undergone a dramatic transition in recent years. Machine Learning (ML) is at the vanguard of this revolution. Machine learning, a subset of artificial intelligence, is revolutionizing the healthcare industry with the promise of better diagnosis, individualized treatment plans, and more effective healthcare systems.

Machine learning has made significant advances in healthcare, one of which is its unmatched speed and accuracy in analyzing large volumes of medical data. Machine learning algorithms have the ability to sort through genomic data, medical imaging, and electronic health records, revealing patterns and connections that human eyes would miss. This capacity is particularly important for early illness diagnosis and detection.

For example, in radiology, ML algorithms are enhancing the accuracy of medical imaging interpretations. They can quickly analyze complex medical images like MRIs and CT scans, aiding radiologists in detecting abnormalities and identifying potential health issues. This not only expedites the diagnostic process but also improves the precision of medical diagnoses.

The idea of individualized medicine is being revolutionized by machine learning. With the use of individual patient data analysis, including lifestyle factors, genetic information, and therapy responses, machine learning algorithms can customize therapies to meet the specific needs of every patient. This method is more focused, reducing side effects and maximizing the effectiveness of treatment.

In cancer treatment, for example, ML is being employed to predict how specific cancer types will respond to various treatment options based on genetic markers. This enables oncologists to recommend personalized treatment plans, improving the chances of successful outcomes and reducing the need for trial-and-error approaches.

Beyond diagnostics and treatment, Machine Learning is also playing a pivotal role in optimizing healthcare systems. Predictive analytics can forecast patient admission rates, enabling hospitals to allocate resources efficiently. ML algorithms can identify trends in patient data to anticipate disease outbreaks, allowing for proactive public health measures.

Furthermore, ML-powered chatbots and virtual health aides are revolutionizing patient relationships. These solutions promote more easily available and convenient healthcare services by offering real-time monitoring for patients with chronic diseases, scheduling appointments, and giving prompt answers to health-related questions.

Although machine learning has a bright future in healthcare, there are obstacles and moral issues to be addressed. Careful thought must be given to matters like algorithm bias, data privacy, and the interpretability of machine learning models. Ensuring the proper implementation of machine learning (ML) in healthcare requires striking a balance between innovation and ethical principles.

In conclusion, the integration of Machine Learning in healthcare is reshaping the industry, from diagnostics to personalized treatment and system optimization. As these technologies continue to advance, they hold the potential to revolutionize patient care, improve outcomes, and usher in a new era of precision medicine. While challenges persist, the ongoing collaboration between healthcare professionals, data scientists, and policymakers is essential for realizing the full benefits of Machine Learning in healthcare.

Visit link:
Revolutionizing Healthcare: The Impact of Machine Learning | by NIST Cloud Computing Club | Feb, 2024 - Medium

Read More..

The Role Of Augmented Analytics In Personal Finance – Seattle Medium

Finances FYI Presented by JPMorgan Chase

In the rapidly evolving landscape of technology, augmented analytics is a transformative force, reshaping how businesses operate and empowering consumers with actionable insights. This paradigm shift is particularly evident in the financial sector, where the integration of artificial intelligence (AI) has played a pivotal role in enhancing decision-making processes and the consumer experience. We delve into the concept of augmented analytics and explore its impact on financial services businesses and their consumers.

Augmented analytics refers to using AI and machine learning (ML) technologies to enhance data analytics, automate insights generation, and facilitate decision-making. Unlike traditional analytics, which often requires extensive expertise and time-consuming processes, augmented analytics leverages AI to automate data preparation, pattern recognition, and predictive modeling. The goal is to empower users, including business analysts and non-technical stakeholders, to make data-driven decisions effortlessly.

The evolution of augmented analytics in finance has been a dynamic journey marked by technological advancements, changing business needs, and an increasing emphasis on data-driven decision-making.

Business Intelligence (BI) tools allow financial institutions to organize and visualize their data with dashboards and reports that offer a user-friendly interface for data exploration. By incorporating machine learning algorithms, augmented analytics can identify patterns, trends, and correlations within data, enabling predictive analytics. This capability is invaluable for forecasting and anticipating future trends, especially in finance. Businesses can quickly respond to market changes, identify opportunities, and mitigate risks more effectively, accelerating the decision-making process with real-time insights.

Augmented analytics automates the process of data cleaning, integration, and interpretation, reducing the time and effort required for data preparation. It ensures that the data used for analysis is accurate and reliable. It also frees up valuable time for analysts to focus on strategic initiatives, allowing organizations to extract more value from their data.

Augmented analytics makes data analysis accessible to a broader audience within an organization using natural language interfaces. Making analytics more accessible to individuals without a technical background democratizes data-driven decision-making and promotes a data-driven culture across all departments.

With the advent of more advanced analytics techniques, the financial industry has shifted its focus from descriptive analytics to predictive analytics. Financial businesses can employ machine learning algorithms for tasks like credit scoring, fraud detection, and risk assessment. AI-driven algorithms have demonstrated significant improvements in accuracy and efficiency in these areas.

The evolution of augmented analytics also brings forth considerations for ethical and responsible AI usage in finance. As technology advances, fairness, transparency, and regulation compliance are paramount.

Augmented analytics can analyze vast datasets to identify meaningful customer segments based on behavior, preferences, and demographics. Marketers can then target specific segments with more personalized and relevant content, improving the chances of engagement and conversion.
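
As a minimal sketch of the segmentation step described above, the example below clusters customers on a few behavioral features with k-means. The feature names, distributions, and number of segments are illustrative assumptions, not a prescription for any particular platform.

```python
# A minimal sketch of customer segmentation with k-means. Feature names and the
# number of segments are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Hypothetical features per customer: monthly spend, logins per month, products held.
customers = np.column_stack([
    rng.gamma(2.0, 150.0, 1000),
    rng.poisson(6, 1000),
    rng.integers(1, 6, 1000),
])

X = StandardScaler().fit_transform(customers)        # scale features before clustering
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("customers per segment:", np.bincount(segments))
```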

By leveraging predictive analytics, augmented analytics can forecast future customer behavior based on historical data. Marketers can use these insights to anticipate customer needs, tailor marketing campaigns, and offer personalized recommendations.

Augmented analytics automates extracting insights from marketing data, enabling marketers to focus on strategy rather than data analysis. This results in quicker decision-making and the ability to respond rapidly to changing market conditions.

Analyzing consumer interactions with content becomes more sophisticated with augmented analytics. Marketers can gain insights into which types of content resonate most with different audience segments, allowing for the optimization of content creation and distribution strategies.

Augmented analytics enables the creation of personalized products and services tailored to individual consumer preferences that enhance customer satisfaction and loyalty. By understanding preferences and behaviors, marketers can deliver tailored messages, offers, and experiences that are more likely to resonate with consumers.

With the ability to process data in real time, augmented analytics allows marketers to quickly adapt their strategies. This is particularly valuable in dynamic environments where consumer preferences and market trends change rapidly.

By analyzing customer data, augmented analytics aids in the identification of factors influencing customer churn. Marketers can then implement targeted retention strategies, such as personalized loyalty programs or special offers, to retain valuable customers.
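
A simple way to surface candidate churn drivers is to inspect feature importances from a tree ensemble. The sketch below uses synthetic data and hypothetical feature names; a real analysis would validate these signals before acting on them.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Synthetic churn table; the feature names are purely illustrative.
feature_names = ["tenure_months", "monthly_spend", "support_tickets",
                 "discount_used", "logins_per_week"]
X, y = make_classification(n_samples=3000, n_features=5, n_informative=3, random_state=1)

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Rank the factors the model leans on most when predicting churn.
importances = pd.Series(model.feature_importances_, index=feature_names).sort_values(ascending=False)
print(importances)
```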

Augmented analytics can help ensure marketing efforts comply with ethical standards and regulations. It can flag potential privacy issues or inappropriate targeting, enabling marketers to maintain trust with consumers.

The evolution of augmented analytics in finance reflects a broader trend in the industry towards leveraging advanced technologies to gain a competitive edge. As financial institutions continue to navigate an increasingly data-driven landscape, augmented analytics plays a crucial role in driving efficiency, improving decision-making, and ultimately shaping the future of finance. Meanwhile, consumers can expect more personalized experiences, assistance in making better-informed financial decisions, and increased effectiveness of marketing strategies.


Read this article:
The Role Of Augmented Analytics In Personal Finance - Seattle Medium

Read More..

Unraveling the Symphony of Machine Learning Algorithms | by Niladri Das | Feb, 2024 – Medium

In the ever-evolving landscape of technology, one symphony takes the lead, orchestrating innovation and intelligence: Machine Learning. For engineers, diving into the heart of these algorithms opens the door to a realm where data dances, patterns pirouette, and intelligence performs a mesmerizing ballet. Let's embark on this poetic journey through the intricate world of Machine Learning, unraveling the magic behind its algorithms.

Harmony of Intelligence: Exploring Machine Learning Algorithms

Delve into the rhythmic world of Machine Learning algorithms. Discover how they conduct the symphony of intelligence, creating patterns from chaos. Tune in to the future of tech.

1. The Prelude: Understanding Machine Learning
2. Dance of Supervised Learning
3. Unveiling the Unsupervised Waltz
4. Reinforcement Rhapsody
5. Enchanting Ensemble of Neural Networks
6. Decision Trees: Nature's Algorithmic Poetry
7. Clustering Chronicles: Grouping Galore
8. Regression Revelry: Predicting Possibilities
9. The Gradient Descent Ballet
10. Random Forest: A Symphony of Decision Trees
11. The Art of Feature Engineering
12. Dimensionality Duet: Reducing Complexity
13. Natural Language Processing: Linguistic Harmony
14. Generative Adversarial Opera
15. Conclusion: The Grand Finale of Learning

In the opening act, we unravel the concept of Machine Learning, where computers learn without explicit programming. Picture it as a grand overture, setting the stage for the algorithms' magnificent performance.

Enter the enchanting waltz of Supervised Learning, where algorithms learn from labelled data. It's like a dance instructor guiding the model to perfection, step by step, ensuring it captures the rhythm of the data.
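
Stripped of the metaphor, supervised learning is the familiar fit-then-evaluate loop. A minimal sketch on the classic labelled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Labelled data: measurements (X) paired with known species (y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Fit on labelled examples, then check the learned "steps" on held-out data.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("Held-out accuracy:", round(clf.score(X_test, y_test), 3))
```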

Switch gears to the Unsupervised Waltz, a dance without predefined steps. Algorithms explore patterns and relationships in data, creating an elegant dance where the steps emerge organically.

In the realm of Reinforcement Rhapsody, algorithms learn through trial and error. It's a performance where actions receive applause or correction, shaping a learning experience reminiscent of a musical crescendo.
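
A compact way to see trial-and-error learning is the epsilon-greedy multi-armed bandit below, where invented payout probabilities play the role of applause and correction:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])   # hidden payout probability of each action
estimates = np.zeros(3)
counts = np.zeros(3)
epsilon = 0.1

for step in range(2000):
    # Trial and error: mostly exploit the best-looking action, occasionally explore.
    action = rng.integers(3) if rng.random() < epsilon else int(np.argmax(estimates))
    reward = float(rng.random() < true_rewards[action])   # "applause" or "correction"
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print("Estimated values:", np.round(estimates, 2))
print("Preferred action:", int(np.argmax(estimates)))
```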

The Neural Networks ensemble takes centre stage, mimicking the human brain's interconnected neurons. Imagine an orchestra where each instrument represents a neuron, creating a symphony of intelligence.
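
Underneath the metaphor, a neural network is layers of weighted sums passed through nonlinearities. A tiny forward pass in plain NumPy, with randomly initialized (untrained) weights, illustrates the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A two-layer network: 4 inputs -> 8 hidden "neurons" -> 3 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    hidden = relu(x @ W1 + b1)          # each hidden unit combines weighted inputs
    logits = hidden @ W2 + b2
    z = logits - logits.max()           # stabilized softmax over the output "instruments"
    return np.exp(z) / np.exp(z).sum()

print(forward(np.array([0.5, -1.2, 0.3, 2.0])))
```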

Nature unfolds its algorithmic poetry in Decision Trees. Like the branches of a tree, decisions branch out, creating a harmonious flow of choices: a poetic rendition of nature's algorithm.

Dive into the Clustering Chronicles, a tale of grouping and categorization. Algorithms harmonize disparate elements, creating clusters that echo the melody of organized information.

In the Regression Revelry, algorithms predict outcomes based on historical data. It's a dance of forecasting, predicting possibilities with each graceful step, bringing forth a cascade of insights.
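
In code, regression reduces to fitting a function to historical pairs of inputs and outcomes. A minimal sketch with invented advertising-spend and sales figures:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Historical observations: advertising spend (feature) vs. sales (outcome), synthetic numbers.
spend = rng.uniform(1, 10, size=(200, 1))
sales = 3.0 * spend[:, 0] + 5.0 + rng.normal(0, 1.0, 200)

model = LinearRegression().fit(spend, sales)
print("Learned slope:", round(model.coef_[0], 2), "intercept:", round(model.intercept_, 2))
print("Predicted sales at spend=12:", round(model.predict([[12.0]])[0], 1))
```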

Join the Gradient Descent Ballet, a choreography of optimization. Algorithms dance towards the minimum error, descending gracefully like dancers moving towards the spotlight of perfection.
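
The "dance towards the minimum error" is simply iterative parameter updates along the negative gradient. A from-scratch sketch fitting a line by gradient descent on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # target: slope 2.0, intercept 0.5

w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    pred = w * x + b
    error = pred - y
    grad_w = 2 * np.mean(error * x)     # d(MSE)/dw
    grad_b = 2 * np.mean(error)         # d(MSE)/db
    w -= lr * grad_w                    # step towards the minimum error
    b -= lr * grad_b

print(f"Learned w={w:.2f}, b={b:.2f}")
```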

Experience the symphony of Decision Trees in a Random Forest. It's a harmonious collaboration where multiple trees blend melodies, creating a robust and melodious algorithmic composition.
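
The ensemble idea is easy to see empirically: compare the cross-validated accuracy of a single decision tree against a forest of them on the same synthetic data. A sketch (results will vary with the data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6, random_state=7)

tree = DecisionTreeClassifier(random_state=7)
forest = RandomForestClassifier(n_estimators=200, random_state=7)

print("Single tree CV accuracy:", cross_val_score(tree, X, y, cv=5).mean().round(3))
print("Random forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean().round(3))
```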

Delve into the Art of Feature Engineering, a craft that enhances algorithmic melodies. Engineers sculpt features, shaping the data into a masterpiece that resonates with the algorithm's understanding.
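
In practice, feature engineering often means deriving ratios, time components, and flags from raw columns. A small pandas sketch with a hypothetical orders table:

```python
import pandas as pd

# Raw events table (hypothetical columns); engineering turns it into model-ready signals.
orders = pd.DataFrame({
    "order_ts": pd.to_datetime(["2024-02-01 09:15", "2024-02-03 18:40", "2024-02-07 12:05"]),
    "items": [3, 1, 5],
    "total": [42.0, 9.5, 87.0],
})

features = orders.assign(
    avg_item_price=orders["total"] / orders["items"],       # ratio feature
    order_hour=orders["order_ts"].dt.hour,                   # time-of-day signal
    is_weekend=orders["order_ts"].dt.dayofweek >= 5,         # boolean flag
)
print(features)
```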

In the Dimensionality Duet, algorithms dance in tandem to reduce complexity. It's a performance where unnecessary dimensions are gracefully discarded, leaving a streamlined and elegant representation behind.
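
Principal component analysis is one standard dimensionality-reduction technique. The sketch below compresses the 64-pixel digits dataset to ten components and reports how much variance survives:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# 64-dimensional handwritten-digit images compressed to a handful of components.
X, _ = load_digits(return_X_y=True)
pca = PCA(n_components=10).fit(X)

print("Original dimensions:", X.shape[1])
print("Reduced dimensions:", pca.n_components_)
print("Variance retained:", round(pca.explained_variance_ratio_.sum(), 3))
```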

Enter the realm of Linguistic Harmony with Natural Language Processing. Algorithms decipher language nuances, creating a harmonious dialogue between machines and human expression.
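
A classical (pre-LLM) illustration of machines handling language is TF-IDF vectorization plus cosine similarity, which scores a paraphrase as closer than an unrelated sentence:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The central bank raised interest rates again this quarter.",
    "Interest rates were increased by the central bank.",
    "The orchestra performed a new symphony last night.",
]

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform(docs)

# Similarity of the first sentence to the other two: paraphrase vs. unrelated text.
print(cosine_similarity(vectors[0], vectors[1:]))
```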

In the Generative Adversarial Opera, algorithms engage in a captivating duet. One generates, the other critiques, leading to a dynamic performance where creation and refinement dance hand in hand.
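
A minimal sketch of the adversarial duet, assuming PyTorch is available, pits a tiny generator against a tiny discriminator over an invented one-dimensional Gaussian target; real GANs are far larger but follow the same alternating loop.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Generator tries to produce samples resembling N(3, 1); discriminator critiques real vs. fake.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real = torch.randn(64, 1) + 3.0            # samples from the target distribution
    noise = torch.randn(64, 1)
    fake = G(noise)

    # Discriminator step: reward correct critiques of real and generated samples.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the critic mistakes for real ones.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("Generated sample mean:", G(torch.randn(1000, 1)).mean().item())
```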

As our exploration concludes, witness the Grand Finale of Learning. Reflect on the harmonies of algorithms and the choreography of data, and envision the future where Machine Learning continues to compose the symphony of technological evolution.

Q1: Can you explain the core concept of Machine Learning in simple terms?

A: Absolutely! Machine Learning is like teaching computers to learn from experience, enabling them to improve their performance over time.

Q2: How does Unsupervised Learning differ from Supervised Learning?

A: In Unsupervised Learning, the algorithm explores data without labelled guidance, while Supervised Learning follows a structured path with labelled data for training.

Q3: What role does Feature Engineering play in the algorithmic world?

A: Feature Engineering is akin to sculpting data; it refines and shapes features to enhance the algorithm's understanding and performance.

Q4: Why is Dimensionality Reduction important in Machine Learning?

A: Dimensionality Reduction simplifies data by eliminating unnecessary dimensions, making algorithms more efficient and effective.

Q5: Can you elaborate on the significance of Natural Language Processing?

A: Natural Language Processing enables machines to understand and interpret human language, fostering seamless communication between humans and machines.

Excerpt from:
Unraveling the Symphony of Machine Learning Algorithms | by Niladri Das | Feb, 2024 - Medium

Read More..

New center tackles brain-inspired computing research – Daily Trojan Online

Joshua Yang, a professor of electrical and computer engineering, secured a five-year grant in November 2023 from the United States Air Force to establish and lead a Center of Excellence for researching neuromorphic computing. The COE program is set to launch June 2024 with offices in Seaver Science Center.

Neuromorphic computing is an approach in which computers are designed to mimic the human brain in how they process information and perform tasks. The center aims to address the U.S. Department of Defense's research objective of building efficient computing devices that can withstand the extreme environmental conditions often encountered in U.S. Air Force and Space Force applications.

"Neuromorphic computing is at the center of future machine-learning and artificial intelligence with improved efficiency," Yang wrote in a statement to the Daily Trojan. The field is a new way of computing that strives to imitate the human brain by combining computing power, resiliency, learning-efficiency and energy-efficiency.

The grant opportunity, entitled Center of Excellence: Extreme Neuromorphic Materials and Computing, is a collaborative project among the Air Force Office of Scientific Research, Air Force Research Lab Technical Directorates and university researchers. Yang will lead as director of the Center of Neuromorphic Computing and Extreme Environment, along with other researchers from UCLA, Duke University, the Rochester Institute of Technology and the University of Texas at San Antonio.

According to an announcement released by AFRL, the demand for artificial intelligence and data processing computation doubles every three-and-a-half months, while processor performance doubles only every three-and-a-half years; put differently, over a single 3.5-year processor generation, demand would double roughly twelve times while performance doubles once, a gap of several thousandfold that underscores the need for robust and efficient computing systems.

Yang's primary research focuses on identifying alternatives to traditional materials and devices used in computing systems to enable efficient AI and machine learning capabilities. Yang is also co-director of the Institute for the Future of Computing at USC.

The DOD aims to build computing devices that can sustain harsh environments. These devices, used in unmanned aerial vehicles and satellite operations, are exposed to corrosion, erosion and extreme heat, which necessitate resilient systems for optimal performance and longevity.

Tanvi Gandhi, a graduate student studying electrical and computer engineering with an emphasis in machine learning and data science, said the research could change computing and could open up new horizons.

"I've worked on various [machine learning and data science] projects," Gandhi said. "It takes huge processing power and energy consumption to train pretty much any noteworthy neural network, so what could even be better by mimicking the way our brain functions [in computing systems]?"

Janhavi Pradhan, a graduate student studying applied data science, said she was excited about the research because it hasn't been explored much.

"A parallel that I can draw is how deep learning also mimics the way [the] human brain thinks," Pradhan said. "That's why it has gained popularity over regular [machine learning] models. So, processors that model [the] human brain could also possibly have better performance. That's something that excites me about this."

Read the rest here:
New center tackles brain-inspired computing research - Daily Trojan Online

Read More..