
Slack has been using data from your chats to train its machine learning models – Engadget

Slack trains machine-learning models on user messages, files and other content without explicit permission. The training is opt-out, meaning your private data will be leeched by default. Making matters worse, you'll have to ask your organization's Slack admin (human resources, IT, etc.) to email the company to ask it to stop. (You can't do it yourself.) Welcome to the dark side of the new AI training data gold rush.

Corey Quinn, an executive at DuckBill Group, spotted the policy in a blurb in Slack's Privacy Principles and posted about it on X (via PCMag). The section reads (emphasis ours): "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."

In response to concerns over the practice, Slack published a blog post on Friday evening to clarify how its customers' data is used. According to the company, customer data is not used to train any of Slack's generative AI products (it relies on third-party LLMs for those) but is fed to its machine learning models for features like channel and emoji recommendations and search results. For those applications, the post says, Slack's traditional ML models "use de-identified, aggregate data and do not access message content in DMs, private channels, or public channels." That data may include things like message timestamps and the number of interactions between users.

A Salesforce spokesperson reiterated this in a statement to Engadget, also saying that "we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data."

I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC

Corey Quinn (@QuinnyPig) May 16, 2024

The opt-out process requires you to do all the work to protect your data. According to the privacy notice, "To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line 'Slack Global model opt-out request.' We will process your request and respond once the opt out has been completed."

The company replied to Quinn's message on X: "To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models."

How long ago the Salesforce-owned company snuck the tidbit into its terms is unclear. It's misleading, at best, to say customers can opt out when "customers" doesn't include employees working within an organization. They have to ask whoever handles Slack access at their business to do that, and I hope they will oblige.

Inconsistencies in Slack's privacy policies add to the confusion. One section states, "When developing AI/ML models or otherwise analyzing Customer Data, Slack can't access the underlying content. We have various technical measures preventing this from occurring." However, the machine-learning model training policy seemingly contradicts this statement, leaving plenty of room for confusion.

In addition, Slack's webpage marketing its premium generative AI tools reads, "Work without worry. Your data is your data. We don't use it to train Slack AI. Everything runs on Slack's secure infrastructure, meeting the same compliance standards as Slack itself."

In this case, the company is speaking of its premium generative AI tools, separate from the machine learning models it's training without explicit permission. However, as PCMag notes, implying that all of your data is safe from AI training is, at best, highly misleading when the company apparently gets to pick and choose which AI models that statement covers.

Update, May 18 2024, 3:24 PM ET: This story has been updated to include new information from Slack, which published a blog post explaining its practices in response to the community's concerns.

Update, May 19 2024, 12:41 PM ET: This story and headline have been updated to reflect additional context provided by Slack about how it uses customer data.

Read the rest here:
Slack has been using data from your chats to train its machine learning models - Engadget


Human Touch vs. Machine Precision: Debating the Role of AI in Content Creation – hackernoon.com

Back in the day, whenever you had an idea and wanted to bring it to life, you had to get a pen and paper. At some later point, you could do this using a computer, but you still had to type, and the idea still had to come from you. Today, bringing an idea to life in the form of a written piece is as easy as entering a prompt and having AI-driven tech do the rest.

Artificial intelligence isn't just transforming the world of content creation; it's creating a new playing field. But is it all it's cracked up to be? The debate between AI precision and human creativity is an ongoing one. Some argue that AI is incapable of embodying the nuances of human emotion and applying them to its outputs. In this article, we'll explore these arguments by looking at both ends and prescribing a balance between AI precision and human creativity.

Generative AI is a phrase that almost everyone in the field of content creation has heard in some way. While many know it in its finished state as a tool for eliminating the time and stress it takes to generate ideas, a lot more happens before it is able to do this.

Generative AI involves the design and creation of machine learning algorithms that can learn a set of rules by studying a vast amount of existing data.

At its core, this training data, as it's called, comprises a multitude of known information from which rules are learned to create something new.

For example, a generative AI that has been trained on 2,000 descriptions of the moon might establish rules that say the moon is solid, rocky, and roughly 238,855 miles away. Therefore, when it is asked to generate a piece of text describing the moon, it has all the information it needs to do so. Although AI-powered programs can produce intrinsically accurate content, they often struggle with adding a personal touch, making the work feel robotic, impersonal, or lacking in depth.

While AI certainly has its strong points, it is both hard to ignore and crucial to acknowledge the unique nuances that humans bring to content creation. People, unlike machines, possess emotional intelligence, critical thinking, and dynamic mental creativity that can be tweaked as the need arises. Artificial intelligence, on the other hand, excels in identifying patterns, processing large amounts of data, and making data-driven decisions with no emotional undertones.

The flexibility of the human mind makes it easy for content creators to adjust the style, tone, essence, and approach of their writing based on the purpose, audience, and medium of the content. There are also intricacies involved in understanding and expressing emotions, which ultimately adds depth to human-written content and makes it more relatable to another person.

Also, the ability to imagine what someone else might be thinking or feeling as they're reading content is a crucial component of content creation that generative AI is unable to do.

The utilization of AI in content creation has become increasingly prevalent in the digital landscape today. Being able to process massive amounts of data in a few seconds and produce content can significantly boost efficiency and productivity. Articles, reports, guest posts, and product descriptions can be generated at a much faster rate than humans can manage.

There's also the fact that AI-generated content offers consistency in writing style and tone. This uniformity can help businesses build a strong and reliable brand identity that customers can come to identify with.

Furthermore, scaling up your content to meet a growing and diverse audience is seamless with Artificial Intelligence. Businesses, writers, or brand designers who need to create content for a growing customer base can adapt, incorporate, and produce content accordingly to save time and effort.

Connecting to each customer on a personal level is not far-fetched here, as AI can also be used to automate the process of analyzing customer data and personalizing the content to suit the varied tastes of your audience.

One of the major drawbacks of AI in content creation is the apparent lack of creativity, empathy, and originality in the finished work. Being able to produce massive amounts of content, and personalize it if need be, is great for efficiency, but everything produced relies on pre-existing templates and algorithms.

This quickly makes AI output repetitive, formulaic, and lacking the creative spark of the human mind. AI can also struggle to fully grasp the context and nuances of certain topics. It may generate content that is scientifically and factually accurate but includes references that are unfamiliar to certain readers in terms of relevance and depth.

It is also important to emphasize the potential biases that may have been injected into the training data which an AI tool uses as its information source. If a generative AI was trained on data that is skewed in favor of one group over another, then all content generated by the system will perpetuate those biases. These biases can range from selection and confirmation bias to stereotyping and measurement bias. The effect of this, in the long run, may be reduced interaction, as people start to feel less connected to the essence of the generated content.

A pivotal concept in the ever-changing digital landscape of today is the collaborative approach combining AI precision with human creativity. This collaboration involves the seamless integration of machine-learning capability with the human mind in solving problems. In this symbiotic relationship, both entities can complement each other's strengths and create content that is both factual and full of emotional depth.

AI systems are complex and occasionally unpredictable, so human oversight acts as a buffer to limit mistakes and ensure those mistakes are corrected to prevent recurrence.

While AI has excelled at creating large amounts of content in little time, there's something about the human touch that adds meaning to information. Generating data that can stand the test of time is something the best AI tools can manage, but connecting with it on a deeper level can only be achieved with the human factor.

For this reason, it's crucial to explore more ways by which humans and AI can collaborate to bring about the best possible piece of content.

Follow this link:
Human Touch vs. Machine Precision: Debating the Role of AI in Content Creation - hackernoon.com


Machine Learning Advances Toward Certification For Cockpit Use – Aviation Week


PilotEye is a vision-based situational-awareness system using a neural network to detect objects.

Credit: Avidyne/Daedalean

Swiss startup Daedalean anticipates certification of the first cockpit avionics to incorporate machine learning (ML) by year-end or early in 2025. The company is working with U.S. avionics manufacturer Avidyne to certify the PilotEye visual awareness system, which uses a neural network to recognize...

"Machine Learning Advances Toward Certification For Cockpit Use" is published in Advanced Air Mobility Report, an Aviation Week Intelligence Network (AWIN) Market Briefing, and is included with an AWIN Premium membership.



Visit link:
Machine Learning Advances Toward Certification For Cockpit Use - Aviation Week


Optimizing multi-spectral ore sorting incorporating wavelength selection utilizing neighborhood component analysis for … – Nature.com

Mineral-intensive technologies such as renewable energy and electric vehicles will be in high demand as the world addresses climate change and transitions to a sustainable energy future. Despite this, the mining sector, especially copper mining, is facing considerable difficulties due to the growing demand1,2. The depletion of high-grade ore and the rise of high-arsenic copper resources are increasingly prominent issues in this field. Arsenic in low-grade copper ores not only makes mineral processing more difficult, but also causes environmental and health concerns due to the presence of arsenic in wastewater and exhaust gas3,4. The correlation between arsenic exposure and a range of health problems highlights the pressing requirement for ecologically viable methods in the mining sector. The incorporation of modern technologies such as hyperspectral imaging into sensor-based ore sorting systems has significant potential in this situation5. Ore sorting systems can successfully separate high-arsenic ores from valuable material by utilizing the precise and accurate analysis of mineral composition provided by hyperspectral imaging. This not only mitigates environmental dangers but also reduces processing costs. This strategy not only tackles the difficulties presented by ores containing high levels of arsenic, but also aids in the advancement of a mining industry that is environmentally friendly, in line with the objectives of the Paris Agreement.

Sensor-based ore sorting has become a crucial technology in mineral processing, providing numerous advantages that transform conventional mining methods. With sensor-based ore sorting systems, valuable minerals can be selectively extracted from ore streams according to their unique physical and chemical properties using advanced sensor technologies. This selective extraction maximizes the efficient use of resources by effectively separating valuable ore from non-valuable materials (gangue minerals). In the field of mineral processing, sensor-based ore sorting is a vital component, as it enhances ore grades and minimizes the amount of waste material that is processed6. Evidence demonstrates that it effectively decreases the usage of energy, water, and reagents, while also minimizing the formation of fine waste, by removing waste before additional processing7,8. To successfully apply sensor-based sorting, it is crucial to select a sensing approach that can efficiently distinguish between ore and waste9. Sensor fusion, the integration of data from several sensing systems, has the potential to enhance the characterization of the detected material and improve sorting capability4. Microwave imaging (MWI) is a promising technique that can penetrate deeply into rock particles and serve as a complementary approach for analyzing ores with significant differences in electromagnetic characteristics10. The efficacy of MWI in ore sorting has been validated by simulations and tests, affirming its capability to segregate valuable minerals or metals from unproductive particles. Sensor-based ore sorting thus presents substantial advantages in terms of reducing costs and enhancing efficiency in mineral processing.

Ore sorting techniques can be significantly enhanced by leveraging hyperspectral imaging technology, which offers unparalleled capabilities for mineral characterization and classification. Hyperspectral imaging allows ore sorting systems to analyze the distinct spectral fingerprints of minerals over a wide range of wavelengths, unlike traditional sorting methods that only consider physical attributes like size, shape, and density. This enables the identification and differentiation of minerals by analyzing their unique chemical compositions and optical features. Hyperspectral imaging is used in sensor-based ore sorting to analyze ore streams in real-time without causing damage5. This technique offers important details on the mineralogy and quality of the material being processed. By integrating hyperspectral imaging technology into sorting systems, mining companies can enhance their efficiency, precision, and selectivity in segregating valuable minerals from waste material. As a result, mineral processing enterprises have higher rates of recovery, lower costs of processing, and increased profitability.

The processing of hyperspectral data is more challenging than that of other types of data due to the sheer volume of information collected, which may be affected by issues with its dimensionality. High-dimensional spectral bands in hyperspectral images are often highly similar, which makes them susceptible to the "curse of dimensionality," a phenomenon that affects many traditional algorithms11. Within the domain of hyperspectral ore sorting systems, wavelength selection arises as a crucial strategy for enhancing sorting efficiency and precision. Wavelength selection is the process of strategically identifying and using the wavelengths of electromagnetic radiation (light) that provide the most useful information for differentiating between various minerals or compounds in an ore stream. Through the analysis of distinct spectral patterns displayed by minerals at different wavelengths, wavelength selection allows ore sorting systems to concentrate on the specific spectral bands that are most effective in distinguishing the desired minerals. By employing this focused method, the precision, effectiveness, and dependability of mineral identification and separation procedures are enhanced, resulting in better utilization of resources and increased operational performance in mineral processing. Choosing the right wavelengths is also extremely important to reduce the likelihood of false positives and negatives, to maximize the rate at which valuable minerals are recovered, and to reduce losses of potentially valuable materials to the waste stream. The significance of ore sorting lies in its ability to facilitate efficient and precise separation of valuable ore from waste or gangue materials. Based on their unique reflectance or absorption properties, sensors can effectively distinguish ore from gangue by using specific wavelengths, such as visible or mid-infrared ones. This enhances the system's ability to choose and efficiently sort materials, especially when working with intricate ores or comparable substances. Utilizing wavelength selection can improve the ability of photometric sensors to distinguish between different substances and simplify the creation of new sensors for ore sorting and mineral characterization12. A variety of advanced techniques are used to analyze multidimensional spectral data and extract relevant features from hyperspectral data, such as spectral feature extraction and machine learning algorithms, e.g. linear regression, K-means clustering, and neural networks13,14,15.

The intricate nature and extensive dimensions of multi-spectral data require the application of sophisticated data classification techniques such as Neighborhood Component Analysis (NCA). Advanced classification techniques like NCA are needed to handle multi-spectral data for several reasons. To begin with, hyperspectral data typically includes a substantial number of spectral bands, which can pose computational difficulties during processing and analysis16. The issue can be addressed by using NCA to lower the dimensionality of the data, leading to improved processing and classification efficiency17. Additionally, conventional classification methods designed for multispectral data may not be appropriate for hyperspectral data, as the latter offers more intricate and comprehensive spectral information18. The NCA method can effectively handle hyperspectral data with a high number of dimensions, achieving improved classification accuracies by taking into account both spectral and spatial information19. NCA also offers advantages such as low computational requirements and shorter classification times20. Therefore, advanced techniques like NCA are essential for accurately classifying hyperspectral data while overcoming the challenges associated with high dimensionality and detailed spectral information.
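As an illustrative sketch (not the study's actual pipeline), NCA as implemented in scikit-learn learns a linear transformation of the feature space, and one common heuristic is to rank spectral bands by the magnitude of their weights in that transformation. All shapes, labels, and the ranking heuristic below are assumptions for demonstration:

```python
# Hypothetical sketch: ranking hyperspectral bands with scikit-learn's
# NeighborhoodComponentsAnalysis. Data are random placeholders, not
# real mineral spectra.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

rng = np.random.default_rng(0)
n_pixels, n_bands = 300, 204          # e.g. 204 VNIR bands per pixel
X = rng.random((n_pixels, n_bands))   # toy reflectance spectra
y = rng.integers(0, 3, n_pixels)      # three toy mineral classes

nca = NeighborhoodComponentsAnalysis(n_components=10, random_state=0)
nca.fit(X, y)

# Rank bands by the norm of their column in the learned linear
# transformation (one heuristic for wavelength selection).
band_importance = np.linalg.norm(nca.components_, axis=0)
top_bands = np.argsort(band_importance)[::-1][:8]
print("selected band indices:", top_bands)
```

On real spectra, the top-ranked indices would map back to physical wavelengths, which is exactly the interpretability that motivates selection over extraction.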

In this study, Neighborhood Component Analysis (NCA) was applied as a preprocessing step to reduce the dimension of hyperspectral (HS) data of arsenic-bearing minerals by identifying several wavelength bands important for mineral identification. The identified wavelength bands were then used as inputs to train machine learning algorithms for identifying arsenic (As) mineral concentrations in simulated ore materials. Multispectral (MS) cameras are more cost-effective and provide faster data collection and processing than HS cameras; hence, they are expected to enable mineral identification using data from only a few wavelength bands. The HS data of arsenic-bearing minerals (enargite) were processed with NCA, a machine learning method, as a band selector, which identified several wavelength bands important for mineral identification. Then, the data containing only the minimum number of wavelengths were analyzed to identify mineral contents/ratios using machine learning algorithms. This will improve the selectivity of wavelengths, taking into account the ore characteristics produced by each mine. In addition, applying the machine learning algorithm proposed herein to HS image analysis is expected to improve the efficiency of ore selectivity, i.e. the speed of the ore sorting process.
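The two-stage idea described above, a handful of selected bands feeding a downstream classifier, can be sketched as follows. The band indices, class labels, and the choice of a random forest are illustrative assumptions, not the study's reported configuration:

```python
# Hypothetical sketch: classifying toy "As-content" classes from a few
# selected wavelength bands. Band indices stand in for an NCA-chosen set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X_full = rng.random((400, 204))            # toy hyperspectral pixels
selected = [12, 45, 88, 130, 160, 190]     # assumed NCA-selected bands
X = X_full[:, selected]                    # keep only the selected bands
y = rng.integers(0, 4, 400)                # four toy As-content classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

With random data the accuracy is near chance; the point is only the shape of the pipeline: a 204-band cube reduced to six bands before training.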

To develop environmentally sustainable resources, it's essential to develop advanced metal recovery technology for these high-arsenic ores, and Sensor-Based Ore Sorting (SBOS) can achieve this. SBOS, when implemented as a presorting step before the normal beneficiation process, can reduce the amount of ore that must be processed to produce a given amount of value-added metal, which has a significant impact on the economics of the mine and the plant as a whole21,22,23. It can also reduce the environmental impact by reducing the tailings produced in the subsequent beneficiation process. Non-destructively classified tailings are geotechnically stable and can be easily stored due to their low moisture content24. Robben and Wotruba highlighted that the introduction of SBOS would have an impact on both the environmental and economic aspects of mineral processing25. However, the authors pointed out that SBOS is still in the market-entry stage in the mineral industry and further technological development is required.

Mineral analysis requires knowledge of crystallography as well as chemical analysis26. However, the methods commonly used for mineral analysis, such as the Electron Probe Micro Analyzer (EPMA), X-ray diffraction (XRD), and Scanning Electron Microscopy (SEM), are relatively time-consuming and depend on experience27; they are not realistic in terms of identification speed, convenience, and economy when used in actual mineral processing operations.

Therefore, SBOS has been developed as a form of mineral identification suitable for beneficiation. In recent years, more and more equipment has been installed that can withstand larger production scales25. SBOS methods have utilized a range of sensing technologies, including X-ray transmission, X-ray fluorescence, optical sensing, and inductive sensing9,10,28. Furthermore, the utilization of area-scan cameras and predictive tracking systems that rely on machine learning approaches has demonstrated potential in minimizing characterization and separation errors29. Researchers have also studied the combination of data from several sensing approaches to improve the sorting ability of these systems6. While different SBOS methods have been developed and introduced, particularly focusing on SWIR data, there are few studies or methods on mineral identification/sorting using VNIR short-wavelength HS data. However, there is growing interest in visible to near-infrared (VNIR) spectroscopy for mineral identification, and in some recent studies VNIR wavelengths have been used to classify rocks and minerals30,31.

Sensor-based ore sorting methods and technologies have the potential to significantly improve ore grades and reduce waste in mineral processing6. These methods, which rely on the electromagnetic spectrum, can be classified based on their characteristics and limitations28. An example of a successful method for sorting complicated ores is the utilization of hyperspectral short-wave infrared sensors in conjunction with machine learning, as demonstrated by Tusa32. Sensor-based ore sorting can be applied at various stages in the process flow diagram, making it a versatile and valuable tool in the mining industry33.

In the field of remote sensing, mineral identification in the near-infrared region has been widely used34,35 and has shown excellent performance in ore classification. On the other hand, the high cost of HS cameras and the time required for data acquisition have been barriers to their application in actual operations, where immediate classification is required. In previous studies24,36, HS data of minerals were analyzed by deep learning to identify minerals. The use of deep learning allows the creation of more versatile and simplified learning models compared with conventional machine learning or identification methods that combine multiple machine learning models. However, since HS images consist of several hundred spectral bands, there is a high correlation between neighboring spectra, and analyzing the data without preprocessing is highly redundant and computationally intensive. Therefore, dimensionality reduction is necessary as a preprocessing step for the large amount of data generated.
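The preprocessing described here starts from the HS cube; a minimal sketch (toy spatial dimensions assumed) of flattening such a cube into a pixel-by-band matrix, the form that dimensionality reduction methods expect:

```python
# Hypothetical sketch: flattening a hyperspectral cube
# (height x width x bands) into a pixel-by-band matrix.
import numpy as np

cube = np.zeros((64, 64, 204))             # toy VNIR cube, 204 bands
pixels = cube.reshape(-1, cube.shape[-1])  # one spectrum per row
print(pixels.shape)  # (4096, 204)
```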

Dimensionality reduction methods for HS data commonly fall into two categories: band extraction and wavelength selection. Band extraction methods map a high-dimensional feature space to a low-dimensional space and therefore cannot preserve the original physical interpretation of the image, making them unsuitable as a dimensionality reduction method here37. The wavelength selection method, by contrast, can maintain the original physical interpretation of the images. According to a review by Sun and Du38, wavelength selection methods can be categorized into six groups: ranking-based methods, searching-based methods, clustering-based methods, sparsity-based methods, embedding learning-based methods, and hybrid scheme-based methods.
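The contrast between the two families can be shown with a toy sketch: band extraction (PCA here, as one common example) yields abstract mixtures of all bands, while wavelength selection keeps original band indices. Data and the kept indices are placeholders:

```python
# Hypothetical sketch: band extraction vs. wavelength selection on
# toy hyperspectral pixels (204 bands each).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.random((100, 204))

# Band extraction: each new feature mixes all 204 bands, so the
# physical wavelength meaning is lost.
X_pca = PCA(n_components=5, random_state=0).fit_transform(X)

# Wavelength selection: features remain actual bands, so each column
# still corresponds to a physical wavelength.
keep = [20, 60, 110, 150, 200]
X_sel = X[:, keep]

print(X_pca.shape, X_sel.shape)
```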

A variety of studies have explored different techniques for wavelength selection and spectral data classification in mineral processing. Ghosh39 introduced an infrared thermography-based method for sorting alumina-rich iron ores, while Kern40 suggested utilizing short-wavelength infrared and dual-energy X-ray transmission sensors for the Hammerlein Sn-In-Zn deposit. Phiri41 investigated the potential of near-infrared sensors for separating carbonate-rich gangue from copper-bearing particles in a copper-gold ore sample. Tusa32 advanced the field by evaluating hyperspectral short-wave infrared sensors, combined with machine learning methods, for pre-sorting complex ores. These studies collectively illustrate the potential of various wavelength selection techniques for enhancing the efficiency and effectiveness of ore sorting systems.

Furthermore, numerous research endeavors have investigated the implementation of machine learning algorithms to automate the task of wavelength selection and spectral data classification. Passos42 introduced an automated deep learning pipeline to optimize neural architecture and hyperparameters for spectral classification. Duan43 proposed a template matching approach achieving high accuracy without training, while Wang44 developed a multifunctional optical spectrum analysis technique utilizing support vector machines for optimal accuracy and speed. Baskir45 presented a MATLAB toolbox for user-friendly wavelength selection and automated spectral region selection. These investigations collectively underscore the potential of machine learning in automating and enhancing the process of wavelength selection and spectral data classification.

In addition, advancements in hyperspectral imaging technology have significantly expanded the potential applications of this technology46. However, the complexity of hyperspectral data, including its high dimensionality and size, requires innovative methodologies for effective processing and analysis47. These challenges have led to the development of a range of image processing and machine learning analysis pipelines46. Notably, hyperspectral imaging finds application in microscopy, enabling the capture and identification of different spectral signatures in a single-pass evaluation48.

The effectiveness of machine learning algorithms, particularly Neighborhood Component Analysis (NCA), for multi-spectral data classification in mineral processing has been highlighted in recent research. Jahoda49 and Sinaice50 both emphasize the advantages of combining spectroscopic methods with machine learning for mineral identification. Jahoda49 specifically highlights the superiority of machine learning methods in this context, while Sinaice50 proposes a system integrating hyperspectral imaging, NCA, and machine learning for rock and mineral classification. These findings are further supported by Carey51, who stresses the importance of spectrum preprocessing and a weighted-neighbors classifier for optimal mineral spectrum matching performance.

In their previous study, Okada et al.24 developed a basic technology for SBOS, using hyperspectral (HS) imaging and deep learning as an expert system for mineral identification. HS imaging is promising as an SBOS technique for avoiding As-containing copper minerals. In that study, HS imaging was used as a sensor to collect wavelength-intensity data, which were then used to train deep learning algorithms for mineral identification. The HS image is cube-shaped data with dimensions in the wavelength and spatial directions, with wavelength data from the visible to near-infrared regions (400-1000 nm, 204 bands). Mineral (hematite, chalcopyrite, galena) identification was performed by analyzing detailed wavelength data of 204 bands in the shorter wavelength range of 400-1000 nm (from the visible light region to part of the near-infrared region) using deep learning. However, the HS data used in that study consisted of 204-dimensional data, which required heavy computational resources. In addition, the HS camera itself is expensive, which was a barrier to its introduction in operating sites (mineral processing plants).

Yokoya and Iwasaki52 reported that, since each pixel provides continuous electromagnetic spectral characteristics, it's possible to obtain detailed information about the target object. Owing to its high spectral resolution, HS imaging is applied in fields such as remote sensing and quality control of food and pharmaceuticals. Robben et al.53 pointed out that minerals show specific absorption characteristics in the near-infrared region from 1300 to 2550 nm due to vibrations of the bonding molecules contained in each mineral. A skilled expert can identify some minerals visually (by color), and the continuous electromagnetic spectrum in the short wavelength region is considered to contain optical data with mineral-specific physical properties.

Sorting machines that use MS images with a reduced number of dimensions are now technically feasible. They sort by acquiring specific wavelength information predetermined for each ore type. However, even for the same type of mineral, there are subtle differences in formation and shape at each mine that affect the spectra. Additionally, the light environment inside each plant varies, which also affects the spectrum. Given these factors, ore selectivity could be improved by defining the wavelengths to be acquired for each ore type. To achieve this, we propose a framework that allows the spectral bands to be selected based on the characteristics of the ore produced. This framework will greatly support the tuning of the sorting process. As a case study, we use a mineral sample containing arsenic.

The literature review highlights various gaps in current mineral processing practices, emphasizing the need for innovative approaches to improve efficiency and sustainability. While Sensor-Based Ore Sorting (SBOS) offers promise for environmentally friendly metal recovery, further technological development is required to enhance its effectiveness and practicality in operational settings. Traditional mineral analysis methods are time-consuming and impractical for real-time processing, prompting the exploration of faster and more economical techniques. Additionally, the application of machine learning algorithms and hyperspectral imaging for mineral identification presents computational challenges and limitations in the practical implementation due to the high dimensionality of data.

In response to these challenges, the proposed framework integrates Neighborhood Component Analysis (NCA) and machine learning algorithms to address the complexities of mineral identification and sorting using multi-spectral data. By reducing data dimensionality and identifying the crucial wavelength bands, the framework enables efficient mineral identification while accounting for the unique characteristics of each ore type. Furthermore, by utilizing multi-spectral cameras with fewer bands, the framework enhances sorting efficiency and selectivity, paving the way for more sustainable mining practices and improved operational outcomes in mineral processing plants. In this study, NCA, a supervised dimensionality-reduction method, was used to perform dimensionality reduction and wavelength selection on HS data. The selected bands were then learned by machine learning algorithms in mineral identification experiments. Mineral identification using fewer wavelengths than HS data is expected to enable data acquisition with less expensive MS cameras and increase the efficiency of mineral identification.
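A minimal sketch of this band-selection idea, using scikit-learn's NeighborhoodComponentsAnalysis on synthetic stand-in data (the band counts, the column-norm ranking heuristic, and the k-NN classifier are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis

# Synthetic stand-in for HS pixel spectra: 300 pixels x 40 bands,
# with bands 0, 2 and 4 made informative for three mineral classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 3, size=300)
for c in range(3):
    X[y == c, 2 * c] += 2.0

# NCA learns a linear transform that favours discriminative features;
# ranking bands by the column norms of that transform is one simple
# (illustrative) way to pick a reduced band set for an MS camera.
nca = NeighborhoodComponentsAnalysis(n_components=5, random_state=0).fit(X, y)
band_importance = np.linalg.norm(nca.components_, axis=0)
selected_bands = np.argsort(band_importance)[::-1][:5]

# Train a classifier on the selected bands only.
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, selected_bands], y, test_size=0.25, random_state=0, stratify=y
)
accuracy = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
```

The same pattern applies to real HS pixels: fit NCA on labeled spectra, rank the wavelength bands, and configure the MS camera (and the downstream classifier) to use only the top-ranked bands for that ore type.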

The rest is here:
Optimizing multi-spectral ore sorting incorporating wavelength selection utilizing neighborhood component analysis for ... - Nature.com

EVOLVE 2024 Welcomes AWS Leader to Discuss the Transformative Power of Generative AI – GlobeNewswire

AUSTIN, Texas, May 21, 2024 (GLOBE NEWSWIRE) -- Asure (NASDAQ: ASUR), a leading provider of cloud-based Human Capital Management (HCM) software solutions, is pleased to announce that Vidya Sagar Ravipati, Applied Science Manager at the Amazon Web Services (AWS) Generative AI Innovation Center, will be a guest speaker at its EVOLVE 2024 conference, June 4-7 in Austin, TX. Exclusively designed for Asure Reseller partners, EVOLVE offers a blend of strategic insights and practical sessions aimed at helping HCM resellers drive growth and enhance efficiency.

Vidya Sagar Ravipati is a seasoned expert in large-scale distributed systems and machine learning. As the Applied Science Manager at the Generative AI Innovation Center, Vidya Sagar leads a team at the cutting edge of generative AI applications. These include personalized chatbots, hyper-personalization, call center analytics, image and video understanding, knowledge graph-augmented large language models, and personalized advertising. The AWS Generative AI Innovation Center helps customers navigate the new era of innovation with strategic guidance and delivery of impactful Generative AI solutions. Vidya's work is instrumental in helping AWS customers across various industries accelerate their AI and cloud adoption journey.

"Next-generation HCM technology is all about integrating advanced AI and machine learning capabilities to create more agile, efficient, and user-centric solutions," said Pat Goepel, Chairman and CEO of Asure. "We're thrilled to have Vidya share with our reseller partner community how our collaboration with AWS enables us to push the boundaries of what's possible in HCM, to expedite innovation and deliver exceptional user experiences."

In his session, titled "Transformative Power of Generative AI," Vidya Sagar Ravipati will delve into how Asure is leveraging the transformative potential of AI through its partnership with AWS and the Generative AI Innovation Center. This collaboration connects AWS AI and ML experts with development teams like Asure's, enabling the envisioning, design, and launch of new generative AI products and features.

Attendees will learn how these technologies can optimize workflows, enhance user experiences, and turn innovative ideas into reality faster and more effectively. This session is set to provide invaluable insights into the future of AI and its practical applications in driving digital transformation.

As a member of the AWS Application Modernization Lab, Asure collaborates directly with AWS to define and leverage an innovative development framework to enhance its HCM SaaS (Software as a Service) offerings with advancements like cloud optimization and artificial intelligence (AI) to deliver premium agility and speed to market.

About Asure
Asure Software (NASDAQ: ASUR) provides cloud-based Human Capital Management (HCM) software solutions that assist organizations of all sizes in streamlining their HCM processes. Asure's suite of HCM solutions includes HR, payroll, time and attendance, benefits administration, payroll tax management, and talent management. The company's approach to HR compliance services incorporates AI technology to enhance scalability and efficiency while prioritizing client interactions. For more information, please visit www.asuresoftware.com.

Contact Information: Patrick McKillop, Vice President, Investor Relations, 617-335-5058, patrick.mckillop@asuresoftware.com

Read more here:
EVOLVE 2024 Welcomes AWS Leader to Discuss the Transformative Power of Generative AI - GlobeNewswire

Prediction of incomplete immunization among under-five children in East Africa from recent demographic and health … – Nature.com

Miller, M. A. & Hinman, A. R. In Vaccines, 6th edn (eds Plotkin, S. A., Orenstein, W. A. & Offit, P. A.) 1413–1426 (W.B. Saunders, 2013).

Ozawa, S. et al. Return on investment from childhood immunization in low- and middle-income countries, 2011–20. Health Aff. (Project Hope) 35, 199–207. https://doi.org/10.1377/hlthaff.2015.1086 (2016).

Bloom, D. E. In Hot Topics in Infection and Immunity in Children VII (eds Curtis, N., Finn, A. & Pollard, A. J.) 1–8 (Springer, 2011).

Sim, S. Y., Watts, E., Constenla, D., Brenzel, L. & Patenaude, B. N. Return on investment from immunization against 10 pathogens in 94 low- and middle-income countries, 2011–30. Health Aff. (Project Hope) 39, 1343–1353. https://doi.org/10.1377/hlthaff.2020.00103 (2020).

Machingaidze, S., Wiysonge, C. S. & Hussey, G. D. Strengthening the expanded programme on immunization in Africa: Looking beyond 2015. PLoS Med. 10, e1001405 (2013).

Masud, T. & Navaratne, K. V. The expanded program on immunization in Pakistan: Recommendations for improving performance. (2012).

WHO/UNICEF. Progress and challenges with achieving universal immunization coverage. (2020).

WHO and UNICEF: Progress and Challenges with Achieving Universal Immunization Coverage. (WHO/UNICEF Estimates of National Immunization Coverage, J., 2019).

UNICEF. Under Five Mortality. https://data.unicef.org/topic/child-survival/under-five-mortality/ (2023).

WHO/UNICEF. Estimates of National Immunization Coverage. http://www.who.int/news-room/fact-sheets/detail/immunization-coverage (2021).

Debie, A., Lakew, A. M., Tamirat, K. S., Amare, G. & Tesema, G. A. Complete vaccination service utilization inequalities among children aged 12–23 months in Ethiopia: A multivariate decomposition analyses. Int. J. Equity Health 19, 65. https://doi.org/10.1186/s12939-020-01166-8 (2020).

UNICEF. (2020).

Faisal, S. et al. Modeling the factors associated with incomplete immunization among children. Math. Probl. Eng. 2022 (2022).

Negussie, A., Kassahun, W., Assegid, S. & Hagan, A. K. Factors associated with incomplete childhood immunization in Arbegona district, southern Ethiopia: A case-control study. BMC Public Health 16, 27. https://doi.org/10.1186/s12889-015-2678-1 (2016).

Nour, T. Y. et al. Predictors of immunization coverage among 12–23 month old children in Ethiopia: Systematic review and meta-analysis. BMC Public Health 20, 1803. https://doi.org/10.1186/s12889-020-09890-0 (2020).

Tesema, G. A., Tessema, Z. T., Tamirat, K. S. & Teshale, A. B. Complete basic childhood vaccination and associated factors among children aged 12–23 months in East Africa: A multilevel analysis of recent demographic and health surveys. BMC Public Health 20, 1837. https://doi.org/10.1186/s12889-020-09965-y (2020).

Skull, S. A., Ngeow, J. Y. Y., Hogg, G. & Biggs, B.-A. Incomplete immunity and missed vaccination opportunities in East African immigrants settling in Australia. J. Immigr. Minor. Health 10, 263–268. https://doi.org/10.1007/s10903-007-9071-9 (2008).

Adedokun, S. T., Uthman, O. A., Adekanmbi, V. T. & Wiysonge, C. S. Incomplete childhood immunization in Nigeria: A multilevel analysis of individual and contextual factors. BMC Public Health 17, 236. https://doi.org/10.1186/s12889-017-4137-7 (2017).

Russo, G. et al. Vaccine coverage and determinants of incomplete vaccination in children aged 12–23 months in Dschang, West Region, Cameroon: A cross-sectional survey during a polio outbreak. BMC Public Health 15, 630. https://doi.org/10.1186/s12889-015-2000-2 (2015).

Mohamud Hayir, T. M., Magan, M. A., Mohamed, L. M., Mohamud, M. A. & Muse, A. A. Barriers for full immunization coverage among under 5 years children in Mogadishu, Somalia. J. Fam. Med. Prim. Care 9, 2664–2669. https://doi.org/10.4103/jfmpc.jfmpc_119_20 (2020).

Kebede Kassaw, A. A. et al. Spatial distribution and machine learning prediction of sexually transmitted infections and associated factors among sexually active men and women in Ethiopia, evidence from EDHS 2016. BMC Infect. Dis. 23, 49. https://doi.org/10.1186/s12879-023-07987-6 (2023).

DHS. Data Collection. https://www.dhsprogram.com/Data/.

Etana, B. & Deressa, W. Factors associated with complete immunization coverage in children aged 12–23 months in Ambo Woreda, Central Ethiopia. BMC Public Health 12, 566. https://doi.org/10.1186/1471-2458-12-566 (2012).

Kassahun, M. B., Biks, G. A. & Teferra, A. S. Level of immunization coverage and associated factors among children aged 12–23 months in Lay Armachiho District, North Gondar Zone, Northwest Ethiopia: A community based cross sectional study. BMC Res. Notes 8, 239. https://doi.org/10.1186/s13104-015-1192-y (2015).

Sheikh, N. et al. Coverage, timelines, and determinants of incomplete immunization in Bangladesh. Trop. Med. Infect. Dis. 3, 72 (2018).

Bugvi, A. S. et al. Factors associated with non-utilization of child immunization in Pakistan: Evidence from the Demographic and Health Survey 2006–07. BMC Public Health 14, 232. https://doi.org/10.1186/1471-2458-14-232 (2014).

Tadesse, H., Deribew, A. & Woldie, M. Predictors of defaulting from completion of child immunization in south Ethiopia, May 2008 – A case control study. BMC Public Health 9, 150. https://doi.org/10.1186/1471-2458-9-150 (2009).

Jani, J. V., De Schacht, C., Jani, I. V. & Bjune, G. Risk factors for incomplete vaccination and missed opportunity for immunization in rural Mozambique. BMC Public Health 8, 161. https://doi.org/10.1186/1471-2458-8-161 (2008).

De, P. & Bhattacharya, B. N. Determinants of child immunization in four less-developed states of North India. J. Child Health Care 6, 34–50 (2002).

Rahman, M. & Obaida-Nasrin, S. Factors affecting acceptance of complete immunization coverage of children under five years in rural Bangladesh. Salud Pública de México 52, 134–140 (2010).

Atnafu, A. et al. Prevalence and determinants of incomplete or not at all vaccination among children aged 12–36 months in Dabat and Gondar districts, northwest of Ethiopia: Findings from the primary health care project. BMJ Open 10, e041163. https://doi.org/10.1136/bmjopen-2020-041163 (2020).

Melaku, M. S., Nigatu, A. M. & Mewosha, W. Z. Spatial distribution of incomplete immunization among under-five children in Ethiopia: Evidence from 2005, 2011, and 2016 Ethiopian Demographic and Health Survey data. BMC Public Health 20, 1362. https://doi.org/10.1186/s12889-020-09461-3 (2020).

Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).

Chen, T. & Guestrin, C. XGBoost: A Scalable Tree Boosting System. (2016).

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2, 56–67. https://doi.org/10.1038/s42256-019-0138-9 (2020).

Rawat, S., Rawat, A., Kumar, D. & Sabitha, A. S. Application of machine learning and data visualization techniques for decision support in the insurance sector. Int. J. Inf. Manag. Data Insights 1, 100012 (2021).

Guo, Y. The 7 steps of machine learning. towardsdatascience.com (2017).

Brownlee, J. Data Preparation for Machine Learning: Data Cleaning, Feature Selection, and Data Transforms in Python (Machine Learning Mastery, 2020).

Yu, L. & Liu, H. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), 856–863.

Bekele, W. T. Machine learning algorithms for predicting low birth weight in Ethiopia. BMC Med. Inform. Decis. Mak. 22, 232. https://doi.org/10.1186/s12911-022-01981-9 (2022).

Bitew, F. H., Sparks, C. S. & Nyarko, S. H. Machine learning algorithms for predicting undernutrition among under-five children in Ethiopia. Public Health Nutr. 1–12 (2021).

Chilyabanyama, O. N. et al. Performance of machine learning classifiers in classifying stunting among under-five children in Zambia. Children (Basel, Switzerland). https://doi.org/10.3390/children9071082 (2022).

Emmanuel, M. Application of Machine Learning Methods in Analysis of Infant Mortality in Rwanda: Analysis of Rwanda Demographic Health Survey 2014–15 Dataset (University of Rwanda, 2021).

Fenta, H. M., Zewotir, T. & Muluneh, E. K. A machine learning classifier approach for identifying the determinants of under-five child undernutrition in Ethiopian administrative zones. BMC Med. Inform. Decis. Mak. 21, 1–12 (2021).

Kananura, R. M. Machine learning predictive modelling for identification of predictors of acute respiratory infection and diarrhoea in Uganda's rural and urban settings. PLoS Glob. Public Health 2, e0000430. https://doi.org/10.1371/journal.pgph.0000430 (2022).

Saroj, R. K., Yadav, P. K., Singh, R. & Chilyabanyama, O. N. Machine learning algorithms for understanding the determinants of under-five mortality. BioData Min. 15, 20. https://doi.org/10.1186/s13040-022-00308-8 (2022).

Tesfaye, B., Atique, S., Azim, T. & Kebede, M. M. Predicting skilled delivery service use in Ethiopia: Dual application of logistic regression and machine learning algorithms. BMC Med. Inform. Decis. Mak. 19, 209. https://doi.org/10.1186/s12911-019-0942-5 (2019).

Bekkar, M., Djemaa, H. K. & Alitouche, T. A. Evaluation measures for models assessment over imbalanced data sets. J. Inf. Eng. Appl. 3, 15–33 (2013).

Yang, L. & Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 415, 295–316. https://doi.org/10.1016/j.neucom.2020.07.061 (2020).

Kebede, S. D. et al. Prediction of contraceptive discontinuation among reproductive-age women in Ethiopia using Ethiopian Demographic and Health Survey 2016 Dataset: A machine learning approach. BMC Med. Inform. Decis. Mak. 23, 9. https://doi.org/10.1186/s12911-023-02102-w (2023).

Wang, K. et al. Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and SHAP. Comput. Biol. Med. 137, 104813. https://doi.org/10.1016/j.compbiomed.2021.104813 (2021).

Lundberg, S. M. & Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 30 (2017).

Li, Q., Zhang, Y., Kang, H., Xin, Y. & Shi, C. Mining association rules between stroke risk factors based on the Apriori algorithm. Technol. Health Care 25, 197–205. https://doi.org/10.3233/thc-171322 (2017).

Chandir, S. et al. Using predictive analytics to identify children at high risk of defaulting from a routine immunization program: Feasibility study. JMIR Public Health Surveill. 4, e9681 (2018).

Mutua, M. K., Kimani-Murage, E. & Ettarh, R. R. Childhood vaccination in informal urban settlements in Nairobi, Kenya: Who gets vaccinated? BMC Public Health 11, 6. https://doi.org/10.1186/1471-2458-11-6 (2011).

Landoh, D. E. et al. Predictors of incomplete immunization coverage among one to five years old children in Togo. BMC Public Health 16, 968. https://doi.org/10.1186/s12889-016-3625-5 (2016).

Pavlopoulou, I. D., Michail, K. A., Samoli, E., Tsiftis, G. & Tsoumakas, K. Immunization coverage and predictive factors for complete and age-appropriate vaccination among preschoolers in Athens, Greece: A cross-sectional study. BMC Public Health 13, 908. https://doi.org/10.1186/1471-2458-13-908 (2013).

Zewdie, A., Letebo, M. & Mekonnen, T. Reasons for defaulting from childhood immunization program: A qualitative study from Hadiya zone, Southern Ethiopia. BMC Public Health 16, 1240. https://doi.org/10.1186/s12889-016-3904-1 (2016).

Tauil, M. D. C., Sato, A. P. S. & Waldman, E. A. Factors associated with incomplete or delayed vaccination across countries: A systematic review. Vaccine 34, 2635–2643. https://doi.org/10.1016/j.vaccine.2016.04.016 (2016).

Read this article:
Prediction of incomplete immunization among under-five children in East Africa from recent demographic and health ... - Nature.com

REMINDER – Vector Institute affiliated AI Trust and Safety Experts available for commentary related to the AI Global … – GlobeNewswire

TORONTO, May 21, 2024 (GLOBE NEWSWIRE) -- The second AI Global Forum will take place in South Korea next week, gathering government officials, corporate leaders, civil societies, and academics from around the world to discuss the future of AI.

The Vector Institute is affiliated with a significant number of world-leading researchers working on AI Trust and Safety who are available to provide comment in the lead-up to and during the AI Global Forum.

In addition, Vector's President and CEO, Tony Gaffney, will participate in person at the Forum in South Korea on Wednesday, May 22, 2024, and will be available for comment while on site.

Media availability:

Experts will be available for commentary on AI Trust and Safety in the lead-up to and during the AI Global Forum.

Vector Institute Affiliated AI Trust and Safety Experts:

Jeff Clune

Jeff focuses on deep learning, including deep reinforcement learning. His research also addresses AI Safety, including regulatory recommendations and improving the interpretability of agents.

Roger Grosse

Roger's research examines training dynamics in deep learning. He is applying his expertise to AI alignment, to ensure the progress of AI is aligned with human values. Some of his recent work has focused on better understanding how large language models work in order to head off the potential for risk in their deployment.

Gillian Hadfield

Gillian's current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

Sheila McIlraith

Sheila's research addresses AI sequential decision making that is human compatible, with a focus on safety, alignment, and fairness. Recent work looked at the impact of ethics education on computer science students.

Rahul G. Krishnan

Rahul's research focuses on building robust and generalizable machine learning algorithms to advance computational medicine. His recent work has developed new algorithms for causal decision making, built risk scores for patients on the transplant waitlist, and created automated guardrails for predictive models deployed in high-risk settings.

Xiaoxiao Li

Xiaoxiao specializes in the interdisciplinary field of deep learning and biomedical data analysis. Her primary mission is to make AI more reliable, especially when it comes to sensitive areas like healthcare.

Nicolas Papernot

Nicolas's work focuses on privacy-preserving techniques in deep learning, and advancing more secure and trusted machine learning models.

About the Vector Institute
Launched in 2017, the Vector Institute works with industry, institutions, startups, and governments to build AI talent and drive research excellence in AI to develop and sustain AI-based innovation to foster economic growth and improve the lives of Canadians. Vector aims to advance AI research, increase adoption in industry and health through programs for talent, commercialization, and application, and lead Canada towards the responsible use of AI. Programs for industry, led by top AI practitioners, offer foundations for applications in products and processes, company-specific guidance, training for professionals, and connections to workforce-ready talent. Vector is funded by the Province of Ontario, the Government of Canada through the Pan-Canadian AI Strategy, and leading industry sponsors from across multiple sectors of Canadian Industry.

This availability is for media only.

For more information or to speak with an AI expert, contact: media@vectorinstitute.ai

Read the original here:
REMINDER - Vector Institute affiliated AI Trust and Safety Experts available for commentary related to the AI Global ... - GlobeNewswire

DeepDive: estimating global biodiversity patterns through time using deep learning – Nature.com

Sepkoski, J. J. A factor analytic description of the phanerozoic marine fossil record. Paleobiology 7, 36–53 (1981).

Quental, T. B. & Marshall, C. R. Diversity dynamics: molecular phylogenies need the fossil record. Trends Ecol. Evol. 25, 434–441 (2010).

Ezard, T. H., Aze, T., Pearson, P. N. & Purvis, A. Interplay between changing climate and species ecology drives macroevolutionary dynamics. Science 332, 349–351 (2011).

Benton, M. J. Exploring macroevolution using modern and fossil data. Proc. R. Soc. B: Biol. Sci. 282, 20150569 (2015).

Niklas, K. J. Measuring the tempo of plant death and birth. N. Phytol. 207, 254–256 (2015).

Rabosky, D. L. & Hurlbert, A. H. Species richness at continental scales is dominated by ecological limits. Am. Nat. 185, 572–583 (2015).

Harmon, L. J. & Harrison, S. Species diversity is dynamic and unbounded at local and continental scales. Am. Nat. 185, 584–593 (2015).

Sepkoski Jr, J. Phanerozoic overview of mass extinction. In Patterns and Processes in the History of Life: Report of the Dahlem Workshop on Patterns and Processes in the History of Life Berlin 1985, June 16–21, 277–295 (Springer, 1986).

Benton, M. J. & Emerson, B. C. How did life become so diverse? The dynamics of diversification according to the fossil record and molecular phylogenetics. Palaeontology 50, 23–40 (2007).

Alroy, J. Geographical, environmental and intrinsic biotic controls on phanerozoic marine diversification. Palaeontology 53, 1211–1235 (2010).

Weber, M. G., Wagner, C. E., Best, R. J., Harmon, L. J. & Matthews, B. Evolution in a community context: on integrating ecological interactions and macroevolution. Trends Ecol. Evol. 32, 291–304 (2017).

Niklas, K. J., Tiffney, B. H. & Knoll, A. H. Patterns in vascular land plant diversification. Nature 303, 614–616 (1983).

Foote, M., Miller, A., Raup, D. & Stanley, S. Principles of Paleontology (W. H. Freeman, 2007). https://books.google.ch/books?id=8TsDC2OOvbYC

Close, R., Benson, R., Saupe, E., Clapham, M. & Butler, R. The spatial structure of phanerozoic marine animal diversity. Science 368, 420–424 (2020).

Raja, N. B. et al. Colonial history and global economics distort our understanding of deep-time biodiversity. Nat. Ecol. Evol. 6, 145–154 (2022).

Smith, A. B. & McGowan, A. J. The ties linking rock and fossil records and why they are important for palaeobiodiversity studies. Geol. Soc. Lond. Spec. Publ. 358, 1–7 (2011).

Benson, R., Butler, R., Close, R., Saupe, E. & Rabosky, D. Biodiversity across space and time in the fossil record. Curr. Biol. 31, R1225–R1236 (2021).

Smith, A. B. Large-scale heterogeneity of the fossil record: implications for phanerozoic biodiversity studies. Philos. Trans. R. Soc. Lond. Ser. B: Biol. Sci. 356, 351–367 (2001).

Alroy, J. Fair sampling of taxonomic richness and unbiased estimation of origination and extinction rates. Paleontol. Soc. Pap. 16, 55–80 (2010).

Chao, A. & Jost, L. Coverage-based rarefaction and extrapolation: standardizing samples by completeness rather than size. Ecology 93, 2533–2547 (2012).

Raup, D. Taxonomic diversity estimation using rarefaction. Paleobiology 1, 333–342 (1975).

Alroy, J. et al. Effects of sampling standardization on estimates of phanerozoic marine diversification. Proc. Natl Acad. Sci. 98, 6261–6266 (2001).

Starrfelt, J. & Liow, L. H. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model. Philos. Trans. R. Soc. B: Biol. Sci. 371, 20150219 (2016).

Flannery-Sutherland, J. T., Silvestro, D. & Benton, M. J. Global diversity dynamics in the fossil record are regionally heterogeneous. Nat. Commun. 13, 1–17 (2022).

Chao, A. Estimating the population size for capture-recapture data with unequal catchability. Biometrics 43, 783–791 (1987).

Alroy, J. Limits to species richness in terrestrial communities. Ecol. Lett. 21, 1781–1789 (2018).

Alroy, J. On four measures of taxonomic richness. Paleobiology 46, 158–175 (2020).

Close, R., Evers, S., Alroy, J. & Butler, R. How should we estimate diversity in the fossil record? Testing richness estimators using sampling-standardised discovery curves. Methods Ecol. Evol. 9, 1386–1400 (2018).

Close, R. et al. The apparent exponential radiation of phanerozoic land vertebrates is an artefact of spatial sampling biases. Proc. R. Soc. B 287, 20200372 (2020).

Antell, G. T., Benson, R. B. & Saupe, E. E. Spatial standardization of taxon occurrence data – a call to action. Paleobiology https://doi.org/10.1017/pab.2023.36 (2024).

Dunne, E. M., Thompson, S. E., Butler, R. J., Rosindell, J. & Close, R. A. Mechanistic neutral models show that sampling biases drive the apparent explosion of early tetrapod diversity. Nat. Ecol. Evol. 7, 1480–1489 (2023).

Hauffe, T., Pires, M. M., Quental, T. B., Wilke, T. & Silvestro, D. A quantitative framework to infer the effect of traits, diversity and environment on dispersal and extinction rates from fossils. Methods Ecol. Evol. 13, 1201–1213 (2022).

Cermeño, P. et al. Post-extinction recovery of the phanerozoic oceans and biodiversity hotspots. Nature 607, 507–511 (2022).

Hagen, O. et al. gen3sis: a general engine for eco-evolutionary simulations of the processes that shape earth's biodiversity. PLoS Biol. 19, e3001340 (2021).

Hagen, O., Skeels, A., Onstein, R. E., Jetz, W. & Pellissier, L. Earth history events shaped the evolution of uneven biodiversity across tropical moist forests. Proc. Natl Acad. Sci. 118, e2026347118 (2021).

Vilhena, D. A. & Smith, A. B. Spatial bias in the marine fossil record. PLoS One 8, e74470 (2013).

Raup, D. M. Taxonomic diversity during the phanerozoic: the increase in the number of marine species since the paleozoic may be more apparent than real. Science 177, 1065–1071 (1972).

Raup, D. M. Species diversity in the phanerozoic: a tabulation. Paleobiology 2, 279–288 (1976).

Foote, M., Crampton, J. S., Beu, A. G. & Nelson, C. S. Aragonite bias, and lack of bias, in the fossil record: lithological, environmental, and ecological controls. Paleobiology 41, 245–265 (2015).

Silvestro, D., Salamin, N. & Schnitzler, J. PyRate: a new program to estimate speciation and extinction rates from incomplete fossil data. Methods Ecol. Evol. 5, 1126–1131 (2014).

Cantalapiedra, J. L. et al. The rise and fall of proboscidean ecological diversity. Nat. Ecol. Evol. 5, 1266–1272 (2021).

Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533–536 (1986).

Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

Gers, F., Schmidhuber, J. & Cummins, F. Learning to forget: continual prediction with LSTM. Neural Comput. 12, 2451–2471 (2000).

Gal, Y. & Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. Adv. Neural Inform. Process. Syst. 29, 19 (2016).

Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning 48, 1050–1059 (PMLR, 2016).

Silvestro, D. & Andermann, T. Prior choice affects ability of Bayesian neural networks to identify unknowns. arXiv preprint arXiv:2005.04987 (2020).

Brusatte, S. L. et al. The extinction of the dinosaurs. Biol. Rev. 90, 628–642 (2015).

Dunne, E. M., Farnsworth, A., Greene, S. E., Lunt, D. J. & Butler, R. J. Climatic drivers of latitudinal variation in late triassic tetrapod diversity. Palaeontology 64, 101–117 (2021).

De Celis, A., Narváez, I., Arcucci, A. & Ortega, F. Lagerstätte effect drives notosuchian palaeodiversity (Crocodyliformes, Notosuchia). Historical Biol. 33, 3031–3040 (2021).

Cleary, T. J., Benson, R. B., Holroyd, P. A. & Barrett, P. M. Tracing the patterns of non-marine turtle richness from the triassic to the palaeogene: from origin to global spread. Palaeontology 63, 753–774 (2020).

Silvestro, D. et al. Fossil data support a pre-Cretaceous origin of flowering plants. Nat. Ecol. Evol. 5, 449–457 (2021).

Leuenberger, C. & Wegmann, D. Bayesian computation and model selection without likelihoods. Genetics 184, 243–252 (2010).

Marjoram, P., Molitor, J., Plagnol, V. & Tavaré, S. Markov chain Monte Carlo without likelihoods. Proc. Natl Acad. Sci. 100, 15324–15328 (2003).

Read this article:
DeepDive: estimating global biodiversity patterns through time using deep learning - Nature.com


Prediction of adolescent weight status by machine learning: a population-based study – BMC Public Health

Design and setting

We conducted a retrospective cohort study of P4 students from the 1995/1996 to 2015/2016 academic cohorts, who were followed until Secondary 6 (S6, equivalent to Grade 12 in the US). P4 students are cognitively competent to provide self-reported measurements [22]. Additionally, we chose a cohort of P6 students from the 1995/1996 to 2013/2014 academic cohorts to predict weight status after P6, the last year of primary education in Hong Kong before students are promoted to the secondary level. Students who attended in at least two years and had complete health measurement records were included. Data were obtained from the Student Health Service (SHS) of the Department of Health in Hong Kong, which has provided voluntary territory-wide annual health assessments for primary and secondary students since 1995/1996. The health assessment questionnaire changed in 2015/2016 [23]; therefore, we included P4 students from 1995/1996 to 2014/2015, allowing at least one year of follow-up for prediction. Further details of the health assessment scheme can be found elsewhere [24, 25].

Weight (to the nearest 0.1 kg) and height (to the nearest 0.1 cm) were measured annually at the SHS by well-trained healthcare workers or nurses according to the study protocol. Demographics included sex, age, and family socioeconomic level. Family socioeconomic status was indicated by parental educational level, parental occupation, and type of housing [26].

Dietary habits were assessed by breakfast habit, sweetness preference during the past 7 days, junk food intake, fruit/vegetable intake, and milk consumption. Physical activity behaviors were assessed by the frequency of aerobic exercise each week, the hours of aerobic exercise each week, and the daily hours of TV viewing. All of these predictors in the structured questionnaires had four response options representing different degrees of frequency or duration. Breakfast habit was assessed by the item "I usually have breakfast at?", for which we considered three response categories: (i) home, representing frequently eating at home; (ii) rarely at home, after combining the original categories of "fast food stall/cafeteria/restaurant" and "some other places"; and (iii) no breakfast at all, representing never eating at home. This item can thus be considered an assessment of the frequency of eating breakfast at home.

Psychological development was assessed using the 60-item self-reported Culture Free Self-Esteem Inventory for Children Questionnaire (CFSEI-2), which has been validated in Hong Kong children and adolescents [27, 28]. The Self-Esteem Inventory (SEI) comprises a total score and four domain scores: (i) general self-esteem, denoting children's overall perception of themselves, for which a score ≤ 7 was considered very low; (ii) social self-esteem, denoting children's perception of their peer relationships; (iii) school-related self-esteem, denoting children's perception of their ability to achieve academic success; and (iv) parent-related self-esteem, denoting children's perception of their family's views of them. A score ≤ 2 on any of these three subscales was considered very low [27]. Children with a total score ≤ 19 or a very-low score in any domain were considered to have low self-esteem. A lie scale score was also obtained; a score ≥ 2 indicates that the corresponding child's self-reported assessment is unreliable [27].

Potential behavioral problems of children and adolescents were assessed using the 4-item Rutter Behavior Questionnaire (RBQ), which has been validated in Hong Kong children [29]. It inquired about hyperactivity, conduct, and emotional disturbances and was completed by parents. An RBQ total score ≥ 19 indicated a potential behavior problem [30]. In total, 25 predictors were considered as input variables in developing the multiclass prediction models.

Predicted weight status was classified as normal, obese, overweight, or underweight, based on the body mass index (BMI, expressed in kg/m2) at the next measurement year and the age- and sex-specific BMI references of the International Obesity Task Force (IOTF) standards.
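As a sketch of this labelling step, the helper below maps a BMI value to one of the four statuses. The cutoffs shown are illustrative placeholders for a single age/sex stratum; the actual IOTF references are age- and sex-specific lookup tables that must be consulted per student.

```python
# Hypothetical sketch of BMI-to-status labelling. The cutoff values
# below are NOT the published IOTF references, which vary by age and
# sex; they only illustrate the ordering of the four categories.
def classify_weight_status(bmi, cutoffs=(15.0, 21.0, 25.0)):
    """Return 'underweight', 'normal', 'overweight', or 'obese'
    given a BMI (kg/m^2) and (under, over, obese) cutoffs."""
    under, over, obese = cutoffs
    if bmi < under:
        return "underweight"
    if bmi < over:
        return "normal"
    if bmi < obese:
        return "overweight"
    return "obese"
```

In the study, `cutoffs` would be looked up from the IOTF table for each student's age and sex before labelling.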

Children with a lie scale score ≥ 2 were considered unreliable reporters and removed. For type of housing and parental occupation, we ordered the response categories by socioeconomic level using the median monthly domestic household income for each type of housing and occupation, obtained from the Hong Kong Census and Statistics Department. Sex, a categorical variable, was one-hot encoded. The responses to the dietary and physical activity behavioral measurements were treated as ordinal variables, and the other predictors were treated as continuous variables. Missing data on socioeconomic status were filled in from the information reported in the student's other assessment years. The other measurements had less than 5% missing data, which was considered inconsequential to the validity of the model development [31]. We applied a k-nearest-neighbour imputation algorithm to the training and test sets separately to facilitate the use of ML methods that require complete data [32].
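A minimal sketch of this preprocessing pipeline, assuming scikit-learn and pandas; the column names and toy values are hypothetical, not the study's data:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split

# Toy stand-in for the cohort data (columns are illustrative only).
df = pd.DataFrame({
    "sex": ["M", "F", "F", "M", "F", "M"],
    "breakfast_freq": [1, 2, np.nan, 3, 1, 2],   # ordinal response codes
    "bmi": [16.2, 18.1, 17.4, np.nan, 19.0, 15.8],
    "status": [0, 0, 1, 0, 2, 0],
})

# One-hot encode sex; keep ordinal/continuous predictors as numbers.
X = pd.get_dummies(df.drop(columns="status"), columns=["sex"], dtype=float)
y = df["status"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Impute the training and test sets separately, as in the study,
# so no information leaks across the split.
X_tr_imp = KNNImputer(n_neighbors=2).fit_transform(X_tr)
X_te_imp = KNNImputer(n_neighbors=2).fit_transform(X_te)
```

Fitting a separate imputer on each partition mirrors the study's choice to impute the two sets separately rather than sharing neighbour information between them.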

Categorical data were expressed as numbers with percentages for each weight status and compared using the chi-square test. Numerical data were presented as the mean ± standard deviation (SD).

P4 students were randomly divided into a training set and a test set at an 80:20 ratio. Multiclass prediction models were developed on the P4 training data to predict weight status in each subsequent year until S6, creating eight prediction windows. We used the same procedure to develop prediction models for the P6 training cohort, creating six prediction windows until S6.

The weight status in our cohorts was imbalanced, with the underweight, overweight, and obese categories being underrepresented. This imbalance could have biased model performance: the models might have predicted the majority weight status accurately while performing poorly on the minority statuses. To address this issue, we applied the Synthetic Minority Oversampling Technique (SMOTE) to the training sets [33]. SMOTE is a widely used technique that creates synthetic samples for the minority categories by generating new instances similar to the original underrepresented ones.

We attempted several ML approaches, including Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), and eXtreme Gradient Boosting (XGBoost), as well as the LG approach for comparison. The short- and long-term prediction abilities of the models were compared by calculating the correct classification rate and the overall accuracy on the test set, together with the micro- and macro-averaged area under the curve (AUC). Receiver operating characteristic (ROC) curves for each weight status on the test set were also obtained. The AUC, precision, recall, and F1-score were calculated to evaluate prediction accuracy and to assess the ability to predict an abnormal weight status. Recall and precision are conceptually equivalent to sensitivity and positive predictive value, respectively, and the F1-score is the harmonic mean of precision and recall [34]. For predicting a specific weight status, all accuracy measures range from 0 to 1, with higher values indicating higher accuracy.
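The study applies SMOTE through a library implementation; the snippet below is only a hand-rolled sketch of SMOTE's core idea, interpolating between a minority sample and one of its k nearest minority neighbours. `smote_oversample` and the toy data are mine, not the paper's.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between each chosen minority point and one of its k nearest
    minority neighbours (the core of SMOTE)."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every minority sample.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: grow 3 samples by 5 synthetic ones.
X_minority = np.array([[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]])
X_new = smote_oversample(X_minority, n_new=5, k=2, rng=0)
```

Because each synthetic point lies on a segment between two real minority points, oversampling never leaves the region the minority class already occupies.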

To examine the importance of each predictor at both the population and individual levels, we used Shapley Additive Explanations (SHAP) on the best-performing prediction models to obtain each predictor's contribution for a prediction window [35]. A SHAP value is assigned to each predictor, quantifying its contribution by comparing model predictions with and without that predictor. The Shapley values from all prediction windows in each cohort were used to compare the overall importance of the predictors by weight status. Furthermore, to better understand individual-level prediction of weight status, we selected two students as examples and used SHAP waterfall plots to illustrate the importance of different predictors for each student. Figure 1 shows the workflow used for this study. All prediction models were developed and compared using Python (version 3.10) with Scikit-Learn.
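The "with vs. without the predictor" comparison can be made concrete by computing exact Shapley values for a toy three-feature model. The model below is invented for illustration; the weighting follows the standard Shapley formula, which SHAP libraries approximate efficiently for real models.

```python
from itertools import combinations
from math import factorial

def model(present):
    # Toy score over the set of "present" features {0, 1, 2}:
    # additive effects for features 0 and 1, plus a 0-2 interaction.
    score = 0.0
    if 0 in present:
        score += 2.0
    if 1 in present:
        score += 1.0
    if 0 in present and 2 in present:
        score += 0.5
    return score

def shapley(n, f):
    """Exact Shapley values: average marginal contribution of each
    feature over all orderings, via the subset-weight formula."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi

phi = shapley(3, model)
# Efficiency property: the values sum to f(all) - f(none) = 3.5.
```

This brute-force sum is exponential in the number of features, which is why practical SHAP implementations rely on model-specific approximations.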

Graphical illustration of the workflow used for this study

See original here:
Prediction of adolescent weight status by machine learning: a population-based study - BMC Public Health


Crushing It: Autonomous AI Robot Creates a Shock-Absorbing Shape No Human Ever Could – SciTechDaily

Boston University's robot MAMA BEAR autonomously creates and tests energy-absorbing structures, recently achieving a record 75% efficiency, with applications ranging from shipping materials to military gear. Credit: Devin Hahn, Boston University Productions

An autonomous robot created a shock-absorbing shape no human ever could. Learn what it means for designing safer helmets, packaging, car bumpers, and more.

Within the confines of a lab in Boston University's College of Engineering, a robot arm drops small plastic objects into a box precisely placed on the floor to catch them as they fall. One by one, these tiny objects, feather-light cylindrical pieces no bigger than an inch tall, fill the box. Some are blue, others red, purple, green, or black.

Each of these structures is the result of an experiment in robot autonomy. Operating independently and adapting through ongoing learning, the robot is on a quest to create the most efficient energy-absorbing shape to ever exist.

To do this, the robot creates a small plastic structure with a 3D printer, records its shape and size, moves it to a flat metal surface, and then crushes it with a pressure equivalent to an adult Arabian horse standing on a quarter. The robot then measures how much energy the structure absorbed and how its shape changed after being squashed, and records every detail in a vast database.
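The article does not state the exact efficiency metric. One common convention is the energy absorbed (the area under the force-displacement curve) divided by the peak force times the total displacement, so that an ideal absorber crushing at constant peak force scores 1.0. The toy curve below is assumed purely for illustration:

```python
import numpy as np

# Toy force-displacement data for a crush test (not real measurements):
# force rises to a ~50 N plateau over 10 mm of displacement.
displacement = np.linspace(0.0, 10.0, 101)      # mm
force = 50.0 * (1.0 - np.exp(-displacement))    # N

# Energy absorbed = area under the curve (trapezoid rule).
energy = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement))

# Assumed efficiency metric: absorbed energy relative to an ideal
# absorber at the same peak force and displacement.
efficiency = energy / (force.max() * displacement[-1])
```

Under this assumed definition, a curve that reaches its plateau quickly and stays there, like the toy one above, scores close to 1.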

Then, it drops the crushed object into the box and wipes the metal plate clean, ready to print and test the next piece. That piece will be ever so slightly different from its predecessor, its design and dimensions tweaked by the robot's computer algorithm based on all past experiments, the basis of what is called Bayesian optimization. Experiment after experiment, the 3D structures get better at absorbing the impact of being crushed.
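The propose-print-crush-update loop can be sketched as a minimal Bayesian optimization: fit a surrogate model (here a Gaussian process) to all past experiments, pick the next design with an acquisition rule (here upper confidence bound), measure it, and repeat. The 1-D toy objective stands in for the real crush test; none of these modeling choices are claimed to be the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Hidden "crush test": a toy efficiency landscape peaking at x = 0.7.
    return np.exp(-(x - 0.7) ** 2 / 0.02)

rng = np.random.default_rng(0)
X = rng.random((3, 1))                 # three initial random experiments
y = objective(X).ravel()
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

for _ in range(15):                    # propose-measure-update loop
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sigma             # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]      # most promising untried design
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

best = X[np.argmax(y), 0]              # converges toward the optimum at 0.7
```

The UCB term trades off exploiting designs predicted to score well (high mean) against exploring uncertain regions (high standard deviation), which is what lets the loop improve steadily without exhaustively testing every shape.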


These relentless iterations are possible because of the work of Keith Brown, an ENG associate professor of mechanical engineering, and his team in the KABlab. The robot, named MAMA BEAR (short for its lengthy full title, Mechanics of Additively Manufactured Architectures Bayesian Experimental Autonomous Researcher), has evolved since it was first conceptualized by Brown and his lab in 2018. By 2021, the lab had set the machine on its quest to make a shape that absorbs the most energy, a property known as mechanical energy absorption efficiency. The current iteration has run continuously for over three years, filling dozens of boxes with more than 25,000 3D-printed structures.

Why so many shapes? There are countless uses for something that can efficiently absorb energy: say, cushioning for delicate electronics being shipped across the world, or knee pads and wrist guards for athletes. "You could draw from this library of data to make better bumpers in a car, or packaging equipment, for example," Brown says.

To work ideally, the structures have to strike the perfect balance: they can't be so strong that they damage whatever they're supposed to protect, but they should be strong enough to absorb impact. Before MAMA BEAR, the best structure anyone had ever observed was about 71 percent efficient at absorbing energy, says Brown. But on a chilly January afternoon in 2023, Brown's lab watched their robot hit 75 percent efficiency, breaking the known record. The results were published on May 21 in the journal Nature Communications.

"When we started out, we didn't know if there was going to be this record-breaking shape," says Kelsey Snapp (ENG'25), a PhD student in Brown's lab who oversees MAMA BEAR. "Slowly but surely, we kept inching up, and broke through."

The record-breaking structure looks like nothing the researchers would have expected: it has four points, shaped like thin flower petals, and is taller and narrower than the early designs.

"We're excited that there's so much mechanical data here, that we're using this to learn lessons about design more generally," Brown says.

Their extensive data is already getting its first real-life application, helping to inform the design of new helmet padding for US Army soldiers. Brown, Snapp, and project collaborator Emily Whiting, a BU College of Arts & Sciences associate professor of computer science, worked with the US Army and went through field testing to ensure that helmets using their patent-pending padding are comfortable and provide sufficient protection from impact. The 3D structure used for the padding is different from the record-breaking piece, with a softer center and a shorter stature to help with comfort.

MAMA BEAR is not Brown's only autonomous research robot. His lab has other BEAR robots performing different tasks, like the nano BEAR, which studies the way materials behave at the molecular scale using a technology called atomic force microscopy. Brown has also been working with Jörg Werner, an ENG assistant professor of mechanical engineering, to develop another system, known as the PANDA (short for Polymer Analysis and Discovery Array) BEAR, to test thousands of thin polymer materials and find one that works best in a battery.

"They're all robots that do research," Brown says. "The philosophy is that they're using machine learning together with automation to help us do research much faster."

"Not just faster," adds Snapp. "You can do things you couldn't normally do. We can reach a structure or goal that we wouldn't have been able to achieve otherwise, because it would have been too expensive and time-consuming." He has worked closely with MAMA BEAR since the experiments began in 2021, and gave the robot its ability to see, known as machine vision, and to clean its own test plate.

The KABlab is hoping to further demonstrate the importance of autonomous research. Brown wants to keep collaborating with scientists in various fields who need to test incredibly large numbers of structures and solutions. "Even though they already broke a record, we have no ability to know if we've reached the maximum efficiency," Brown says, meaning they could possibly break it again. So MAMA BEAR will keep on running, pushing boundaries further, while Brown and his team see what other applications the database can be useful for. They're also exploring how the more than 25,000 crushed pieces can be unwound and reloaded into the 3D printers so the material can be recycled for more experiments.

"We're going to keep studying this system, because mechanical efficiency, like so many other material properties, is only accurately measured by experiment," Brown says, "and using self-driving labs helps us pick the best experiments and perform them as fast as possible."

Reference: "Superlative mechanical energy absorbing efficiency discovered through self-driving lab-human partnership," 21 May 2024, Nature Communications. DOI: 10.1038/s41467-024-48534-4

Go here to see the original:
Crushing It: Autonomous AI Robot Creates a Shock-Absorbing Shape No Human Ever Could - SciTechDaily
