
Generative AI is hot, but predictive AI remains the workhorse – CIO

Since the release of ChatGPT in November 2022, generative AI (genAI) has become a high priority for enterprise CEOs and boards of directors. A PwC report, for instance, found that 84% of CIOs expect to use genAI to support a new business model in 2024. Certainly, there's no doubt that genAI is a truly transformative technology. But it's also important to remember that it is just one flavor of AI, and it's not the best technology to power every use case.

The concept of what qualifies as AI changes over time. Fifty years ago, a tic-tac-toe-playing program would have been thought of as a type of AI; today, not so much. But generally speaking, the history of AI falls into three different categories.

"We work with a lot of chief data and artificial intelligence officers (CAIOs)," said Thomas Robinson, COO at Domino, "and, at most, they see generative AI accounting for 15% of use cases and models. Predictive AI is still the workhorse in model-driven businesses, and future models are likely to combine predictive and generative AI."

In fact, there are already use cases where predictive and generative AI work in concert, such as analyzing radiology images to create reports on preliminary diagnoses or mining stock data to generate reports on which are most likely to increase in the near future. For CIOs and CTOs, this means that organizations will need a common platform for developing complete AI.

Complete AI development and deployment doesn't treat each of these types of AI as a separate animal, each with its own stack. True, genAI may require a bit more power in the way of some GPUs, and networking may need to be beefed up for better performance in some areas of the environment, but unless an organization is running a truly gigantic genAI deployment on the scale of Meta or Microsoft, building a new stack from the ground up isn't required.

Processes for governance and testing also don't need to be completely reinvented. For example, mortgage risk models powered by predictive AI require rigorous testing, validation, and constant monitoring, just as do genAI's large language models (LLMs). Again, there are differences, such as genAI's well-known problem with hallucinations. But generally, the processes for managing genAI risk will be similar to those of predictive AI.

Domino's Enterprise AI platform is trusted by one out of five Fortune 100 companies to manage AI tools, data, training, and deployment. With this platform, AI and MLOps teams can manage complete AI, both predictive and generative, from a single control center. By unifying MLOps under a single platform, organizations can enable complete AI development, deployment, and management.

Learn how to reap the rewards and manage the risk of your genAI projects with Domino's free whitepaper on responsible genAI.

See the article here:
Generative AI is hot, but predictive AI remains the workhorse - CIO

Read More..

Machine Learning for Automation & Quality Inspection – Quality Magazine

Artificial intelligence, or AI, is a hot topic these days and an oft-used term in nearly every industry. Automation, quality inspection, and manufacturing are no exception; rather, they are a major proving ground for the development, experimentation, and implementation of AI for industrial applications. In many instances, however, AI is a bit of a catchall term, often referring to the application of some level of computer-assisted decision-making. When we unpack it, the more relevant pieces people often speak of are machine learning, deep learning, and computer vision: individually different but collectively referred to as AI. The discussion stands to benefit from some untangling to clarify what each does, why they matter, and the value that machine learning brings to the industrial automation and quality inspection landscapes.

Put simply, computer vision is the process of interpreting information from images or videos in an automated fashion, using computers instead of people. The algorithms and models used in computer vision applications tend to be rigid in that they are tailored to find and identify specific items in a scene: these could be defects, the absence or presence of something, or incorrect characters on a label. The inspection environment needs to remain static, as do the items being inspected. Changes in these variables almost always mean that the model must be updated, reconfigured, and redeployed by humans.

Machine learning on the other hand uses historical data to make decisions related to the interpretation of information from images or videos. Relying on patterns and inference, computers can identify defects or inaccuracies in a scene using algorithms or models that can adapt to changes or to variables in the environment. Initial models are often trained with supervised learning using people to characterize and label pass/fail or good/bad results based on an existing dataset and feeding that information into the model. From that baseline, the system can begin to perform inspection tasks without explicit instructions.
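The supervised baseline described above can be sketched in a few lines. The feature vectors and labels below are hypothetical stand-ins for measurements extracted from inspection images, and a minimal nearest-neighbor rule stands in for whatever model a production system would actually train:

```python
import math

# Hypothetical training set: feature vectors extracted from inspection
# images (e.g. brightness, edge density) with human-assigned labels.
labeled_samples = [
    ((0.91, 0.12), "pass"),
    ((0.88, 0.15), "pass"),
    ((0.40, 0.71), "fail"),
    ((0.35, 0.66), "fail"),
]

def classify(features):
    """Label a new sample with the label of its nearest labeled neighbor."""
    def distance(sample):
        vec, _ = sample
        return math.dist(vec, features)
    _, label = min(labeled_samples, key=distance)
    return label

print(classify((0.90, 0.10)))  # prints "pass"
```

From this labeled baseline, new samples are classified without explicit per-defect rules; in practice the labeled dataset would be far larger and the model far richer.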

While machine learning is a subset of AI, deep learning should be viewed as a subset of machine learning. Using artificial neural networks, it aims to mimic the learning process of the human brain, eliminating the need for human involvement to learn from its environment. Whereas machine learning algorithms often rely on human correction to help them to overcome mistakes, deep learning leverages high-performance computing to improve through repetition on its own. To accomplish this, deep learning relies on data sets that are significantly larger than those needed to train a machine learning algorithm along with substantially longer processing and training times to achieve reliable accuracy and sophistication.

When we aggregate the various technology subsets, computer vision, machine learning, and deep learning, the result is an artificial intelligence solution that can extend across a multitude of use cases for automation and quality. Machine learning has the potential to play a major role in the successful implementation of AI across the industry, combining traditional computer vision capability with added intelligence that enables increases in productivity, safety, and efficiency within applications and tasks that have traditionally been served by human operators. In addition to traditional applications for identifying defects in a product or missed steps in a process, opportunities to create value can be found where processes are subjective and rules are not easily defined or organized into logical steps or decisions. Likewise, scenarios where outcomes are known but the action to be taken is difficult to predict, or dependent on many conditions, are well-suited to machine learning technology.

The number of robots being sold and deployed continues to grow and is currently setting records annually. Although robots have been used in industrial automation for decades, early functionality consisted of fixed, repetitive routines in a setting that remained static. Today however, Autonomous Mobile Robots (AMRs) extend far beyond the scope of set parameters and due in large part to machine learning, can self-navigate and respond to changes in factory floor configurations, the presence of people or objects as well as other robots. Similarly, robotic arms used in packaging, warehousing and logistics can analyze materials to be loaded onto a pallet or into a carton and can make decisions that result in optimal placement and stacking.

Equipment failure in a production or manufacturing environment often represents one of the worst problems: a halted production line, which can have a cascading impact on other operations in addition to costing a manufacturer financially for every minute lost. Machine learning offers the ability to develop and deploy intelligent systems that monitor and evaluate production equipment continuously. Systems that can self-learn utilize data from production line sensors to identify patterns and anomalies and to alert on drops in performance. With these insights, maintenance can be scheduled proactively at cost-effective times rather than reactively when something breaks. The results are improved productivity and a reduction in costly repairs and downtime.
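At its simplest, the monitoring loop described above reduces to comparing each new sensor reading against a rolling baseline. The readings, window size, and threshold below are hypothetical; this is a sketch of the idea, not a production condition-monitoring system:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the recent rolling baseline
    by more than `threshold` standard deviations."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline, spread = mean(history), stdev(history)
            if spread > 0 and abs(value - baseline) > threshold * spread:
                alerts.append((i, value))
        history.append(value)
    return alerts

# Hypothetical vibration readings with a sudden spike at index 8.
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(detect_anomalies(readings))  # prints [(8, 5.0)]
```

A real system would learn the baseline per machine and per operating mode rather than using a fixed window, but the alert-on-deviation pattern is the same.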

Defect detection is perhaps the most popular use case for implementing machine vision in an automation or quality inspection application. Like early robotics, traditional machine vision applications are programmed with rigid instructions configured to look for specific defects in the same product or material in repetitive fashion. Machine learning brings advanced analysis and decision-making to quality inspection by enabling those systems to become more dynamic and tolerant of changes in inspection environments and other variables. For example, in label verification, conventional machine vision is designed to inspect specific fonts or typesets. Utilizing machine learning provides the ability to inspect varying font styles and to make correct pass/fail decisions despite changes in contrast levels and text placement. Advanced machine learning can also reduce false positives, resulting in less waste, lower manufacturing costs, and higher profitability.
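As a rough illustration of tolerant pass/fail matching, the sketch below scores recognized label text against the expected text with a similarity ratio instead of demanding an exact character match. The label strings and the 0.8 threshold are hypothetical, and Python's standard-library SequenceMatcher stands in for a trained model:

```python
from difflib import SequenceMatcher

def verify_label(expected, observed, min_ratio=0.8):
    """Pass/fail decision that tolerates minor OCR noise rather than
    requiring an exact character-for-character match."""
    ratio = SequenceMatcher(None, expected.upper(), observed.upper()).ratio()
    return ("pass" if ratio >= min_ratio else "fail", round(ratio, 2))

print(verify_label("LOT 2024-02", "lot 2024-02"))  # prints ('pass', 1.0)
print(verify_label("LOT 2024-02", "LOT 2O24-O2"))  # minor misread: still passes
print(verify_label("LOT 2024-02", "BATCH 77"))     # wrong label: fails
```

The threshold expresses the tolerance the narrative describes: small variations in font rendering or contrast change a few characters, while a wrong label scores far below any reasonable cutoff.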

The capabilities made possible with AI are largely the result of advances in software development. The potential to enable computers to simulate human interaction through data analysis and pattern identification has dramatically altered the complexity, ability, and accuracy of our automation and quality systems. Though less talked about, hardware progression is what makes large-scale deployments possible; until recently, many AI implementations were limited in terms of where and how they could be used, relying on large computer infrastructure to execute their programming.

Recent advances in peripheral components, including CPUs, GPUs, and TPUs, embedded processing, and data transfer technologies, have created the conditions for a dramatic reduction in the physical size, power consumption, and cost of deploying machine learning solutions. These advancements, coupled with sophisticated software, are accelerating deployment in the form of edge and mobile computing and have made it possible to implement AI in applications where just a few short years ago it was thought impossible. If history is any indication, we are primed for rapid expansion of the market segments and industries served by machine learning, along with continued growth in its use across existing and new applications for the foreseeable future, fueled by our drive for greater productivity, efficiency, quality, and safety in the work we do and the products we make.

Visit link:
Machine Learning for Automation & Quality Inspection - Quality Magazine


(Deep) Learning from the Bench: A Conversation on Algorithmic Fairness – Berkman Klein Center

As algorithmic decision-making becomes increasingly pervasive, it raises challenging issues pertaining to equality and equity. Join us for a timely discussion on fairness and technology grounded in Professor Minow's forthcoming article about Justice Abella's equality jurisprudence. This conversation will delve into Justice Abella's pivotal contributions to defining equality in Canada, and how they might guide our approach to algorithmic fairness. As machine learning and other algorithmic predictive tools rely on biased data and produce disparate outcomes, they highlight unresolved tensions and limitations in legal frameworks in the U.S., Canada, and the EU pertaining to equality and non-discrimination. Exploring the tension between formal and substantive approaches, the speakers will unpack the renewed challenges we face today as discrimination manifests through algorithmic systems, and suggest paths forward to better confront algorithmic harms on the ground.

3:00 pm - 3:55 pm: Conversation with Professor Minow and Justice Abella

5-minute Q&A period

4:00 pm - 5:00 pm: Reception & meet and greet with speakers and other attendees

Justice Abella was appointed to the Supreme Court of Canada in 2004. She is the first Jewish woman appointed to the Court.

She was elected to the Royal Society of Canada in 1997, to the American Academy of Arts and Sciences in 2007, and to the American Philosophical Society in 2018. In 2020, she was awarded the Knight Commander's Cross of the Order of Merit by the President of Germany.

She attended the University of Toronto, where she earned a B.A. in 1967 and an LL.B. in 1970. In 1964 she graduated from the Royal Conservatory of Music in classical piano. She was called to the Ontario Bar in 1972 and practiced civil and criminal litigation until 1976, when she was appointed to the Ontario Family Court at the age of 29, the first pregnant person appointed to the judiciary in Canada. She was appointed to the Ontario Court of Appeal in 1992. More here, or watch a 2023 documentary about Justice Abella.

Martha Minow is 300th Anniversary University Professor of Law at Harvard Law School. She has taught at Harvard Law School since 1981, where her courses include civil procedure, constitutional law, family law, fairness and privacy, international criminal justice, jurisprudence, law and education, nonprofit organizations, and the public law workshop. An expert in human rights and advocacy for members of racial and religious minorities and for women, children, and persons with disabilities, she also writes and teaches about AI and legal issues, and about how societies transition from war and atrocities to regimes committed to democracy and justice. Minow served as Dean of Harvard Law School between 2009 and 2017, as the inaugural Morgan and Helen Chu Dean and Professor. She currently is the chair of the MacArthur Foundation, and a member of the governing boards of the Campaign Legal Center (nonpartisan voting rights group), the Carnegie Corporation (philanthropy), and GBH (public media). She also co-chairs the advisory group for MIT's new Schwarzman College of Computing. Minow completed her undergraduate studies at the University of Michigan, then earned an M.Ed. from Harvard and a J.D. from Yale. More here.

Link:
(Deep) Learning from the Bench: A Conversation on Algorithmic Fairness - Berkman Klein Center


Machine learning algorithm sets Cardano (ADA) price on February 29, 2024 – Finbold – Finance in Bold

Still considered an Ethereum (ETH) killer, the native token of the Cardano blockchain, ADA, is no stranger to sudden price rises and falls. Since its launch in 2017, it has seen major surges and nearly as large declines, skyrocketing in 2021 to $3 before stabilizing below $0.5 by mid-2022.

Last year, however, saw rampant speculation give way as the security, scalability, and sustainability-focused ecosystem finally took off, and after following an overall downward trajectory for more than a year, ADA surged in the final quarter.

By the start of 2024, however, the cryptocurrency had entered a correction, and, with it declining nearly 20% since January 1, Finbold decided to consult the AI-driven predictive algorithms of CoinCodex on how ADA might fare by the end of February.

The machine learning algorithms of CoinCodex assess that ADA is likely to trade with significant volatility throughout February, albeit on an overall upward trajectory. They estimate the token will rise 8.53% from the press-time price of approximately $0.5 to reach $0.543316 on February 29.

The platform also estimates that ADA will fall slightly in early March and find itself closer to $0.519478 in one month's time.

Given that the token had 14 green days out of the last 30, it perhaps isn't surprising that the crypto market sentiment on it is gauged as neutral. The Fear & Greed index, which tracks investor attitude toward an asset, however, records greed when it comes to ADA.

While the future fortunes of ADA are hard to predict, it has recently offered a mostly strong performance. In total, it is up 27.88% in the last 52 weeks, but given the decisive break from its long downtrend in October, it is as much as 70.36% up in the last 6 months.

Zooming into 2024, the token has been in a volatile period of correction following its surge that lasted from mid-October to mid-December and is down 19.62% year-to-date (YTD). More recently, it has again started rising albeit slowly and is 0.63% in the green in the last week but 0.53% in the red in the last 24 hours, having dropped to $0.50.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

See more here:
Machine learning algorithm sets Cardano (ADA) price on February 29, 2024 - Finbold - Finance in Bold


Using Machine Learning to Reconstruct Cloud-Obscured Dust Plumes – Eos

Editors Highlights are summaries of recent papers by AGUs journal editors. Source: AGU Advances

Most dust and sand particles in the atmosphere originate from North Africa. Since ground-based observations of dust plumes in North Africa are sparse, investigations often rely on satellite observations. However, dust plumes are frequently obscured by clouds, making it difficult to study their full extent.

Kanngießer and Fiedler [2024] use machine learning methods to restore information about the extent of dust plumes beneath clouds in 2021 and 2022 at 9, 12, and 15 UTC. The reconstructed dust patterns demonstrate a new way to validate the dust forecast ensemble provided by the WMO Dust Regional Center in Barcelona, Spain. This proposed method is computationally inexpensive and provides new opportunities for assessing the quality of dust transport simulations. The method can also be transferred to reconstruct other aerosol and trace gas plumes.

Citation: Kanngießer, F., & Fiedler, S. (2024). Seeing beneath the clouds: Machine-learning-based reconstruction of North African dust plumes. AGU Advances, 5, e2023AV001042. https://doi.org/10.1029/2023AV001042

Don Wuebbles, Editor, AGU Advances

Originally posted here:
Using Machine Learning to Reconstruct Cloud-Obscured Dust Plumes - Eos


Predicting soil cone index and assessing suitability for wind and solar farm development in using machine learning … – Nature.com


Read the original here:
Predicting soil cone index and assessing suitability for wind and solar farm development in using machine learning ... - Nature.com


Enhancing Ore Analysis with AI in Mining Operations – AZoMining

Every day, mining operations confront challenges that reduce productivity, safety, and sustainability. These challenges include geological concerns, unsafe work environments, and substantial operational costs. As the demand for minerals and metals continues to rise, these restrictions increase the pressure for innovative solutions.


Enter artificial intelligence (AI). As witnessed in numerous other industries, AI and machine learning offer growing potential for optimizing mining operations in various ways.

For example, mining companies can use machine learning for enhanced ore analysis and thus improved extraction operations. The technology can also be employed to reap valuable insights from satellite and drone images, which can then be used to improve operational robotics.

Ore grade estimation involves projecting the quantity of minerals that could be extracted from a geologic deposit.

Traditional approaches to this process involved taking manual samples and performing assays, which can be labor-intensive and have a wide margin of error. A more recent approach involves the creation of complex geostatistical models that provide greater precision.

These models demand specialized expertise to process large amounts of complex data, and there is also significant variability and uncertainty associated with geostatistical models.

Machine learning offers a superior approach to ore grade estimation through feature extraction. This involves combining various datasets to provide highly accurate predictions.

By focusing on certain factors, such as geological features and the presence of particular minerals, machine learning algorithms continuously improve the predictions they generate. The accuracy of these predictions improves with the increasing amount of data processed by the machine learning system. These predictions can then be used to inform resource extraction operations.
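As a toy illustration of feature-based grade estimation, the sketch below averages the grades of the most similar drill-core samples. The sample values are hypothetical, and the simple nearest-neighbor average stands in for the richer models (e.g. gradient boosting on many geological features) a real system would use:

```python
import math

# Hypothetical drill-core samples: (depth_m, iron_oxide_pct) -> ore grade.
samples = [
    ((10.0, 55.0), 0.62),
    ((12.0, 60.0), 0.68),
    ((40.0, 20.0), 0.15),
    ((45.0, 18.0), 0.12),
]

def estimate_grade(features, k=2):
    """Estimate ore grade as the mean grade of the k most similar samples."""
    ranked = sorted(samples, key=lambda s: math.dist(s[0], features))
    return sum(grade for _, grade in ranked[:k]) / k

print(round(estimate_grade((11.0, 57.0)), 2))  # prints 0.65
```

As more assayed samples are added to the dataset, estimates for nearby, unsampled locations improve, which is the continuous-improvement loop the text describes.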

Machine learning also improves resource extraction in mining by analyzing trace elements obtained through laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS).

The analysis of these trace elements can reveal the origins of ore deposits, and this information can then be used to improve resource extraction. In one study, researchers used machine learning and LA-ICP-MS to analyze pyrite trace element data. The results demonstrated that this approach was a valid predictor of ore deposit type.

Another approach to improving resource extraction through machine learning involves identifying mineral ore based on hyperspectral images of mine excavation faces. In one study, researchers used machine learning to analyze two types of hyperspectral images: one set acquired in a laboratory setup under optimal conditions, and one set simulating a mine face in a field setup.

These machine-learning models were able to classify laboratory spectra with an overall accuracy of 98%. The accuracy in the simulated field setup was lower; however, the researchers attributed this to the likely impact of its lower spatial resolution.

The study confirmed the validity of employing this approach to document the spatial distribution of ore minerals in an excavated mine face.
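The study's classification models are not specified here, but a common baseline for matching hyperspectral pixels against reference mineral spectra is the spectral angle mapper (SAM), sketched below with made-up five-band reflectance spectra.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between two spectra; smaller means more similar."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Invented reference spectra (reflectance over 5 bands) for two minerals;
# real spectral libraries cover hundreds of bands and many minerals.
references = {
    "chalcopyrite": np.array([0.30, 0.35, 0.50, 0.45, 0.40]),
    "quartz":       np.array([0.80, 0.82, 0.85, 0.83, 0.81]),
}

def classify_pixel(pixel: np.ndarray) -> str:
    """Label a pixel with the reference mineral at the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

pixel = np.array([0.28, 0.36, 0.52, 0.44, 0.41])  # shaped like chalcopyrite
print(classify_pixel(pixel))  # → chalcopyrite
```

Because SAM compares spectral shape rather than absolute brightness, it is relatively robust to illumination differences — one reason it is a popular first step in mapping minerals across an excavation face.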

Machine learning can also improve resource extraction operations through geospatial analysis. By evaluating images from satellites and drones, machine-learning systems can identify mineral-rich zones. In so doing, these geospatial systems can accelerate the exploration process and reduce the risk of drilling in areas with unknown geological qualities.

In a recent study published in Ore Geology Reviews, researchers used Geographic Information System (GIS)-based machine learning to create accurate maps of prospective mineral deposits. The team achieved these results by modeling known copper deposits together with spatial proxies related to ore-forming processes.

These results demonstrate that the team successfully created models with high predictive precision. The study team also produced numerous targets for exploration in different provinces in China. The researchers concluded that this approach could be applied to other mineral systems and regions.

Robotics in the mining industry is experiencing a transformative period thanks to AI. From drilling and excavation to transportation, AI-powered systems leverage geospatial data to navigate intricate mining terrains. Robotics company OffWorld recently announced its plans to commence orders for its AI-powered, swarming robotic mining systems in 2024.

This line of autonomous mining robots is designed for surveying surface and underground environments, excavation, material collection, hauling, and material processing. The company says its swarming approach to robotic mining improves productivity and efficiency, claiming that its robots can excavate more than one million tons of ore per year in a single mining operation.

While machine learning in mining offers many benefits, there are two big industry concerns: public perception and an equitable path forward.

Experts agree that the mining industry is unprepared for a future where machines replace the human workforce. According to the experts, society must recognize that mining is the foundation of sustainable development. The industry must also work to ensure that it produces benefits for the entire society.

These challenges present opportunities for innovative businesses capable of reimagining how mining can holistically evolve through the successful integration of machine learning. Efforts are also being made to ensure that mining better facilitates a more sustainable future by focusing on the extraction of minerals for electric vehicle batteries.

Chen, H., et al. (2023). GIS-Based Mineral Prospectivity Mapping Using Machine Learning Methods: A Case Study from Zhuonuo Ore District, Tibet. Ore Geology Reviews. doi.org/10.1016/j.oregeorev.2023.105627.

K-Mine. (2023). Digging Deeper with Data: Machine Learning in Ore Grade Estimation. [Online] K-Mine. Available at: https://k-mine.com/articles/digging-deeper-with-data-machine-learning-in-ore-grade-estimation/.

Picterra. (2023). How to Mitigate Risks in Mining Operations with Geospatial Intelligence. [Online] Picterra. Available at: https://picterra.ch/blog/how-to-mitigate-risks-in-mining-operations-with-geospatial-intelligence/.

Pell, R. (2023). AI-Powered Swarm Robots Aim to Disrupt Mining Industry. [Online] EE News. Available at: https://www.eenewseurope.com/en/ai-powered-swarm-robots-aim-to-disrupt-mining-industry/.

Sun, G., et al. (2022). Machine Learning Coupled with Mineral Geochemistry Reveals the Origin of Ore Deposits. Ore Geology Reviews. doi.org/10.1016/j.oregeorev.2022.104753.


The rest is here:
Enhancing Ore Analysis with AI in Mining Operations - AZoMining


What is Machine Learning and How Does It Work? – Blockchain Council

Summary

Machine learning is a transformative branch of artificial intelligence that empowers systems to learn and improve from experience without being explicitly programmed. It involves algorithms that analyze data, learn from it, and make decisions or predictions. Machine learning's capability to adapt to new data independently makes it a cornerstone of modern AI applications, enhancing everything from healthcare diagnostics to personalized consumer experiences.

In this article, we will delve deep into the world of machine learning, providing you with a comprehensive understanding of what it is and how it works. By the end of this article, you will have a solid grasp of the core concepts, algorithms, and real-world applications of machine learning.

Machine Learning (ML) represents a transformative branch of artificial intelligence (AI) that empowers software applications to become more accurate in predicting outcomes without being explicitly programmed to do so. It revolves around the development of algorithms that can process input data and use statistical analysis to predict an output while updating outputs as new data becomes available. The essence of machine learning lies in its ability to learn from data, identify patterns, and make decisions with minimal human intervention.

Machine learning is crucial for its ability to process and analyze vast amounts of data with increasing accuracy and efficiency. Its importance stems from the fact that it enables the automation of decision-making processes and can be applied to a wide range of industries, including healthcare, finance, education, and more. Machine learning algorithms can uncover hidden insights through data without being explicitly programmed where to look, leading to innovations in AI that seem to border on sci-fi.

Moreover, machine learning is fundamental in developing complex models that power modern AI applications, such as natural language processing, self-driving cars, and recommendation systems. By harnessing the power of machine learning, businesses and organizations can improve operational efficiencies, enhance customer experiences, and innovate continuously in an ever-evolving digital landscape.

This technology's significance also lies in its flexibility and scalability, making it a critical tool for tackling complex problems by learning from data patterns and improving over time. As data volumes grow exponentially, machine learning's role becomes increasingly vital in making sense of this information, making predictive analyses more accurate and reliable, and driving forward the capabilities of AI to unlock new possibilities and solutions to complex challenges.

Machine Learning works by using algorithms to analyze data, learn from that data, and act on what has been learned. The process involves feeding the algorithm training data, which can be labeled (known) or unlabeled (unknown), to develop a model that can make predictions or decisions based on new data. The algorithm's performance improves over time through a process of trial and error, adjusting its approach as it learns from the outcomes of its predictions compared to actual results. This iterative learning process enables ML models to increase in accuracy and efficiency, adapting to new data with minimal human intervention.
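That trial-and-error loop can be made concrete with a toy example: a linear model fit by gradient descent, where each iteration compares predictions against actual results and nudges the parameters to shrink the error. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Toy labeled training data: inputs x with targets following y = 2x + 1.
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0

# Start from an uninformed model and improve it iteratively:
# predict, measure the error, and adjust the parameters to reduce it.
w, b = 0.0, 0.0
learning_rate = 0.1
errors = []
for _ in range(200):
    pred = w * x + b
    err = pred - y
    errors.append(float(np.mean(err ** 2)))  # mean squared error this round
    w -= learning_rate * np.mean(err * x)    # step for the slope
    b -= learning_rate * np.mean(err)        # step for the intercept

print(errors[0] > errors[-1])  # → True: error shrinks with each pass
print(round(float(w), 2), round(float(b), 2))  # near the true 2.0 and 1.0
```

The same pattern — predict, compare against ground truth, adjust — underlies the training of far larger models; only the model family and the optimizer get more sophisticated.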

Machine Learning (ML) is not just a futuristic concept but a present reality, deeply embedded in various aspects of our daily lives and the backbone of numerous future innovations. From enhancing social media experiences to revolutionizing healthcare, ML's applications are vast and varied, illustrating the breadth of its impact on our world. That breadth also brings challenges.

Machine learning processes vast amounts of data, raising significant data privacy and security concerns. Ensuring the confidentiality and integrity of data while leveraging it for ML applications is paramount. This involves complying with data protection regulations, securing data storage and transfer, and implementing robust access controls.

The deployment of ML systems comes with ethical implications, including bias, fairness, and transparency. It's crucial to develop and train ML models responsibly, ensuring they do not perpetuate or amplify biases present in the training data, and that decisions made by algorithms are explainable and fair.
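One small, concrete piece of such a fairness audit is checking for demographic parity — comparing a model's positive-decision rates across groups. The group names and decisions below are invented for illustration.

```python
# Hypothetical audit log of a trained model's decisions:
# (group, model_approved) pairs; all values are made up.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Fraction of members of `group` the model approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: difference in approval rates between groups.
parity_gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(round(parity_gap, 2))  # → 0.5: a large gap that would warrant review
```

Parity of outcomes is only one of several fairness criteria (equalized odds and calibration are others), and which one applies depends on the use case; the point is that fairness can be measured, not just asserted.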

ML models are only as good as the data they are trained on and the assumptions they are built upon. They may not handle novel situations well if those scenarios were not represented in the training data. Furthermore, over-reliance on ML can lead to overlooking simpler, more efficient solutions.

Machine learning stands as a pillar of technological advancement, driving innovation across numerous fields. Its ability to process and learn from vast amounts of data autonomously has opened new avenues for problem-solving and efficiency. As machine learning continues to evolve, its impact on our daily lives and future possibilities expands, marking an era of unprecedented growth in intelligent systems.


See more here:
What is Machine Learning and How Does It Work? - Blockchain Council


Recommendation Systems: Enhancing User Experience and Driving Sales – Medium

Recommendation systems, also known as recommender systems, have become integral components of various online platforms, providing users with personalized content suggestions based on their preferences and interactions. These systems leverage advanced algorithms and deep learning concepts to enhance user experience by offering tailored recommendations for movies, TV shows, digital products, books, articles, services, and more. This article explores the workings of recommendation systems, their life cycle, types, algorithms, and real-life examples, emphasizing their significant impact on increasing sales and consumer satisfaction.

### How Do Recommender Systems Work?

A recommendation system is essentially a data filtering engine that employs deep learning algorithms to suggest potential products based on user preferences and secondary filtering. These algorithms analyze patterns in user behavior towards a particular service or product. The data collection methods vary, with e-commerce websites using review ratings and platforms like YouTube saving liked and disliked videos.

### Recommendation System Life Cycle

1. **Collect the Data:** Relevant data, such as product reviews or user ratings, is collected.
2. **Store the Collected Data:** Data is stored in proprietary data warehouses or third-party cloud services for efficient retrieval.
3. **Filter the Data:** Problematic values are filtered to enhance model accuracy.
4. **Analyze the Data:** Machine-learning or deep learning algorithms are used to detect hidden patterns.
5. **Evaluate and Test Our Model:** The performance of the recommendation system model is checked and tuned if necessary.
6. **Deploy Our Model:** The model is deployed into practice, and continuous monitoring and tuning occur.
7. **Online Machine Learning:** The model continuously improves and adjusts based on newly acquired data, ensuring longevity.

### Recommendation System Algorithms

Two prominent approaches are clustering and deep learning:

1. **Clustering:** An unsupervised machine learning algorithm that returns good prediction results.
2. **Deep Learning:** A more complex analytical approach that filters down the most relevant suggestions based on consumers' behavioral patterns.

### Benefits of Using Recommendation Systems

1. **Increased Sales:** Generating revenue is a primary goal, and recommendation systems boost sales and consumer engagement.
2. **Lower System Load:** These systems improve sales while maintaining lower system loads, decreasing long-term costs.
3. **Increasing Engagement and Satisfaction:** Continuous personalized recommendations optimize the user experience, boosting satisfaction.

### Types of Recommendation Systems

1. **Collaborative Filtering:** Focuses on the similarity between users and items, improving recommendations based on shared interests.
2. **Content-Based Filtering:** Evaluates the similarity of products and suggests items with similar classifications.
3. **Hybrid Filtering:** Utilizes both collaborative and content-based filtering for enhanced accuracy.
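A minimal sketch of the collaborative-filtering idea — recommend what the most similar user rated highly — using a toy user-item rating matrix. All ratings are invented; production systems use matrix factorization or learned embeddings over millions of users.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 3],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two users' rating vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(user: int) -> int:
    """Suggest the unrated item that the most similar other user rated highest."""
    sims = [cosine(ratings[user], ratings[other]) if other != user else -1.0
            for other in range(len(ratings))]
    neighbour = int(np.argmax(sims))           # most similar user
    unrated = np.where(ratings[user] == 0)[0]  # candidate items
    return int(unrated[np.argmax(ratings[neighbour][unrated])])

print(recommend(0))  # → 3: user 1 is most similar and rated item 3 highest
```

Content-based filtering swaps the user-user similarity for item-item similarity over product attributes; hybrid systems blend the two scores.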

### Real-Life Recommender System Examples

1. **Amazon:** Filters likely items to help users find satisfactory products.
2. **Spotify:** Utilizes a hybrid filtering algorithm to recommend new music based on user preferences.
3. **Facebook / Meta:** Recommends posts, friend suggestions, and ads based on user interactions.
4. **Netflix:** Generates over 80% of content views from algorithmic suggestions, resulting in significant revenue.
5. **Google and YouTube:** Utilize recommendation systems to improve user satisfaction in search results and personalized content suggestions.

### Successful Companies That Use Recommender Systems

Various successful companies, including Amazon, Spotify, Facebook, Netflix, and Google, have integrated recommendation systems into their platforms to enhance user experience and drive sales.

### Final Thoughts on Recommendation Systems

Collaborative Filtering, Content-Based Filtering, and Hybrid Models are foundational methods for building recommendation systems. Important considerations include tracking recommendation effectiveness, determining when to stop recommending a product, weighing product reviews or view counts, and avoiding pigeonholing consumers.


Go here to see the original:
Recommendation Systems: Enhancing User Experience and Driving Sales - Medium


TealBook: Using AI and Machine Learning to Leverage Data – FinTech Magazine

Stephany Lapierre founded TealBook with a strong conviction that poor-quality data hinders procurement teams' ability to drive efficiency and deliver actionable insights. Now, as TealBook's CEO, Lapierre and her team leverage technologies such as machine learning (ML) and artificial intelligence (AI) to help enterprise organisations access a single, trusted source of supplier data that seamlessly integrates with any data lake or enterprise system.

What's more, TealBook enables organisations to efficiently centralise their supplier data, consolidating fragmented records from different sources into a single, comprehensive supplier record. TealBook utilises AI to autonomously collect, verify and enrich supplier data, promoting transparency and delivering actionable insights to procurement teams and their wider businesses.

TealBook was proud to pioneer the utilisation of AI technology to gather and enhance supplier data, transforming the way procurement teams collect and manage vital information.

"Procurement teams needed accurate data in a consistent and easily accessible way," Lapierre says. "If businesses have better data, their systems will produce better outcomes and, ultimately, procurement will have more value within an organisation."

"It wasn't until 2017, when I met TealBook's first CTO, who was fascinated with the evolution of technology, that I realised we could achieve this without suppliers having to come to a portal. We could, in fact, build on Google to leverage some of the models, to then find information on businesses and make sense of it in a profile that procurement can consume."

Describing this as the "magic moment", Lapierre shares that the AI-enabled TealBook not only allows procurement teams to automate their existing software solutions, but also generates information that is more valuable to customers.

Finding information and matching it to the right company is, therefore, the baseline of how TealBook is utilising AI.

"Examples of this include how we've introduced natural language tags to allow our customers to search for normal words rather than codes, and how we've reduced the number of checks that humans have to make, so our customers have full confidence in the data provided," adds Lapierre.

Continued here:
TealBook: Using AI and Machine Learning to Leverage Data - FinTech Magazine
