
The Race for AGI: Approaches of Big Tech Giants – Fagen wasanni

Big tech companies like OpenAI, Google DeepMind, Meta (formerly Facebook), and Tesla are all on a quest to achieve Artificial General Intelligence (AGI). While their visions for AGI differ in some aspects, they are all determined to build a safer, more beneficial form of AI.

OpenAI's mission statement encapsulates its goal of ensuring that AGI benefits all of humanity. Sam Altman, CEO of OpenAI, believes that AGI may not have a physical body and that it should contribute to the advancement of scientific knowledge. He sees AI as a tool that amplifies human capabilities and participates in a human feedback loop.

OpenAI's key focus has been on transformer models, such as the GPT series. These models, trained on large datasets, have been instrumental in OpenAI's pursuit of AGI. Its transformer models extend beyond text generation and include text-to-image and voice-to-text models. OpenAI is continually expanding the capabilities of the GPT paradigm, although the exact path to AGI remains uncertain.

Google DeepMind, on the other hand, places its bets on reinforcement learning. Demis Hassabis, CEO of DeepMind, believes that AGI is just a few years away and that maximizing total reward through reinforcement learning can lead to true intelligence. DeepMind has developed models like AlphaFold and AlphaZero, which have showcased the potential of this approach.

Meta's Yann LeCun disputes the effectiveness of supervised and reinforcement learning for achieving AGI, citing their limitations in reasoning with commonsense knowledge. He champions self-supervised learning, which does not rely on labeled data for training. Meta has dedicated significant research efforts to self-supervised learning and has seen promising results in language understanding models.

Elon Musk's Tesla aims to build AGI that can comprehend the universe. Musk believes that a physical form may be essential for AGI, as seen through his investments in robotics. Tesla's Optimus robot, powered by a self-driving computer, is a step towards that vision.

Both Google and OpenAI have incorporated multimodality functions into their models, allowing for the processing of textual descriptions associated with images. These companies are also exploring research avenues like causality, which could have a significant impact on achieving AGI.

While the leaders in big tech have different interpretations of AGI and superintelligence, their approaches reflect a shared ambition to develop AGI that benefits humanity. The race for AGI is still ongoing, and the path to its realization remains a combination of innovation, research, and exploration.

Continued here:
The Race for AGI: Approaches of Big Tech Giants - Fagen wasanni

Read More..

AI and Machine Learning Hold Potential in Fighting Infectious Disease – HealthITAnalytics.com

July 26, 2023 - A new study describes how, despite the continued threat infectious diseases pose to public health, the capabilities of artificial intelligence (AI) and machine learning (ML) can help address the issue and provide a framework for future pandemics.

Despite research and biological advancements, infectious diseases remain an issue, commonly countered with therapies and diagnostics. Often, synthetic biology approaches provide a platform for innovation. Research indicates that synthetic biology development often falls into two categories: generating quantitative biological hypotheses and experimental data, and understanding factors such as nucleic acids and peptides that allow biology to be controlled.

According to research, advancements in AI have taken these factors into account. Given the complexities of biology and infectious disease, the potential is considerable. Researchers therefore reviewed how the relationship between AI and synthetic biology can be used to battle infectious diseases.

The review described three uses of AI in infectious diseases: anti-infective drug discovery, infection biology, and diagnostics.

Despite the existence of various anti-infective drugs, drug resistance often outpaces their effectiveness. AI and ML can play a large role in developing new drugs by searching small-molecule databases and using trained models to identify new drug candidates or repurpose existing drugs.
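
The review itself contains no code, but a minimal sketch of this kind of model-guided screening might look like the following. Everything here is illustrative: "molecules.csv" (precomputed molecular fingerprints with an assay label) and "library.csv" (unscreened candidates) are hypothetical files, not datasets from the study.

```python
# Illustrative sketch of ML-guided anti-infective screening (not from the
# review): train on assayed molecules, then rank an unscreened library.
# "molecules.csv" and "library.csv" are hypothetical files whose columns are
# bit-fingerprint features, plus an "active" assay label for training.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("molecules.csv")
X, y = df.drop(columns=["active"]), df["active"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank the unscreened library so the most promising candidates are assayed first.
library = pd.read_csv("library.csv")
library["score"] = model.predict_proba(library)[:, 1]
print(library.nlargest(10, "score"))
```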

The complications of infection biology are extensive, largely due to the activity of bacterial, eukaryotic, and viral pathogens. These factors can affect host responses, and, therefore, the course of infection.

ML models, however, can analyze nucleic acid, protein, and other variables to determine the aspects of host-pathogen interactions and immune responses. Research also indicates they can define genes and interactions between proteins that link to host cell changes, immunogenicity prediction, and other activities.

Also, gene expression optimization and antigen prediction have assisted the development of vaccines and drugs through supervised models.

AI and ML also have applications in diagnostics. As prior outbreaks have shown, the speed of infectious disease detection plays a large role in how far a disease spreads. Through AI and ML, however, researchers can identify infections and foresee drug resistance, primarily because these methods can be programmed efficiently and can highlight essential information from biomolecular networks.

Whatever opportunities and challenges these methods may pose, they are essential to the future of infectious disease treatment. As the development of AI continues, it is critical to consider a wide range of datasets to avoid bias.

Various research efforts have also showcased the capabilities of AI and how it may advance healthcare.

Research from April 2022, for example, involved the creation of an AI model that uses non-contrast abdominal CT images to analyze factors related to pancreatic health, determining type 2 diabetes risk.

Using hundreds of images and various measurements, researchers defined the factors that correlated with diabetes. Consistent and accurate results allowed researchers to determine this analysis was an effective approach to detecting diabetes.

"This study is a step towards the wider use of automated methods to address clinical challenges," said study authors Ronald M. Summers, MD, PhD, and Hima Tallam, an MD and PhD student, in a press release. "It may also inform future work investigating the reason for pancreatic changes that occur in patients with diabetes."

Research efforts such as these are integral examples of how AI continues to play a role in healthcare.

See the rest here:
AI and Machine Learning Hold Potential in Fighting Infectious Disease - HealthITAnalytics.com

Read More..

Machine learning, incentives and telematics: New tools emerge to … – Utility Dive

The transition to electric vehicles will require significant new amounts of power generation for charging, but utilities say those resources can be developed in time. A more pressing challenge may be managing new charging loads, ensuring millions of vehicles do not put undue stress on the grid.

There will be 30 million to 42 million electric vehicles on U.S. roads in 2030, and they will require about 28 million charging ports, according to the National Renewable Energy Laboratory. Utilities, distributed energy resource aggregators and research institutions are all stepping up to address the issue.

"Power generation is only a part of this conversation. Just as important is improving our ability to manage demand in real time," Albert Gore, executive director of the Zero Emission Transportation Association, said Monday in a discussion of how the utility sector must approach EVs.

The industry needs to further its ability to precisely manage demand in real time, including by accurately predicting when and where increases in demand will occur, according to a new ZETA policy brief.

Utilities, particularly larger electricity providers in urban areas, have been working for years to nudge EV charging to off-peak hours through time-of-use rates or EV-specific rates.

Consolidated Edison, which serves New York City, expects more than a quarter million EVs in its territory by 2025 and has been working since 2017 to encourage grid-beneficial charging through its SmartCharge program, which offers incentives for drivers to avoid charging during peak times.

"It's one of, if not the most, successful managed charging programs in the country," Cliff Baratta, Con Edison's electric vehicle strategy and markets section manager, said during ZETA's discussion. At the end of 2022, the utility had 20% of all light-duty EVs registered in its territory enrolled in the program.

"In a lot of other places, we see that 5-6% is considered good," Baratta said. "We have been able to get really strong engagement with that program, to try and entrench this grid-beneficial charging behavior."

Research institutions are working to develop solutions. Argonne National Laboratory and the University of Chicago have partnered on the development of a new algorithm to manage EV charging that utilizes machine learning to efficiently schedule loads.
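
The article does not detail the Argonne/University of Chicago algorithm, so the following is only a toy sketch of the underlying idea of managed charging: assign each vehicle's flexible charging hours to the lowest-load hours in its plug-in window, updating the forecast as vehicles are scheduled so the fleet spreads out. The load figures and the 7 kW per-port assumption are invented for illustration.

```python
# Toy managed-charging scheduler (NOT the Argonne/UChicago algorithm, whose
# details the article does not give): place each vehicle's required charging
# hours into the lowest-load hours of its window, then bump the forecast so
# later vehicles avoid piling onto the same hours.

def schedule_fleet(forecast_mw, vehicles, port_mw=0.007):
    load = list(forecast_mw)
    schedules = []
    for hours_needed, plug_in, depart in vehicles:
        hours = sorted(range(plug_in, depart), key=lambda h: load[h])[:hours_needed]
        for h in hours:
            load[h] += port_mw  # ~7 kW per charging port, an assumed figure
        schedules.append(sorted(hours))
    return schedules

day = [30, 28, 27, 27, 29, 33, 40, 48, 52, 50, 47, 45,
       44, 45, 47, 52, 60, 70, 72, 68, 58, 48, 40, 34]  # MW, evening peak
forecast = day + day  # hours 24-47 represent the next morning

# (hours needed, plug-in hour, departure hour); e.g. home at 18:00, gone at 07:00.
fleet = [(4, 18, 31), (6, 19, 32), (3, 20, 30)]
print(schedule_fleet(forecast, fleet))  # charging lands in the overnight trough
```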

Distributed energy resource managers are rolling out approaches to managing the anticipated demand.

FlexCharging, which has provided managed charging programs and pilots since 2019, is rolling out a product called EVision for smaller utilities that may have fewer resources to devote to demand management initiatives.

Cloud-based software company Virtual Peaker on Tuesday launched a managed charging solution that allows utilities to utilize either vehicle telematics data or internet-connected EV chargers to manage vehicles in charging programs.

The company is focusing on creating a single, scalable solution to increase adoption of distributed energy resources programs and help utilities reach their goals more quickly and efficiently, Virtual Peaker founder and CEO William Burke said in a statement.

The company's DER platform is already being used by Efficiency Maine, the state's administrator for energy efficiency and demand management programs, to manage battery systems and EV chargers during peak demand periods.

Read the original:
Machine learning, incentives and telematics: New tools emerge to ... - Utility Dive

Read More..

Machine Learning Engineer Career Path: What You Need to Know – Dice Insights

When it comes to tech skills, machine learning is only getting hotter. Companies of all sizes want tech professionals who can build and manage self-learning A.I. models, and then integrate those models into all kinds of next-generation apps and services.

According to levels.fyi, which crowdsources compensation information for various tech roles, compensation for those specializing in machine learning and A.I. has increased 2.7 percent over the past six months, from an average of $246,000 per year to $252,535. Those with lots of experience and skill in machine learning can command far higher salaries, of course, especially at big companies known for extremely high pay.

But what does it take to launch yourself onto a machine learning engineer career path, and once you're there, what sort of options are available to you? Let's dive in.

Before embarking on this career path, it's important to have a solid foundation in computer science and math, including an understanding of how computers and algorithms work.

Programming is an essential skill, and multiple coding languages may be required, depending on the role and company. Python and JavaScript are often the most popular programming languages that aspiring machine learning engineers focus on first, followed by supporting frameworks like TensorFlow and PyTorch.

"Once you've built that foundation and developed your core skill sets, the next step is to start applying what you've learned," explains Neil Ouellette, senior machine learning engineer at Skillsoft. It's important to gain hands-on practice and experience by experimenting with different algorithms and creating small projects on GitHub.

"This is a good way to not only sharpen your skills, but to build a portfolio of work that you can eventually share with prospective employers," he explains.
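
As a concrete example of the kind of small, self-contained project that can seed such a portfolio (our illustration, not one of Ouellette's), the following trains and evaluates a classifier on scikit-learn's bundled handwritten-digits dataset, so it runs without any external data:

```python
# A small, self-contained starter project: train a classifier on
# scikit-learn's bundled digits dataset and report held-out metrics.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```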

Do you need formal education to become a machine learning engineer? That's a great question. Given the demand for ML and AI engineers, many companies are willing to hire tech professionals who don't have a formal two- or four-year degree, provided they can prove during the interview process that they have the skills necessary to succeed in the role. Before you begin applying for jobs, make sure you have a solid grasp on the following, which pop up frequently as requirements for machine learning engineer roles:

In order to carry out these tasks, you'll need to have mastered the following:

Mehreen Tahir, software engineer at New Relic, says that entry-level machine learning engineers are often responsible for preprocessing and cleaning data, implementing and testing different machine learning models, and possibly deploying these models.

"This involves a lot of data wrangling and debugging, but it's an essential part of the learning process," she says. "I always recommend beginners to start working on their own projects or participate in online competitions like those on Kaggle."

These experiences can give you invaluable insights into the practical challenges of machine learning; they'll also help bulk out your resume and application materials when you begin applying for roles in earnest.

An entry-level machine learning engineer (often titled junior machine learning engineer or machine learning intern) typically fits into the data science or engineering department of an organization. Typical tasks might include assisting in the development of machine learning models, with lots of collaboration with data analysts and data scientists. As with most tech jobs, a solid grasp of soft skills such as communication and empathy is essential for anyone who wants to make a career out of machine learning.

"You might help in building and testing models under the supervision of senior team members, gathering and cleaning data, and learning to interpret and present results," Ouellette says. "Many organizations expect their machine learning teams to stay current with the latest techniques and methodologies, and you may be asked to help with this."

There are many pathways for career advancement as a machine learning engineer, whether one is interested in being a manager or individual contributor.

"After gaining some years of experience and expertise, you can advance to a senior role," Ouellette says. "These engineers usually oversee project management, design systems on a larger scale, and may mentor junior engineers."

These potential roles could include a senior machine learning engineer, lead machine learning engineer or team lead, data scientist, AI specialist, machine learning architect, or research scientist. Given the popularity of machine learning, mastering its fundamentals can open an incredible number of career tracks that increasingly rely on the technology.

"In the role of team lead, you would oversee and lead a team of machine learning engineers," Ouellette explains. "This includes making key decisions on behalf of the team and owning the whole machine learning development process."

In companies that heavily rely on data or A.I., advancing to the executive roles of chief data officer or chief A.I. officer means one is responsible for establishing A.I. and data-related strategies at the highest level. If you find you have a knack for handling clients and translating business problems into data problems, a move into a data science role could be a good fit, Tahir notes.

Data scientists often do a bit of everything, from understanding the business context to data analysis to communicating results in a way that non-technical folks can understand. Soft skills matter more than ever if you're interested in management and want to eventually run your own team.

"In this role, you'd be less hands-on with the code and more involved in strategic decisions, team management, and liaising between your team and the rest of the organization," Tahir says. "If you're deeply interested in the theoretical side of machine learning and want to push the boundaries of what's possible, you might consider going back to school to get a PhD and become a researcher."

It's important to remember these pathways aren't strictly linear, and the beauty of this field is that there's a lot of flexibility to shape one's own career based on personal interests and skills. What do you want machine learning to do for you?

To ensure continuous progression in a career as a machine learning engineer, it's crucial to stay updated with the latest advancements in the field, especially as it evolves at a rapid pace. Tools, languages, and frameworks enjoy frequent iterations and updates; if you ignore them for too long, you'll fall behind.

Maintaining your baseline knowledge involves taking online courses, attending workshops, webinars, or conferences, and regularly reading relevant research papers. "Another key is to constantly work on challenging projects, either at work or in your spare time, that push the boundaries of your current skill set," Tahir says. "This hands-on experience is invaluable and can often expose you to new tools and techniques."

Networking is also essential: joining professional groups, participating in online communities, and attending industry events can help machine learning pros stay connected, learn from peers, and open up new opportunities.

From Tahir's perspective, it's also important to develop soft skills, including communication, teamwork, and problem-solving skills: "These are vital, particularly as you move into more senior or managerial roles. Demonstrating your ability to effectively communicate complex ideas to non-technical team members or stakeholders can significantly boost your career progression."

Ouellette agrees it's critical to know how to communicate with non-technical audiences. "Although machine learning is inherently complex, you'll often need to explain how algorithms and statistical models work to stakeholders or clients who may not have a technical background," he says. "Strong communication skills are a must."

Go here to read the rest:
Machine Learning Engineer Career Path: What You Need to Know - Dice Insights

Read More..

Application of machine learning techniques to the modeling of … – Nature.com


Continued here:
Application of machine learning techniques to the modeling of ... - Nature.com

Read More..

USC at the International Conference on Machine Learning (ICML … – USC Viterbi School of Engineering

USC researchers will present nine papers at the 40th International Conference on Machine Learning (ICML 2023).

This year, USC researchers will showcase nine papers at the 40th International Conference on Machine Learning (ICML 2023), one of the most prestigious machine learning conferences, taking place July 23-29 in Honolulu, Hawaii. ICML brings together the artificial intelligence (AI) community to share new ideas, tools, and datasets, and make connections to advance the field.

Accepted papers with USC affiliation:

Moccasin: Efficient Tensor Rematerialization for Neural Networks (Tue, Jul 25, 17:00, Poster Session 2)

Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, Bistra Dilkina

Abstract: The deployment and training of neural networks on edge computing devices pose many challenges. The low memory nature of edge devices is often one of the biggest limiting factors encountered in the deployment of large neural network models. Tensor rematerialization or recompute is a way to address high memory requirements for neural network training and inference. In this paper we consider the problem of execution time minimization of compute graphs subject to a memory budget. In particular, we develop a new constraint programming formulation called Moccasin with only O(n) integer variables, where n is the number of nodes in the compute graph. This is a significant improvement over the works in the recent literature that propose formulations with O(n^2) Boolean variables. We present numerical studies that show that our approach is up to an order of magnitude faster than recent work, especially for large-scale graphs.
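
The abstract doesn't spell out Moccasin's encoding, which is considerably more refined than anything shown here; the following is only a rough sketch of the general pattern of casting rematerialization as constrained optimization, using Google's OR-Tools CP-SAT solver with made-up tensor sizes and recompute costs:

```python
# Grossly simplified keep-vs-recompute model (NOT the Moccasin formulation):
# each tensor is either held in memory or recomputed on demand; the total
# held size must fit the budget; minimize the added recompute time.
from ortools.sat.python import cp_model

sizes = [40, 25, 60, 10, 30]   # MB occupied if a tensor stays resident (made up)
recompute = [5, 2, 9, 1, 4]    # ms to recompute a tensor instead (made up)
budget = 100                   # MB of device memory

model = cp_model.CpModel()
keep = [model.NewBoolVar(f"keep_{i}") for i in range(len(sizes))]
model.Add(sum(s * k for s, k in zip(sizes, keep)) <= budget)
model.Minimize(sum(c * (1 - k) for c, k in zip(recompute, keep)))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    kept = [i for i, k in enumerate(keep) if solver.Value(k)]
    print("tensors kept resident:", kept)
    print("extra recompute time (ms):", solver.ObjectiveValue())
```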

Refined Regret for Adversarial MDPs with Linear Function Approximation (Tue, Jul 25, 17:00, Poster Session 2)

Yan Dai, Haipeng Luo, Chen-Yu Wei, Julian Zimmert

Abstract: We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over K episodes and the state space can be arbitrarily large. We assume that the Q-function of any policy is linear in some known features, that is, a linear function approximation exists. The best existing regret upper bound for this setting (Luo et al., 2021) is of order Õ(K^{2/3}) (omitting all other dependencies), given access to a simulator. This paper provides two algorithms that improve the regret to Õ(√K) in the same setting. Our first algorithm makes use of a refined analysis of the Follow-the-Regularized-Leader (FTRL) algorithm with the log-barrier regularizer. This analysis allows the loss estimators to be arbitrarily negative and might be of independent interest. Our second algorithm develops a magnitude-reduced loss estimator, further removing the polynomial dependency on the number of actions in the first algorithm and leading to the optimal regret bound (up to logarithmic terms and dependency on the horizon). Moreover, we also extend the first algorithm to simulator-free linear MDPs, which achieves Õ(K^{8/9}) regret and greatly improves over the best existing bound Õ(K^{14/15}). This algorithm relies on a better alternative to the Matrix Geometric Resampling procedure by Neu & Olkhovskaya (2020), which could again be of independent interest.

Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning (Wed, Jul 26, 14:00, Poster Session 3)

Taoan Huang, Aaron Ferber, Yuandong Tian, Bistra Dilkina, Benoit Steiner

Abstract: Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of combinatorial optimization problems. Recently, it has been shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound. However, how to find the right heuristics to maximize the performance of LNS remains an open problem. In this paper, we propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks measured by metrics including the primal gap, the primal integral, survival rates and the best-performing rate. Specifically, CL-LNS collects positive and negative solution samples from an expert heuristic that is slow to compute and learns a new one with a contrastive loss. We use graph attention networks and a richer set of features to further improve its performance.

Fairness in Matching under Uncertainty (Wed, Jul 26, 17:00, Poster Session 4)

Siddartha Devic, David Kempe, Vatsal Sharan, Aleksandra Korolova

Abstract: The prevalence and importance of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings. Algorithmic decisions are used in assigning students to schools, users to advertisers, and applicants to job interviews. These decisions should heed the preferences of individuals, and simultaneously be fair with respect to their merits (synonymous with fit, future performance, or need). Merits conditioned on observable features are always uncertain, a fact that is exacerbated by the widespread use of machine learning algorithms to infer merit from the observables. As our key contribution, we carefully axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits; indeed, it simultaneously recognizes uncertainty as the primary potential cause of unfairness and an approach to address it. We design a linear programming framework to find fair utility-maximizing distributions over allocations, and we show that the linear program is robust to perturbations in the estimated parameters of the uncertain merit distributions, a key property in combining the approach with machine learning techniques.

On Distribution Dependent Sub-Logarithmic Query Time of Learned Indexing (Thu, Jul 27, 13:30, Poster Session 5)

Sepanta Zeighami, Cyrus Shahabi

Abstract: A fundamental problem in data management is to find the elements in an array that match a query. Recently, learned indexes are being extensively used to solve this problem, where they learn a model to predict the location of the items in the array. They are empirically shown to outperform non-learned methods (e.g., B-trees or binary search that answer queries in O(log n) time) by orders of magnitude. However, the success of learned indexes has not been theoretically justified. The only existing attempt shows the same query time of O(log n), but with a constant-factor improvement in space complexity over non-learned methods, under some assumptions on data distribution. In this paper, we significantly strengthen this result, showing that under mild assumptions on data distribution, and the same space complexity as non-learned methods, learned indexes can answer queries in O(log log n) expected query time. We also show that allowing for slightly larger but still near-linear space overhead, a learned index can achieve O(1) expected query time. Our results theoretically prove learned indexes are orders of magnitude faster than non-learned methods, theoretically grounding their empirical success.
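
To make the idea concrete, here is a minimal sketch of the core learned-index mechanism (ours, not the paper's construction): fit a model from keys to positions in a sorted array, then correct its prediction with a search confined to the model's worst-case error window. When the model fits the data well, that window is small and lookups beat a full binary search.

```python
# Minimal learned-index illustration (not the paper's construction): a linear
# model predicts a key's position; a binary search over the model's guaranteed
# error window finishes the lookup.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(0).uniform(0, 1e6, 100_000))
positions = np.arange(len(keys))

slope, intercept = np.polyfit(keys, positions, deg=1)        # the "model"
pred = slope * keys + intercept
max_err = int(np.ceil(np.abs(pred - positions).max())) + 1   # worst-case bound

def lookup(q):
    guess = int(slope * q + intercept)
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err)
    return bisect.bisect_left(keys, q, lo, hi)   # search only the small window

assert lookup(keys[12345]) == 12345
```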

SurCo: Learning Linear SURrogates for COmbinatorial Nonlinear Optimization Problems (Thu, Jul 27, 16:30, Poster Session 6)

Aaron Ferber, Taoan Huang, Daochen Zha, Martin Schubert, Benoit Steiner, Bistra Dilkina, Yuandong Tian

Abstract: Optimization problems with nonlinear cost functions and combinatorial constraints appear in many real-world applications but remain challenging to solve efficiently compared to their linear counterparts. To bridge this gap, we propose SurCo, which learns linear Surrogate costs that can be used in existing Combinatorial solvers to output good solutions to the original nonlinear combinatorial optimization problem. The surrogate costs are learned end-to-end with nonlinear loss by differentiating through the linear surrogate solver, combining the flexibility of gradient-based methods with the structure of linear combinatorial optimization. We propose three variants: SurCo-zero for individual nonlinear problems, SurCo-prior for problem distributions, and SurCo-hybrid to combine both distribution and problem-specific information. We give theoretical intuition motivating SurCo, and evaluate it empirically. Experiments show that SurCo finds better solutions faster than state-of-the-art and domain expert approaches in real-world optimization problems such as embedding table sharding, inverse photonic design, and nonlinear route planning.

Emergent Asymmetry of Precision and Recall for Measuring Fidelity and Diversity of Generative Models in High Dimensions (Thu, Jul 27, 13:30, Poster Session 5)

Mahyar Khayatkhoei, Wael AbdAlmageed

Abstract: Precision and Recall are two prominent metrics of generative performance, which were proposed to separately measure the fidelity and diversity of generative models. Given their central role in comparing and improving generative models, understanding their limitations is crucially important. To that end, in this work, we identify a critical flaw in the common approximation of these metrics using k-nearest-neighbors, namely, that the very interpretations of fidelity and diversity that are assigned to Precision and Recall can fail in high dimensions, resulting in very misleading conclusions. Specifically, we empirically and theoretically show that as the number of dimensions grows, two model distributions with supports at equal point-wise distance from the support of the real distribution, can have vastly different Precision and Recall regardless of their respective distributions, hence an emergent asymmetry in high dimensions. Based on our theoretical insights, we then provide simple yet effective modifications to these metrics to construct symmetric metrics regardless of the number of dimensions. Finally, we provide experiments on real-world datasets to illustrate that the identified flaw is not merely a pathological case, and that our proposed metrics are effective in alleviating its impact.
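
For concreteness, the following is a rough sketch of the common k-nearest-neighbors approximation of Precision and Recall that the paper critiques (not the authors' proposed symmetric metrics), run on synthetic Gaussian data:

```python
# Sketch of the k-NN approximation of generative Precision/Recall that the
# paper critiques (not the authors' fix): a generated sample is "precise" if
# it falls inside the k-NN ball of some real sample, and vice versa for recall.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_radii(X, k=3):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: a point is its own NN
    dist, _ = nn.kneighbors(X)
    return dist[:, -1]                               # distance to k-th neighbour

def precision_recall(real, fake, k=3):
    r_real, r_fake = knn_radii(real, k), knn_radii(fake, k)
    d_fr = np.linalg.norm(fake[:, None] - real[None], axis=-1)  # pairwise dists
    precision = np.mean((d_fr <= r_real[None]).any(axis=1))
    recall = np.mean((d_fr.T <= r_fake[None]).any(axis=1))
    return precision, recall

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1, (500, 8))
fake = rng.normal(0.5, 1, (500, 8))  # a shifted model distribution
print(precision_recall(real, fake))
```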

Conformal Inference is (almost) Free for Neural Networks Trained with Early Stopping (Thu, Jul 27, 13:30, Poster Session 5)

Ziyi Liang, Yanfei Zhou, Matteo Sesia

Abstract: Early stopping based on hold-out data is a popular regularization technique designed to mitigate overfitting and increase the predictive accuracy of neural networks. Models trained with early stopping often provide relatively accurate predictions, but they generally still lack precise statistical guarantees unless they are further calibrated using independent hold-out data. This paper addresses the above limitation with conformalized early stopping: a novel method that combines early stopping with conformal calibration while efficiently recycling the same hold-out data. This leads to models that are both accurate and able to provide exact predictive inferences without multiple data splits nor overly conservative adjustments. Practical implementations are developed for different learning tasks (outlier detection, multi-class classification, regression), and their competitive performance is demonstrated on real data.
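
For background, this is what ordinary split-conformal calibration looks like on a regression task; the paper's contribution, recycling the early-stopping hold-out set for this calibration, is not reproduced in this sketch:

```python
# Standard split-conformal regression intervals (the baseline the paper
# improves on; the early-stopping recycling trick is not shown here).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 2000)

X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# [f(x) - q, f(x) + q] covers y with probability >= 90% under exchangeability.
pred = model.predict(np.array([[0.5]]))[0]
print(f"90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```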

A Critical View of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment (Tue, Jul 25, 17:00, Poster Session 2)

Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Mohamed Hussein, Wael AbdAlmageed

Abstract: Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment: Cross-Domain and Cross-Context, by proposing four datasets that are designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on the combinations of the proposed datasets reveal potential weaknesses of the vision-based long-term dynamics prediction model. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which provides dramatic alleviation of the challenge on the proposed datasets.

Published on July 25th, 2023

Last updated on July 26th, 2023

Go here to see the original:
USC at the International Conference on Machine Learning (ICML ... - USC Viterbi School of Engineering

Read More..

Unlock the Power of AI A Special Release by KDnuggets and … – KDnuggets

Hello,

I hope this email finds you well, coding away and innovating in the dynamic world of Machine Learning.

Today, I am excited to announce a collaboration between Machine Learning Mastery and KDnuggets. Together, we've created something unique to enrich your Machine Learning journey.

I present to you our brand new ebook, "Maximizing Productivity with ChatGPT". While we've been known for our technical, code-heavy books that have guided many through the intricate pathways of Machine Learning, this time we're offering something different but equally impactful.

This ebook shifts the focus from pure coding and technical aspects to understanding, interacting with, and leveraging one of the most advanced AI tools on the market: ChatGPT. This is an evolution from our prior books, aimed at broadening your perspective and deepening your understanding of AI applications.

You'll discover:

In celebration of this launch, we're offering an exclusive 20% early bird discount with the code "20offearlybird" at checkout. But don't delay - this offer ends soon!

Maximizing Productivity with ChatGPT

This ebook is a testament to the fact that not all roads to mastering Machine Learning and AI are paved with code alone. Harnessing the power of AI also involves understanding its applications and learning how to effectively interact with it. "Maximizing Productivity with ChatGPT" offers you exactly that - an avenue to explore and master the usage of AI beyond the traditional coding confines.

If you have any questions, please don't hesitate to hit reply and send me an email directly. Here's to harnessing the power of AI together.

- Jason, Machine Learning Mastery Founder

Read more from the original source:
Unlock the Power of AI A Special Release by KDnuggets and ... - KDnuggets

Read More..

Google reveals how AI and machine learning are shaping its … – ComputerWeekly.com

Google has lifted the lid on how artificial intelligence (AI) and machine learning (ML) are helping consumers and businesses shrink the environmental footprint of their activities by allowing them to make real-time adjustments that can curb their greenhouse gas (GHG) emissions.

Details of its work in this area can be found in the tech giant's most recent annual Environmental report. Covering the 12 months to 31 December 2022, the document provides updates on how the tech giant's efforts to run its datacentres and offices on carbon-free energy (CFE) round-the-clock are progressing and how its bid to reduce the water consumed by its operations is going.

"We achieved approximately 64% round-the-clock CFE across all of our datacentres and offices, [and] this year, we expanded our CFE reporting to include offices and third-party datacentres, in addition to Google-owned and operated datacentres," said the company.

"At the end of 2022, our contracted watershed projects have replenished 271 million gallons of water, equivalent to more than 400 Olympic-sized swimming pools, to support our target to replenish 120% of the freshwater we used."

The report also documents how, seven years after declaring itself an AI-first company, this technology is underpinning the company's own climate change mitigation efforts.

To this point, the company said it was using AI to accelerate the development of climate change-fighting tools that can provide better information to individuals, operational optimisation for organisations, and improved predicting and forecasting.

As an example, the company pointed to the way Google Maps uses AI to help users plan journeys in a more eco-friendly way by minimising the amount of fuel and battery power they use to get from A to B.

"Eco-friendly routing has helped prevent 1.2 million metric tonnes of estimated carbon emissions since launch, equivalent to taking approximately 250,000 fuel-based cars off the road for a year," it reported.

The technology is also proving useful in the companys work to reduce the environmental footprint of its AI models by helping the datacentres in which they are hosted run in a more energy-efficient way.

"We've made significant investments in cleaner cloud computing by making our datacentres some of the most efficient in the world and sourcing more carbon-free energy," it said in the report. "We're helping our customers make real-time decisions to reduce emissions and mitigate climate risks with data and AI."

To reinforce this point, the company cited the roll-out of its Active Assist feature to Google Cloud customers, which uses machine learning to identify unused and potentially wasteful workloads so they can be stopped to save money and cut the organisation's carbon emissions at the same time.
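
Google does not publish Active Assist's internals, but a toy heuristic in the same spirit might flag resources whose utilisation logs show persistent idleness. Everything below (data, VM names, thresholds) is invented for illustration:

```python
# Toy idle-workload detector in the spirit of Active Assist (Google's actual
# internals are not public; names and thresholds here are invented): flag VMs
# whose CPU utilisation stayed low across the whole lookback window.
import pandas as pd

# Hypothetical per-hour utilisation log: columns vm_id, cpu_pct.
log = pd.DataFrame({
    "vm_id":   ["web-1"] * 4 + ["batch-2"] * 4,
    "cpu_pct": [42, 55, 61, 48,   1.2, 0.8, 1.5, 0.9],
})

stats = log.groupby("vm_id")["cpu_pct"].agg(["mean", "max"])
idle = stats[(stats["mean"] < 3) & (stats["max"] < 5)]
print("candidates to stop or rightsize:")
print(idle)
```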

On the flipside, though, the report went on to acknowledge that ramping up the use of AI in this way also increases the amount of work its datacentres are doing, which is giving rise to concerns about the environmental impact and energy consumption habits of its AI workloads.

"With AI at an inflection point, predicting the future growth of energy use and emissions from AI compute in our datacentres is challenging," the report continued.

"Historically, research has shown that as AI/ML compute demand has gone up, the energy needed to power this technology has increased at a much slower rate than many forecasts predicted. We have used tested practices to reduce the carbon footprint of workloads by large margins; together, these principles have reduced the energy of training a model by up to 100x and emissions by up to 1,000x."

The report added: "We plan to continue applying these tested practices and to keep developing new ways to make AI computing more efficient."

View original post here:
Google reveals how AI and machine learning are shaping its ... - ComputerWeekly.com

Read More..

Drones stay on course in difficult conditions thanks to machine … – Professional Engineering

(Credit: MIT News, with figures from iStock)

A new machine-learning based approach can control drones and autonomous vehicles more effectively and efficiently in difficult conditions, according to its developers at the Massachusetts Institute of Technology (MIT) and Stanford University.

The technique, designed for dynamic environments where conditions can change rapidly, could help an autonomous vehicle learn to compensate for slippery road conditions to avoid going into a skid. Other potential applications include allowing a robotic free-flyer to tow different objects in space, or enabling a drone to closely follow a downhill skier despite being buffeted by strong winds.

The researchers' approach incorporates structures from control theory into the process of learning a model. It does this in a way that leads to an effective method of controlling complex dynamics, an MIT announcement said, such as those caused by wind on the trajectory of a flying vehicle. The structures are like hints that can help guide how to control a system, the announcement added.

"The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilising controllers," said assistant professor Navid Azizan from MIT. "By jointly learning the system's dynamics and these unique control-oriented structures from data, we're able to naturally create controllers that function much more effectively in the real world."

The technique immediately extracts an effective controller from the model, the announcement said, as opposed to other machine-learning methods that require a controller to be derived or learned separately with additional steps. With this structure, the researchers' approach is also able to learn an effective controller using less data than other approaches. This could help their learning-based control system achieve better performance, faster, in rapidly changing environments.

"This work tries to strike a balance between identifying structure in your system and just learning a model from data," said lead author Spencer M Richards, a graduate student at Stanford University. "Our approach is inspired by how roboticists use physics to derive simpler models for robots. Physical analysis of these models often yields a useful structure for the purposes of control, one that you might miss if you just tried to naively fit a model to data. Instead, we try to identify similarly useful structure from data that indicates how to implement your control logic."

The researchers found that their method was data-efficient, achieving high performance even with little data. It could reportedly model a highly dynamic rotor-driven vehicle using only 100 data points, for example. Methods that used multiple learned components saw their performance drop much faster with smaller datasets.

This efficiency could make the technique especially useful in situations where a drone or robot needs to learn quickly in rapidly changing conditions.

The general approach could also be applied to many types of dynamical systems, from robotic arms to free-flying spacecraft operating in low-gravity environments.

The work was supported, in part, by the NASA University Leadership Initiative and the Natural Sciences and Engineering Research Council of Canada. The research will be presented at the International Conference on Machine Learning (ICML), running this week at the Hawaii Convention Centre.


Read the original post:
Drones stay on course in difficult conditions thanks to machine ... - Professional Engineering

Read More..

Forum Launched to Promote Safe Development of Large Machine … – Fagen wasanni

OpenAI, Microsoft, Google, and Anthropic have come together to create a forum that aims to support the safe and responsible development of large machine-learning models. This collaboration between top leaders in artificial intelligence is focused on coordinating safety research and establishing best practices for what are known as frontier AI models. These models exceed the capabilities of existing advanced models and have the potential to pose significant risks to public safety.

Generative AI models, such as the one powering chatbots like ChatGPT, have the ability to generate responses in the form of prose, poetry, and images by extrapolating vast amounts of data at high speeds. While these models offer various applications, industry experts and government bodies like the European Union have emphasized the need for appropriate measures to mitigate the risks associated with AI technologies.

In a statement, Microsoft President Brad Smith highlighted the responsibility of companies in ensuring the safety, security, and human control of AI technology. The Frontier Model Forum, the industry body established by the collaboration, will work closely with policymakers, academics, and governments to facilitate information sharing and promote responsible practices. The forum will prioritize the development and sharing of a public library of benchmarks and technical evaluations for frontier AI models.

The Frontier Model Forum plans to establish an advisory board in the near future and secure funding to support its initiatives. The forum will not engage in lobbying activities but will instead focus on advancing AI safety. Anna Makanju, Vice President of Global Affairs at OpenAI, emphasized the urgency of this work and expressed the forum's readiness to make swift progress in this critical area.

The collaboration between OpenAI, Microsoft, Google, and Anthropic demonstrates a collective commitment to ensuring the safe and responsible development of AI models. By coordinating research efforts and sharing best practices, this forum aims to address potential risks associated with the growing capabilities of machine-learning models.

Follow this link:
Forum Launched to Promote Safe Development of Large Machine ... - Fagen wasanni

Read More..