
Apple Car Will Leverage Machine Learning to Make Driving Decisions as Fast as Possible – iDrop News

Apple's Car, when released, will make history as one of the first consumer vehicles to lack a steering wheel, and now we know about even more features the Apple Car will bring.

The Apple Car will utilize machine learning (ML) because current automotive processors are not fast enough to make key driving decisions autonomously without it. Apple was expected to use ML in the vehicle, given that the fruit company's AI chief, John Giannandrea, is now in charge of the project. The goal is for decisions at the wheel to be made as quickly as possible for the consumer's sake; in current automobiles, even a decision about a lane change can tax the processor.

According to the patent, Apple wants to use the technology in certain driving states, such as when the vehicle is traveling on a largely empty, straight highway with no turns possible for several kilometers or miles.

In such states, the patent notes, "the number of actions to be evaluated may be relatively small; in other states, as when the vehicle approaches a crowded intersection, the number of actions may be much larger."

If Apple chooses to use this technology, the Apple Car would have to assess the current state of the environment around the vehicle to make decisions that, in a regular car, would take much longer.

Finally, it will have to work out the set of feasible actions that can be taken. In that sense it works like a human brain: before you do something, a range of outcomes and options plays out in your mind. ML is much the same; just as a human learns through experience over time, a machine needs to learn before it can eventually deliver an optimal experience.

Apple has faced setbacks with the Apple Car project in recent months, including the departure of employees and key executives, but the project is still ongoing, with production expected in 2024 and a full consumer release in 2025.

Thanks for reading! Any questions? Let us know on social media.

Follow this link:
Apple Car Will Leverage Machine Learning to Make Driving Decisions as Fast as Possible - iDrop News


TurbineOne Awarded Air Force Contract to Deploy New Machine Learning Capability to Frontlines – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--TurbineOne, the frontline perception company, was awarded a Small Business Innovation Research (SBIR) contract to advance its machine learning capabilities and deploy its software with the United States Air Force. The specific offices within the Air Force that made the SBIR award to TurbineOne are the Air Force Research Laboratory (AFRL) and AFWERX.

SBIR programs are highly competitive programs that encourage domestic small businesses to engage in Federal Research and Development. Through a competitive awards-based program, SBIRs enable small businesses to explore their technological potential and provide the incentive to profit from its commercialization. By including qualified small businesses in the nation's R&D arena, high-tech innovation is stimulated, and the United States gains entrepreneurial spirit as it meets its specific research and development needs.

TurbineOne and AFSOC have partnered with AFWERX to usher in a new era of unprecedented situational awareness. The specific technology being developed is AutoML, a feature within TurbineOne's Frontline Perception System. It uniquely enables Operators to make changes to machine-learning models in the field without having to code and without an internet connection. These newly tuned, or created, models can be immediately deployed to cameras, sensors, autonomous vehicles, and drones at the tactical edge to strengthen situational awareness, helping to keep warfighters and civilians safe.

"U.S. warfighters do not have machine learning in their deployed kits, but it is critically valuable when configured for military missions," according to Ian Kalin, TurbineOne's CEO. "AutoML is a revolutionary software technology that will salvage years of investments in machine learning by enabling Operators to synchronize real-world data with the training data used to create the original algorithms."

TurbineOne was founded by Ian Kalin and Matt Amacker. Kalin previously served in the U.S. Navy as a Counter Terrorism Officer after witnessing the attack on the Pentagon on September 11th, and he later served as the first Chief Data Officer for the U.S. Department of Commerce. Amacker has been awarded over 110 patents and formerly headed the Applied R&D Lab at Google. He was also a Principal Engineer at Amazon and the Head of Car AI for the Toyota Research Institute. Together, Amacker and Kalin realized that people serving in dangerous frontline environments do not have Machine Learning (ML) capabilities readily available; TurbineOne was created to address this national security challenge.

AFRL and AFWERX have partnered to streamline the Small Business Innovation Research process in an attempt to speed up the experience, broaden the pool of potential applicants and decrease bureaucratic overhead. Beginning in SBIR 18.2, and now in SBIR 21.1, the Air Force has begun offering 'The Open Topic' SBIR/STTR program that is faster, leaner and open to a broader range of innovations. The Press Release authority is the Air Force Special Operations Command Public Affairs (PA) office.

TurbineOnes contract is a Phase-II type, which generally authorizes awards up to $750,000. TurbineOne plans to successfully deliver to its customer and end-users within one year of the contract award.

About TurbineOne

TurbineOne was created to help public sector heroes perform even more effectively with the right technologies. We leverage Machine Learning to provide frontline perception that empowers first-responders and warfighters with greater situational awareness. TurbineOne currently works with the Department of Defense as well as leading commercial companies like Siemens. The company is based in San Francisco. Please visit us at https://www.turbineone.com for more information.


Originally posted here:
TurbineOne Awarded Air Force Contract to Deploy New Machine Learning Capability to Frontlines - Business Wire


Learning to improve chemical reactions with artificial intelligence – EurekAlert

[Image: INL researchers perform experiments using the Temporal Analysis of Products (TAP) reactor.]

Credit: Idaho National Laboratory

If you follow the directions in a cake recipe, you expect to end up with a nice fluffy cake. In Idaho Falls, though, the elevation can affect these results. When baked goods don't turn out as expected, the troubleshooting begins. This happens in chemistry, too. Chemists must be able to account for how subtle changes or additions may affect the outcome, for better or worse.

Chemists make their version of recipes, known as reactions, to create specific materials. These materials are essential ingredients in an array of products found in healthcare, farming, vehicles and other everyday products, from diapers to diesel. When chemists develop new materials, they rely on information from previous experiments and predictions based on prior knowledge of how different starting materials interact with others and behave under specific conditions. There are a lot of assumptions, guesswork and experimentation in designing reactions using traditional methods. New computational methods like machine learning can help scientists better understand complex processes like chemical reactions. While it can be challenging for humans to pick out patterns hidden within the data from many different experiments, computers excel at this task.

Machine learning is an advanced computational tool where programmers give computers lots of data and minimal instructions about how to interpret it. Instead of incorporating human bias into the analysis, the computer is only instructed to pull out what it finds to be important from the data. This could be an image of a cat (if the input is all the photos on the internet) or information about how a chemical reaction proceeds through a series of steps, as is the case for a set of machine learning experiments that are ongoing at Idaho National Laboratory.

At the lab, researchers working with the innovative Temporal Analysis of Products (TAP) reactor system are trying to improve understanding of chemical reactions by studying the role of catalysts, which are components that can be added to a mixture of chemicals to alter the reaction process. Often catalysts speed up the reaction, but they can do other things, too. In baking and brewing, enzymes act as catalysts to speed up fermentation and break down sugars in wheat (glucose) into alcohol and carbon dioxide, which creates the bubbles that make bread rise and beer foam.

In the laboratory, perfecting a new catalyst can be expensive, time-consuming and even dangerous. According to INL researcher Ross Kunz, "Understanding how and why a specific catalyst behaves in a reaction is the holy grail of reaction chemistry." To help find it, scientists are combining machine learning with a wealth of new sensor data from the TAP reactor system.

The TAP reactor system uses an array of microsensors to examine the different components of a reaction in real time. For the simplest catalytic reaction, the system captures 8 unique measurements in each of the 5,000 timepoints that make up the experiment. Assembling the timepoints into a single data set provides 165,000 measurements for one experiment on a very simple catalyst. Scientists then use the data to predict what is happening in the reaction at a specific time and how different reaction steps work together in a larger chemical reaction network. Traditional analysis methods can barely scratch the surface of such a large quantity of data for a simple catalyst, let alone the many more measurements that are produced by a complex one.

Machine learning methods can take the TAP data analysis further. Using a type of machine learning called explainable artificial intelligence, or explainable AI, the team can educate the computer about known properties of the reaction's starting materials and the physics that govern these types of reactions, a process called training. The computer can apply this training and the patterns that it detects in the experimental data to better describe the conditions in a reaction across time. The team hopes that the explainable AI method will produce a description of the reaction that can be used to accurately model the processes that occur during the TAP experiment.

In most AI experiments, a computer is given almost no training on the physics and simply detects patterns in the data based upon what it can identify, similar to how a baby might react to seeing something completely new. By contrast, the value of explainable AI lies in the fact that humans can understand the assumptions and information that lead to the computer's conclusions. This human-level understanding can make it easier for scientists to verify predictions and detect flaws and biases in the reaction description produced by explainable AI.

Implementing explainable AI is not as simple or straightforward as it might sound. With support from the Department of Energy's Advanced Manufacturing Office, the INL team has spent two years preparing the TAP data for machine learning, developing and implementing the machine learning program, and validating the results for a common catalyst in a simple reaction that occurs in the car you drive every day. This reaction, the transformation of carbon monoxide into carbon dioxide, occurs in a car's catalytic converter and relies on platinum as the catalyst. Since this reaction is well studied, researchers can check how well the results of the explainable AI experiments match known observations.

In April 2021, the INL team published their results validating the explainable AI method with the platinum catalyst in the article "Data driven reaction mechanism estimation via transient kinetics and machine learning" in Chemical Engineering Journal. Now that the team has validated the approach, they are examining TAP data from more complex industrial catalysts used in the manufacture of small molecules like ethylene, propylene and ammonia. They are also working with collaborators at the Georgia Institute of Technology to apply the mathematical models that result from the machine learning experiments to computer simulations called digital twins. This type of simulation allows the scientists to predict what will happen if they change an aspect of the reaction. When a digital twin is based on a very accurate model of a reaction, researchers can be confident in its predictions.

By giving the digital twin the task of simulating a modification to a reaction or a new type of catalyst, researchers can avoid doing physical experiments for modifications that are likely to lead to poor results or unsafe conditions. Instead, the digital twin simulation can save time and money by testing thousands of conditions, while researchers test only a handful of the most promising conditions in the physical laboratory.

Plus, this machine learning approach can produce newer and more accurate models for each new catalyst and reaction condition tested with the TAP reactor system. In turn, applying these models to digital twin simulations gives researchers the predictive power to pick the best catalysts and conditions to test next in the TAP reactor. As a result, each round of testing, model development and simulation produces a greater understanding of how a reaction works and how to improve it.

"These tools are the foundation of a new paradigm in catalyst science but also pave the way for radical new approaches in chemical manufacturing," said Rebecca Fushimi, who leads the project team.

About Idaho National Laboratory: Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.

Journal reference: "Data driven reaction mechanism estimation via transient kinetics and machine learning," Chemical Engineering Journal, 18 April 2021.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Read the original here:
Learning to improve chemical reactions with artificial intelligence - EurekAlert


Machine learning used to make fruits and vegetables more delicious – hortidaily.com

According to some, produce sold in the grocery store often tastes like cardboard. For those who agree, there are several reasons for this. Most of them stem from the fact that tastiness is far down on the list of what the food industry encourages plant breeders to prioritize when developing new produce varieties. And even when breeders do want to focus on taste, they don't have good tools for quickly sampling the fruit from thousands of cultivars.

Now, in a surprising new paper, researchers at the University of Florida describe a new method for "tasting" produce based on its chemical profile. They also stumbled on a big surprise. For more than a century, breeders have focused on sweetness and sourness when they tried to develop tastier cultivars. The new research shows that the tried-and-true approach ignores roughly half of what makes a tasty fruit or veggie so delicious.

Agricultural scientist Patricio Muñoz, one of the paper's co-authors, has stated that his team determined that in blueberries, for example, "only 40 percent [of how well people like a fruit] is explained by sugar and acid. The rest is explained by chemicals called volatile organic compounds that we perceive with receptors in our noses, not our mouths."

That finding could change the future of agriculture. The researchers behind this study focused on dozens of varieties of tomatoes and blueberries, including commercial cultivars sold in supermarkets, heirloom varieties more likely to be found at farmers' markets and farm-to-table restaurants, and newly developed strains that recently graduated from breeding programs.

Source: interestingengineering.com

Photo source: Dreamstime.com

View post:
Machine learning used to make fruits and vegetables more delicious - hortidaily.com


How Telecom Companies Can Leverage Machine Learning To Boost Their Profits – Forbes

[Image: AI in telecom]

The number of smartphone users across the world has skyrocketed over the last decade and promises to continue doing so. Additionally, most business functions can now be executed on mobile devices. However, despite the mobile surge, telecom operators around the world are still not that profitable, with average net profit margins hovering around the 17% mark. The main reasons for the middling profit rates are the high number of market rivals vying for the same customer base and the high overhead expenses associated with the sector. Communication Service Providers (CSPs) need to become more data-driven to reduce such costs and, in turn, improve their profit margins. Increasing the involvement of AI in telecom operations enables telecom companies to make this switch from rigid, infrastructure-driven operations to a data-driven approach seamlessly.

The inclusion of AI in telecom functional areas positively impacts the bottom line of CSPs in several ways. Businesses can use specific capabilities, avatars or applications of machine learning and AI for this purpose.

Mobile networks are one of the prime components of the ever-expanding internet community. As stated earlier, a large number of internet users and business operations have gone mobile in recent times. Additionally, the emergence of 5G and edge applications, and the impending arrival of the metaverse, will simply increase the need for high-performance telecom networks. It is very likely that the standard automation tech and personnel will be overwhelmed by the relentless pressure of high-speed network connectivity and mobile calls.

The use of AI in telecom operations can transform an underperforming mobile network into a self-optimizing network (SON). Telecom businesses can monitor network equipment and anticipate equipment failure with AI-powered predictive analysis. Additionally, AI-based tools allow CSPs to keep network quality consistently high by monitoring key performance indicators such as traffic on a zone-to-zone basis. Apart from monitoring the performance of equipment, machine learning algorithms can also continually run pattern recognition while scanning network data to detect anomalies. Then, AI-based systems can either perform remedial actions or notify the network administrator and engineers in the region where the anomaly was detected. This enables telecom companies to fix network issues at source before they adversely impact customers.
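
As a rough illustration of the kind of KPI monitoring and anomaly detection described above, the sketch below applies an unsupervised model (scikit-learn's IsolationForest) to synthetic per-zone network readings. The feature names, values and threshold are assumptions for illustration only, not drawn from any CSP's actual system.

```python
# Illustrative sketch only: flagging anomalous network KPI readings with an
# unsupervised model. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-zone KPI snapshots: [traffic_gbps, dropped_call_rate, latency_ms]
normal = rng.normal(loc=[50.0, 0.01, 30.0], scale=[10.0, 0.005, 5.0], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one typical, one with a traffic spike and heavy call drops
new_obs = np.array([[52.0, 0.012, 31.0],
                    [140.0, 0.09, 75.0]])
flags = model.predict(new_obs)   # +1 = normal, -1 = anomaly
for obs, flag in zip(new_obs, flags):
    status = "anomaly - notify engineers" if flag == -1 else "normal"
    print(obs, status)
```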

Network security is another area of focus for telecom operators. Of late, the rising security issues in telecom networks have been a point of concern for CSPs globally. AI-based data security tools allow telecom companies to constantly monitor the cyber health of their networks. Machine learning algorithms perform analysis of global data networks and past security incidents to make key predictions of existing network vulnerabilities. In other words, AI-based network security tools enable telecom businesses to pre-empt future security complications and proactively take preventive measures to deal with them.

Ultimately, AI improves telecom networks in multiple ways. By improving the performance, anomaly detection and security of CSP networks, machine learning algorithms can enhance the user experience for telecom company clients. This will result in growth of such companies' customer base in the long term, and, by extension, an increase in profits.


Europol classifies the telecom sector as particularly vulnerable to fraud. Telecom fraud involves the abuse of telecommunications systems such as mobile phones and tablets by criminals to siphon money off CSPs. As per a recent study, telecom fraud accounted for losses of US$40.1 billion, approximately 1.88% of the total revenue of telecom operators. One of the common types of telecom fraud is International Revenue Sharing Fraud (IRSF). IRSF involves criminals linking up with International Premium Rate Number (IPRN) providers to illegally acquire money from telecom companies by using bots to make an absurdly high number of international calls of long duration. Such calls are difficult to trace. Additionally, telecom companies cannot bill clients for such premium calls as the connections are fraudulent. So, telecom operators end up bearing the losses for such calls. The IPRNs and criminals share the spoils between themselves. Apart from IRSF, vishing (a portmanteau of voice calls and phishing attacks) is a way in which malicious entities dupe clients of telecom companies to extract money and data. The involvement of AI in telecom operations enables CSPs to detect and eliminate these kinds of fraud.

Machine learning algorithms assist telecom network engineers with detecting instances of illegal access, fake caller profiles and cloning. To achieve this, the algorithms perform behavioral monitoring of the global telecom networks of CSPs. Accordingly, the network traffic along such networks is closely monitored. The pattern recognition capabilities of AI algorithms come into play again as they enable network administrators to identify contentious scenarios such as several calls being made from a fraudulent number, or blank calls (a general indicator of vishing) being repeatedly made from questionable sources. One of the more prominent examples of telecom companies using data analytics for fraud detection and prevention is Vodafone's partnership with Argyle Data. The data science-based firm analyzes the network traffic of the telecom giant for intelligent, data-driven fraud management.

Detecting and eliminating telecom fraud are major steps towards increasing the profit margins of CSPs. As you can see, the role of AI in telecom operations is significant for achieving this objective.

To reliably serve millions of clients, telecom companies need to have a massive workforce that can handle their backend operations on a daily basis efficiently. Dealing with such a large volume of customers creates several opportunities for human error.

Telecom companies can employ cognitive computing, a field that combines Natural Language Processing (NLP), Robotic Process Automation (RPA) and rule engines, to automate rule-based processes such as sending marketing emails, autocompleting e-forms, recording data and carrying out certain tasks that replicate human actions. The use of AI in telecom operations brings greater accuracy to back-office operations. As per a study conducted by Deloitte, several executives in the telecom, media and tech industry felt that the use of cognitive computing for backend operations brought substantial and transformative benefits to their respective businesses.

Customer sentiment analysis involves a set of data classification and analysis tasks carried out to understand the pulse of customers. This allows telecom companies to evaluate whether their clients like or dislike their services based on raw emotions. Marketers can use NLP and AI to sense the "mood" of their customers from their texts, emails or social media posts bearing a telecom company's name. Aspect-based sentiment analytics highlight the exact service areas in which customers have problems. For example, if a customer is upset about the number of calls getting dropped regularly and writes a long and incoherent email to a telco's customer service team about it, the machine learning algorithms employed for sentiment analysis can still autonomously ascertain their mood (angry) and the problem (the call drop rate).
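
A toy sketch of this idea is shown below: a general-purpose sentiment scorer (NLTK's VADER) applied to a hypothetical customer email, plus a crude keyword lexicon standing in for aspect detection. The lexicon and threshold are assumptions; a production system would be trained on telecom-specific data.

```python
# Toy sketch of sentiment plus crude aspect detection on a hypothetical email.
# The VADER lexicon and keyword map are illustrative stand-ins only.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

email = ("I am beyond frustrated. My calls keep dropping every few minutes "
         "and nobody has answered my last three complaints.")

scores = sia.polarity_scores(email)          # e.g. {'neg': ..., 'compound': ...}
mood = "angry" if scores["compound"] < -0.3 else "not angry"

# Hypothetical aspect lexicon mapping keywords to service areas
aspects = {"call drop rate": ["dropping", "dropped"],
           "customer service": ["complaint", "nobody has answered"]}
problems = [area for area, keys in aspects.items()
            if any(k in email.lower() for k in keys)]

print(mood, problems)   # expected: angry ['call drop rate', 'customer service']
```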

Apart from sentiment analysis, telecom businesses can hugely benefit from the growing emergence of chatbots and virtual assistants. Service requests for network set-ups, installation, troubleshooting and maintenance-based issues can be resolved through such machine learning-based tools and applications. Virtual assistants enable CRM teams in telecom companies to manage a large number of customers with ease. In this way, CSPs can manage customer service and sentiment analysis successfully.

Across the board, users generally rate the quality of telecom customer service as below satisfactory. Telecom users are constantly infuriated by long waiting times to reach a service executive, unanswered complaint emails and poor grievance handling by CSPs. Poor CRM does not bode well for telecom companies, as it maligns their reputation and diminishes shareholder confidence. By implementing machine learning for CRM, telecom companies can address such issues efficiently.

Like businesses in any other sector, telecom companies need to boost their profits for long-term survival and diversification. As stated at the beginning, there are multiple factors that thwart their chances of profit generation. Going down the data science route is one of the novel ways to overcome such challenges. By involving AI in telecom operations, CSPs can manage their data wisely and channel their resources towards maximizing revenues.

Despite the positives associated with AI, only a limited percentage of telecom businesses have incorporated the technology for profit maximization. Gradually, one can expect that percentage to rise.

More:
How Telecom Companies Can Leverage Machine Learning To Boost Their Profits - Forbes


We dont need boots on the ground to track Russias moves on Ukraine – Popular Science

Craig Nazareth is an assistant professor of practice in Intelligence & Information Operations at the University of Arizona. This story was originally published on The Conversation.

The US has been warning for weeks about the possibility of Russia invading Ukraine, and threatening retaliation if it does. Just eight years after Russia's incursion into eastern Ukraine and invasion of Crimea, Russian forces are once again mobilizing along Ukraine's borders.

As the US and other NATO member governments monitor Russias activities and determine appropriate policy responses, the timely intelligence they rely on no longer comes solely from multimillion-dollar spy satellites and spies on the ground.

Social media, big data, smartphones and low-cost satellites have taken center stage, and scraping Twitter has become as important as anything else in the intelligence analyst's toolkit. These technologies have also allowed news organizations and armchair sleuths to follow the action and contribute analysis.

Governments still carry out sensitive intelligence-gathering operations with the help of extensive resources like the US intelligence budget. But massive amounts of valuable information are publicly available, and not all of it is collected by governments. Satellites and drones are much cheaper than they were even a decade ago, allowing private companies to operate them, and nearly everyone has a smartphone with advanced photo and video capabilities.

As an intelligence and information operations scholar, I study how technology is producing massive amounts of intelligence data and helping sift out the valuable information.

Through information captured by commercial companies and individuals, the realities of Russia's military posturing are accessible to anyone via internet search or news feed. Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia's military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on their way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.

This democratization of intelligence collection in most cases is a boon for intelligence professionals. Government analysts are filling the need for intelligence assessments using information sourced from across the internet instead of primarily relying on classified systems or expensive sensors high in the sky or arrayed on the planet.

However, sifting through terabytes of publicly available data for relevant information is difficult. Knowing that much of the data could be intentionally manipulated to deceive complicates the task.

Enter the practice of open-source intelligence. The U.S. director of national intelligence defines Open-Source Intelligence, or OSINT, as the collection, evaluation and analysis of publicly available information. The information sources include news reports, social media posts, YouTube videos and satellite imagery from commercial satellite operators.

OSINT communities and government agencies have developed best practices for OSINT, and there are numerous free tools. Analysts can use the tools to develop network charts of, for example, criminal organizations by scouring publicly available financial records for criminal activity.

Private investigators are using OSINT methods to support law enforcement, corporate and government needs. Armchair sleuths have used OSINT to expose corruption and criminal activity to authorities. In short, the majority of intelligence needs can be met through OSINT.

Even with OSINT best practices and tools, OSINT contributes to the information overload intelligence analysts have to contend with. The intelligence analyst is typically in a reactive mode trying to make sense of a constant stream of ambiguous raw data and information.

Machine learning, a set of techniques that allows computers to identify patterns in large amounts of data, is proving invaluable for processing OSINT information, particularly photos and videos. Computers are much faster at sifting through large datasets, so adopting machine learning tools and techniques to optimize the OSINT process is a necessity.

Identifying patterns makes it possible for computers to evaluate information for deception and credibility and predict future trends. For example, machine learning can be used to help determine whether information was produced by a human or by a bot or other computer program and whether a piece of data is authentic or fraudulent.
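
A toy sketch of this kind of pattern recognition is shown below: a small text classifier separating bot-like from human-written posts using TF-IDF features and logistic regression. The training examples are fabricated for illustration; a real OSINT pipeline would rely on large labeled corpora and richer signals such as posting cadence and account metadata.

```python
# Toy sketch: a classifier separating bot-like from human-written posts.
# The training data below is fabricated purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "BREAKING!!! troops massing NOW share share share",
    "Click here for the TRUTH about the border #wakeup #wakeup #wakeup",
    "Saw a long column of trucks on the highway near town this morning.",
    "Train station was full of soldiers today, first time I've seen that.",
]
labels = ["bot", "bot", "human", "human"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["share share share the TRUTH NOW #wakeup"]))
```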

And while machine learning is by no means a crystal ball, it can be used, if it is trained with the right data and has enough current information, to assess the probabilities of certain outcomes. No one is going to be able to use the combination of OSINT and machine learning to read Russian President Vladimir Putin's mind, but the tools could help analysts assess how, for example, a Russian invasion of Ukraine might play out.

Technology has produced a flood of intelligence data, but technology is also making it easier to extract meaningful information from the data to help human intelligence analysts put together the big picture.

More:
We dont need boots on the ground to track Russias moves on Ukraine - Popular Science


Accurate and rapid prediction of tuberculosis drug resistance from genome sequence data using traditional machine learning algorithms and CNN |…

Data collection

To prepare the training data and labels, we downloaded the whole-genome sequencing (WGS) data for 10,575 MTB isolates from the sequence read archive (SRA) database [17] and obtained corresponding lineage and phenotypic drug susceptibility test (DST) data from the CRyPTIC Consortium and the 100,000 Genomes Project in an Excel file, which is also available in the supplementary material of their publication [15]. The phenotypic DST results for the drugs were used as labels when training and evaluating our ML models. All the data were collected and shared by the CRyPTIC Consortium and the 100,000 Genomes Project [15]. Like the datasets used by previous studies, this dataset is imbalanced in that most isolates are susceptible, and the minority of them are resistant for all four first-line drugs (Fig. 1) and four second-line drugs. The numbers of isolate samples with phenotypic DST results available are 7138, 7137, 6347 and 7081 for EMB, INH, PZA and RIF, respectively. There are 6291 shared isolates among the four sample sets. In addition, 6820 out of the 10,575 isolates have phenotypic DST results available for each of the four second-line drugs.

Phenotypic overview of the MTB isolates. This bar chart shows numbers of susceptible and resistant isolates with DST results available for each of the four first-line drugs.

To detect the potential genetic features that could contribute to MTB drug resistance classification, we used a command-line tool called ARIBA [18]. ARIBA is a very rapid, flexible and accurate AMR genotyping tool that generates detailed and customizable outputs from which we extracted genetic features. First, we downloaded all reference data from CARD, which included not only references from different MTB strains but also from other bacteria (e.g., Staphylococcus aureus). Secondly, we clustered reference sequences based on their similarity. Then we used this collection of reference clusters as our pan-genome reference and aligned read pairs of an isolate to them. For each cluster that had reads mapped, we ran local assemblies, found the closest reference, and identified variants. After running these steps, ARIBA generated files including a summary file for alignment quality, a report file containing information of detected variants and AMR-associated genes, and a read depth file. For each cluster, the read depth file provides counts of the four DNA bases on each locus of the closest reference where reads were mapped.

Next, we filtered out low-quality mappings that did not pass the match criteria defined in ARIBA's GitHub wiki [18]. From these high-quality mappings, we collected novel variants in coding regions, well-studied resistance-causing variants and AMR-associated gene presences that were detected in at least one of the 10,575 isolates as 263 genetic features. In addition, we included indicator variables for each of the 19 lineages in our feature vector, resulting in a total of 282 features.

We applied two traditional ML algorithms, RF and LR, on the sample sets labeled with phenotypic DST results (see Data collection section) to train MTB AMR classifiers for the eight drugs (first-line and second-line), where the feature vector for each sample consists of the 282 features mentioned in Genetic feature extraction section.

RF is an ensemble method made up of tens or hundreds of estimators (decision trees) to reduce overfitting [19,20]. A final prediction is an average or majority vote of the predictions of all trees. It is often used when there are large training datasets and a large number of input features. Moreover, RF is good at dealing with imbalanced data by using class weighting. Here we trained each RF classifier with 1000 estimators.

LR is a popular regression technique for modeling a binary dependent variable [21]. By using a sigmoid function (logit), linear regression is transformed into logistic regression so that the prediction range is [0, 1] for outputting probabilities. Then, the LR model is fitted using maximum likelihood estimation. During the training process, we applied L1 regularization to the LR models for feature selection and to prevent overfitting [22].
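
A minimal sketch of how the two classifiers described above might be configured with scikit-learn is shown below, using the hyperparameters mentioned in the text (1000 estimators and class weighting for RF, L1 regularization for LR). The synthetic data simply stands in for the 282-feature vectors and phenotypic DST labels; other settings are illustrative defaults rather than the authors' exact configuration.

```python
# Sketch only: the two traditional ML models described above, fit on synthetic
# data standing in for the 282 genetic/lineage features and binary DST labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced toy data: ~20% "resistant", 282 features, mimicking the dataset shape
X, y = make_classification(n_samples=1000, n_features=282, n_informative=40,
                           weights=[0.8, 0.2], random_state=0)

# RF with 1000 trees; class weighting helps with the susceptible/resistant imbalance
rf = RandomForestClassifier(n_estimators=1000, class_weight="balanced",
                            n_jobs=-1, random_state=0)

# LR with L1 regularization for feature selection and to limit overfitting
lr = LogisticRegression(penalty="l1", solver="liblinear",
                        class_weight="balanced", max_iter=1000)

for name, model in [("RF", rf), ("LR", lr)]:
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean cross-validated F1 = {f1:.3f}")
```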

CNN is a class of deep neural networks that takes multi-dimensional data as input [23]. When we say CNN, we generally refer to a 2-dimensional CNN, which is often used for image classification. However, there are two other types of CNN used in practice: 1-dimensional and 3-dimensional CNNs. Conv1D is generally used for time-series data, where the kernel moves along one dimension and the input and output data are 2-dimensional. Conv2D and Conv3D kernels move along two dimensions and three dimensions, respectively.

Because deep learning algorithms require substantial computational power, we performed feature selection to keep only relevant features as input for the deep learning algorithms. First, we randomly selected 80 percent of samples to calculate the importance of each feature by using the scikit-learn RF feature importance function, which averages the impurity decrease from each feature across the trees to determine the final importance of each variable [24]. Then, we tuned the feature importance cutoff to find the one that maximizes the F1-score of an RF model trained on the remaining 20 percent of samples. For each of the eight drugs, features were selected when their feature importance scores were bigger than the optimal cutoff. The tuning processes for first-line drugs are visualized in Fig. 2.

Feature importance cutoff tuning. For the four first-line drugs, as the cutoff increases, the F1-score quickly rises to its maximum and then declines. The cutoffs that maximized the F1-scores are 0.0004 (EMB), 0.0006 (INH), 0.0008 (PZA) and 0.0016 (RIF).
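
The cutoff-tuning procedure can be sketched roughly as follows, again on synthetic data: feature importances are computed from an RF fit on 80% of samples, and candidate cutoffs are scored by the cross-validated F1 of an RF trained on the remaining 20% restricted to the selected features. The exact evaluation protocol on the held-out split is an assumption where the text leaves the details open.

```python
# Sketch of the feature-importance cutoff tuning described above, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=282, n_informative=40,
                           weights=[0.8, 0.2], random_state=0)

# 80% of samples are used only to rank features by RF impurity-based importance
X_imp, X_tune, y_imp, y_tune = train_test_split(X, y, test_size=0.2,
                                                stratify=y, random_state=0)
importances = RandomForestClassifier(n_estimators=300, random_state=0)\
    .fit(X_imp, y_imp).feature_importances_

# Sweep cutoffs; keep the one maximizing F1 on the remaining 20% of samples
best_f1, best_cutoff = -1.0, None
for cutoff in np.linspace(0.0002, 0.01, 25):
    keep = importances > cutoff
    if not keep.any():
        break
    f1 = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                         X_tune[:, keep], y_tune, cv=3, scoring="f1").mean()
    if f1 > best_f1:
        best_f1, best_cutoff = f1, cutoff

print(f"best cutoff {best_cutoff:.4f} keeps {(importances > best_cutoff).sum()} "
      f"features (F1 = {best_f1:.3f})")
```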

After the relevant features were selected, we designed and built a multi-input CNN architecture with TensorFlow Keras [25] that took N inputs of 4 × 21 matrices representing N selected SNP features into the first layer. Each 4 × 21 matrix consists of normalized DNA base counts for each locus within a 21-base reference sequence window centered on the focal SNP (Fig. 3). We generated normalized counts based on the raw base counts extracted from the read depth file mentioned in the Genetic feature extraction section. Our convolutional architecture starts with two 1D convolutional layers followed by a flattening layer for each SNP input. Then, it concatenates the N flattening layers with the inputs of AMR-associated gene presence and lineage features. Finally, we added three fully connected layers to complete the deep neural network architecture (Fig. 4). It smoothly integrates sequential and non-sequential features.

Conversion of raw base counts at each locus of a 21-base reference window into normalized base counts as Conv1D input of each selected SNP feature. The raw base counts were derived from reference-reads alignment, as shown on the left of this figure. The center of the window is the locus of a selected SNP feature. The normalized base counts at each locus are the percentage of the four DNA bases (ACGT), respectively.

Flowchart of our 1D CNN architecture.
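
A compact sketch of a multi-input Conv1D architecture of the general shape described, written with TensorFlow Keras, is given below. The number of SNP inputs, layer widths and kernel sizes are illustrative placeholders, not the published architecture; each SNP branch takes a 21-locus window of normalized base frequencies, and the branches are concatenated with the gene-presence and lineage features before the fully connected layers.

```python
# Minimal sketch of a multi-input 1D-CNN of the general shape described above.
# Layer widths, kernel sizes and N are illustrative, not the published settings.
from tensorflow.keras import layers, Model

N_SNPS = 8    # number of selected SNP features (hypothetical)
N_OTHER = 12  # gene-presence + lineage indicator features (hypothetical)

snp_inputs, snp_branches = [], []
for i in range(N_SNPS):
    inp = layers.Input(shape=(21, 4), name=f"snp_{i}")  # 21-base window x 4 base frequencies
    x = layers.Conv1D(16, kernel_size=3, activation="relu")(inp)
    x = layers.Conv1D(32, kernel_size=3, activation="relu")(x)
    x = layers.Flatten()(x)
    snp_inputs.append(inp)
    snp_branches.append(x)

other_in = layers.Input(shape=(N_OTHER,), name="gene_presence_and_lineage")
x = layers.Concatenate()(snp_branches + [other_in])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid", name="resistant_vs_susceptible")(x)

model = Model(inputs=snp_inputs + [other_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```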

Read more:
Accurate and rapid prediction of tuberculosis drug resistance from genome sequence data using traditional machine learning algorithms and CNN |...


Filings buzz in the mining industry: 16% increase in big data mentions in Q3 of 2021 – Mining Technology

Mentions of big data within the filings of companies in the mining industry rose 16% between the second and third quarters of 2021.

In total, the frequency of sentences related to big data between October 2020 and September 2021 was 187% higher than in 2016 when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When companies in the mining industry publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Big data is one of these topics; companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether big data is featuring more in the summaries and strategies of companies in the mining industry, two measures were calculated. Firstly, we looked at the percentage of companies that have mentioned big data at least once in filings during the past twelve months; this was 51%, compared to 23% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to big data.
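
The two measures can be illustrated with a toy calculation over fabricated filing excerpts; GlobalData's actual corpus and methodology are, of course, far larger and proprietary.

```python
# Toy sketch of the two measures described above, computed over fabricated
# filing excerpts. Company names and text are invented for illustration.
import re

filings = {
    "Company A": "We invested in big data platforms. Revenue grew 4%.",
    "Company B": "Safety remains our priority. Costs fell this quarter.",
    "Company C": "Big data and data analytics guide our maintenance planning.",
}

def sentences(text):
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

mentioning = [c for c, t in filings.items() if "big data" in t.lower()]
all_sents = [s for t in filings.values() for s in sentences(t)]
bd_sents = [s for s in all_sents if "big data" in s.lower()]

print(f"{len(mentioning)}/{len(filings)} companies mention big data")
print(f"{len(bd_sents)}/{len(all_sents)} sentences mention big data")
```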

Of the 50 biggest employers in the mining industry, Metalurgica Gerdau was the company that referred to big data the most between October 2020 and September 2021. GlobalData identified seven big data-related sentences in the Brazil-based company's filings, equivalent to 0.7% of all sentences. Caterpillar mentioned big data the second most; the issue was referred to in 0.3% of sentences in the company's filings. Other top employers with high big data mentions included KGHM Polska Miedz, Honeywell International and MMC Norilsk Nickel.

Across all companies in the mining industry, the filing published in the third quarter of 2021 that exhibited the greatest focus on big data came from Metalurgica Gerdau. Of the document's 1,030 sentences, seven (0.7%) referred to big data.

This analysis provides an approximate indication of which companies are focusing on big data and how important the issue is considered within the mining industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning big data more regularly is not necessarily proof that they are utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into big data have been successes or failures.

GlobalData also categorises big data mentions by a series of subthemes. Of these subthemes, the most commonly referred to topic in the third quarter of 2021 was "data analytics", which made up 60% of all big data subtheme mentions by companies in the mining industry.


See the original post here:

Filings buzz in the mining industry: 16% increase in big data mentions in Q3 of 2021 - Mining Technology


Exploration and Evaluation of Deep-Sea Mining Sites – Eos

Source: Journal of Geophysical Research: Solid Earth

The seafloor near a mid-ocean ridge is often home to rising hydrothermal fluids from the deep crust that deposit minerals on the ocean bottom. These seafloor massive sulfide deposits offer new sources of copper, zinc, lead, gold, and silver. The ore potential led to the European Union's initiation of the Blue Mining project in 2014 with the goal of turning seafloor mining into a viable industry.

Two recent and related studies sought to optimize the detection and exploration of seafloor massive sulfide deposits. Both studies focused on the Trans-Atlantic Geotraverse (TAG) active mound at 26N on the Mid-Atlantic Ridge. The studies addressed data collection at difficult-to-access seafloor sites and assessed their resource potential.

The first study, by Szitkar et al., used an autonomous underwater vehicle (AUV) to collect deep-sea passive electrical and magnetic data at active and inactive hydrothermal vent fields. The researchers attached a passive electrical sensor to the Abyss, a German AUV, which measured the disturbance of the ambient electric field caused by the chemical reactions occurring between the hydrothermal vents and the surrounding seawater. The electrical measurements provided an additional line of evidence for inherently risky research.

The study found a systematic correlation between deep-sea electrical and magnetic data; it yielded immediate geologic identification of both active and inactive hydrothermal vents. In addition, the approach eliminated the need to visually confirm the hydrothermal mounds.

The second study, by Galley et al., also used magnetic data but combined them with gravity measurements. Minimum-structure and surface geometry inversion were used to construct 3D models of the TAG active mound. These inversion methods find models that can reproduce the measured data.

The results from the modeling yielded a new geologic profile of the active hydrothermal mound. The models revealed the outer extent of the hydrothermally altered basalt rock below the vent and determined the thickness of the deposit. In addition, the models charted the movement of rising hydrothermal fluid and where it mixed with seawater. The study estimated the tonnage of the TAG active mound to be 2.17 ± 0.44 megatons, which agrees with past estimates.

The two studies are a significant step forward in identifying and characterizing active and inactive hydrothermal mounds on the seafloor. The findings move seafloor mining toward cost-effective exploration and assessment of currently undeveloped mineral resources, with a focus on exploiting the hydrothermally inactive deposit to minimize negative environmental impacts. (Journal of Geophysical Research: Solid Earth, https://doi.org/10.1029/2021JB022082 and https://doi.org/10.1029/2021JB022228, 2021)

Aaron Sidder, Science Writer

Excerpt from:

Exploration and Evaluation of Deep-Sea Mining Sites - Eos


James Beer Joins Hut 8 Mining as SVP of Operations – Yahoo Finance

Mr. Beer brings two decades of experience with Canadian technology leaders to Hut 8 to support the company's growth objectives

TORONTO, Feb. 15, 2022 /PRNewswire/ - Hut 8 Mining Corp. (Nasdaq: HUT) (TSX: HUT) ("Hut 8" or "the Company"), one of North America's largest, innovation-focused digital asset mining pioneers, supporting open and decentralized systems since 2018, is pleased to announce the appointment of James Beer to the new role of Senior Vice President, Operations, effective February 22, 2022. Mr. Beer joins Hut 8's growing and diverse leadership team under the direction of CEO Jaime Leverton to support scaling and expansion of the Company's diversified operations.

[Image: James Beer (CNW Group/Hut 8 Mining Corp)]

Mr. Beer brings more than 20 years of leadership experience within service provider organizations serving mission critical facilities operations, colocation, site design and construction, network architecture, security, and managed services.

"We are delighted to bring James on board Hut 8's executive leadership team to help create incremental value as we continue to scale and diversify," said Jaime Leverton, Chief Executive Officer of the Company. "James brings with him an expertise and track record aligned to our vision for Hut 8 and will be instrumental in driving growth and innovation as he oversees our data center operations."

Over the last two decades, James has participated in multiple merger and acquisition transactions focused on integration, building growth engines, and value creation.

"I look forward to being a part of Hut 8 and to building on the Company's incredible innovation and momentum within the traditional data center and managed services realms," said Mr. Beer. "As industry becomes increasingly digitized, it's exciting to be working at the cutting-edge of high performance computing technologies being developed for the next generation."

About the Company:


Hut 8 is one of North America's largest innovation-focused digital asset miners, led by a team of business-building technologists, bullish on bitcoin, blockchain, web 3.0 and bridging the nascent and traditional high performance computing worlds. With two digital asset mining sites located in Southern Alberta and a third site in North Bay, Ontario, all located in Canada, Hut 8 has one of the highest capacity rates in the industry and one of the highest inventories of self-mined Bitcoin of any crypto miner or publicly-traded company globally. With 36,000 square feet of geo-diverse data center space and cloud capacity connected to electrical grids powered by significant renewables and emission-free resources, Hut 8 is revolutionizing conventional assets to create the first hybrid data center model that serves both the traditional high performance compute (web 2.0) and nascent digital asset computing sectors, blockchain gaming, and web 3.0. Hut 8 was the first Canadian digital asset miner to list on the Nasdaq Global Select composite index and the first blockchain company to be added to the S&P/TSX Composite Index in 2021. Hut 8's team of business building technologists are believers in decentralized systems, stewards of powerful industry-leading solutions, and drivers of innovation in digital asset mining and high-performance computing, with a focus on ESG alignment. Through innovation, imagination, and passion, Hut 8 is helping to define the digital asset revolution to create value and positive impacts for its shareholders and generations to come.

Cautionary Note Regarding Forward-Looking Information: This press release includes "forward-looking information" and "forward-looking statements" within the meaning of Canadian securities laws and United States securities laws, respectively (collectively, "forward-looking information"). All information, other than statements of historical facts, included in this press release that address activities, events or developments that the Company expects or anticipates will or may occur in the future, including such things as future business strategy, competitive strengths, goals, expansion and growth of the Company's businesses, operations, plans and other such matters is forward-looking information. Forward-looking information is often identified by the words "may", "would", "could", "should", "will", "intend", "plan", "anticipate", "allow", "believe", "estimate", "expect", "predict", "can", "might", "potential", "predict", "project", "is designed to", "likely" or similar expressions and includes, among others, statements regarding management's expectations, expertise, projections, estimates or characterizations of future events or circumstances, the Company's growing and diverse leadership team, the Company's ability to scale, expand, and diversify operations, the Company's ability to grow and create incremental value, high-performance computing technologies, and the Company's ability to build on innovation and momentum within the data center and managed services realms.

Forward-looking information is necessarily based on a number of opinions, assumptions and estimates that, while considered reasonable by Hut 8 as of the date of this press release, are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause the actual results, level of activity, performance or achievements to be materially different from those expressed or implied by such forward-looking information, including that the anticipated timing for completion of the construction and development activities at the Company's third mining site in North Bay, Ontario will be further delayed as a result of global supply chain impacts, the Company's ability to make interest payments on any drawn portions of loan with Trinity Capital, the impact of general economic conditions on the Company, industry conditions, currency fluctuations, taxation, regulation, changes in tax or other legislation, competition from other industry participants, the lack of availability of qualified personnel or management, stock market volatility, political and geopolitical instability and the Company's ability to access sufficient capital from internal and external sources. The foregoing and other risks are described in greater detail in the "Risk Factors" section of the Company's Annual Information Form dated March 25, 2021, which is available on http://www.sedar.com. These factors are not intended to represent a complete list of the factors that could affect Hut 8; however, these factors should be considered carefully, and you should not place undue reliance on any forward-looking information. There can be no assurance that such estimates and assumptions will prove to be correct. The forward-looking information contained in this press release are made as of the date of this press release, and Hut 8 expressly disclaims any obligation to update or alter statements containing any forward-looking information, or the factors or assumptions underlying them, whether as a result of new information, future events or circumstances, or otherwise, except as required by law. New factors emerge from time to time, and it is not possible for Hut 8 to predict all of these factors or to assess in advance the impact of each such factor on Hut 8's business or the extent to which any factor, or combination of factors, may cause actual results to differ materially from those contained in any forward-looking information. The forward-looking information contained in this press release is expressly qualified by this cautionary statement.

Related Links: http://www.hut8mining.com

[Image: Hut 8 Mining Corp logo (CNW Group/Hut 8 Mining Corp)]


View original content to download multimedia:https://www.prnewswire.com/news-releases/james-beer-joins-hut-8-mining-as-svp-of-operations-301482519.html

SOURCE Hut 8 Mining Corp

Read the original post:

James Beer Joins Hut 8 Mining as SVP of Operations - Yahoo Finance
