
Baseten nabs $20M to make it easier to build machine learning-based applications – TechCrunch

As the tech world inches closer to the idea of artificial general intelligence, we're seeing another interesting theme emerging in the ongoing democratization of AI: a wave of startups building tech to make AI technologies more accessible overall to a wider range of users and organizations.

Today, one of these, Baseten, which is building tech to make it easier to incorporate machine learning into a business's operations, production and processes without the need for specialized engineering knowledge, is announcing $20 million in funding and the official launch of its tools.

These include a client API and a library of pre-trained models to deploy models built in TensorFlow, PyTorch or scikit-learn; the ability to build APIs to power your own applications; and the ability to create custom UIs for your applications based on drag-and-drop components.

The company has been operating in a closed, private beta for about a year and has amassed an interesting group of customers so far, including both Stanford and the University of Sydney, as well as Cockroach Labs and Patreon, among others, who use it to, for example, help organizations with automated abuse detection (through content moderation) and fraud prevention.

The $20 million is being discussed publicly for the first time now to coincide with the commercial launch, and it's in two tranches, with equally notable names among those backers.

The seed was co-led by Greylock and South Park Commons Fund, with participation also from the AI Fund, Caffeinated Capital and individuals including Greg Brockman, co-founder and CTO at general intelligence startup OpenAI; Dylan Field, co-founder and CEO of Figma; Mustafa Suleyman, co-founder of DeepMind; and DJ Patil, ex-chief scientist of the United States.

Greylock also led the Series A, with participation from South Park Commons, early Stripe exec Lachy Groom; Dev Ittycheria, CEO of MongoDB; Jay Simon, ex-president of Atlassian, now at Bond; Jean-Denis Greze, CTO of Plaid; and Cristina Cordova, another former Stripe exec.

Tuhin Srivastava, Baseten's co-founder and CEO, said in an interview that the funding will be used in part to bring on more technical and product people, and to ramp up its marketing and business development.

The issue that Baseten has identified and is trying to solve is one that is critical in the evolution of AI: Machine learning tools are becoming ever more ubiquitous and utilized, thanks to cheaper computing power, better access to training models and a growing understanding of how and where they can be used. But one area where developers still need to make a major leap, and businesses still need to make big investments, is when it comes to actually adopting and integrating machine learning: There remains a wide body of technical knowledge that developers and data scientists need to actually integrate machine learning into their work.

"We were born out of the idea that machine learning will have a massive impact on the world, but it's still difficult to extract value from machine learning models," Srivastava said. Difficult, because developers and data scientists need to have specific knowledge of how to handle machine learning ops, as well as technical expertise to manage production at the back end and the front end, he said. This is one reason why machine learning programs in businesses often actually have very little success: It takes too much effort to get them into production.

This is something that Srivastava and his co-founders Amir Haghighat (CTO) and Philip Howes (chief scientist) experienced firsthand when they worked together at Gumroad. Haghighat, who was head of engineering, and Srivastava and Howes, who were data scientists, wanted to use machine learning at the payments company to help with fraud detection and content moderation and realized that they needed to pick up a lot of extra full-stack engineering skills or hire specialists to build and integrate that machine learning along with all of the tooling needed to run it (e.g., notifications and integrating that data into other tools to action).

They built the systems, which are still in use and screening hundreds of millions of dollars of transactions, but also picked up an idea in the process: Others surely were facing the same issues they did, so why not work on a set of tools to help all of them and take away some of that work?

Today, the main customers of Baseten, a reference to base ten blocks, often used to help younger students learn the basics of mathematics ("It humanizes the numbers system, and we wanted to make machine learning less abstract, too," said the CEO), are developers and data scientists who are potentially adopting other machine learning models, or even building their own, but lack the skills to practically incorporate them into their own production flows. There, Baseten is part of a bigger group of emerging companies building MLOps solutions: full sets of tools to make machine learning more accessible and usable by those working in DevOps and product. These include Databricks, Clear, Gathr and more. The idea here is to give tools to technical people to give them more power and more time to work on other tasks.

"Baseten gets the process of tool-building out of the way so we can focus on our key skills: modeling, measurement and problem-solving," said Nikhil Harithras, senior machine learning engineer at Patreon, in a statement. Patreon is using Baseten to help run an image classification system, used to find content that violates its community guidelines.

Over time, there is a logical step that Baseten could make, continuing on its democratization trajectory: considering how to build tools for non-technical audiences, too, an interesting idea in light of the many no-code and low-code products being rolled out to give those users more power to build their own data science applications.

"Non-technical audiences are not something we focus on today, but that is the evolution," Srivastava said. "The highest level goal is to accelerate the impact of machine learning."

More:
Baseten nabs $20M to make it easier to build machine learning-based applications - TechCrunch


Which Animal Viruses Could Infect People? Computers Are Racing to Find Out. – The New York Times

Colin Carlson, a biologist at Georgetown University, has started to worry about mousepox.

The virus, discovered in 1930, spreads among mice, killing them with ruthless efficiency. But scientists have never considered it a potential threat to humans. Now Dr. Carlson, his colleagues and their computers aren't so sure.

Using a technique known as machine learning, the researchers have spent the past few years programming computers to teach themselves about viruses that can infect human cells. The computers have combed through vast amounts of information about the biology and ecology of the animal hosts of those viruses, as well as the genomes and other features of the viruses themselves. Over time, the computers came to recognize certain factors that would predict whether a virus has the potential to spill over into humans.

Once the computers proved their mettle on viruses that scientists had already studied intensely, Dr. Carlson and his colleagues deployed them on the unknown, ultimately producing a short list of animal viruses with the potential to jump the species barrier and cause human outbreaks.

In the latest runs, the algorithms unexpectedly put the mousepox virus in the top ranks of risky pathogens.

"Every time we run this model, it comes up super high," Dr. Carlson said.

Puzzled, Dr. Carlson and his colleagues rooted around in the scientific literature. They came across documentation of a long-forgotten outbreak in 1987 in rural China. Schoolchildren came down with an infection that caused sore throats and inflammation in their hands and feet.

Years later, a team of scientists ran tests on throat swabs that had been collected during the outbreak and put into storage. These samples, as the group reported in 2012, contained mousepox DNA. But their study garnered little notice, and a decade later mousepox is still not considered a threat to humans.

If the computer programmed by Dr. Carlson and his colleagues is right, the virus deserves a new look.

"It's just crazy that this was lost in the vast pile of stuff that public health has to sift through," he said. "This actually changes the way that we think about this virus."

Scientists have identified about 250 human diseases that arose when an animal virus jumped the species barrier. H.I.V. jumped from chimpanzees, for example, and the new coronavirus originated in bats.

Ideally, scientists would like to recognize the next spillover virus before it has started infecting people. But there are far too many animal viruses for virologists to study. Scientists have identified more than 1,000 viruses in mammals, but that is most likely a tiny fraction of the true number. Some researchers suspect mammals carry tens of thousands of viruses, while others put the number in the hundreds of thousands.

To identify potential new spillovers, researchers like Dr. Carlson are using computers to spot hidden patterns in scientific data. The machines can zero in on viruses that may be particularly likely to give rise to a human disease, for example, and can also predict which animals are most likely to harbor dangerous viruses we dont yet know about.

"It feels like you have a new set of eyes," said Barbara Han, a disease ecologist at the Cary Institute of Ecosystem Studies in Millbrook, N.Y., who collaborates with Dr. Carlson. "You just can't see in as many dimensions as the model can."

Dr. Han first came across machine learning in 2010. Computer scientists had been developing the technique for decades, and were starting to build powerful tools with it. These days, machine learning enables computers to spot fraudulent credit charges and recognize people's faces.

But few researchers had applied machine learning to diseases. Dr. Han wondered if she could use it to answer open questions, such as why less than 10 percent of rodent species harbor pathogens known to infect humans.

She fed a computer information about various rodent species from an online database, everything from their age at weaning to their population density. The computer then looked for features of the rodents known to harbor high numbers of species-jumping pathogens.

Once the computer created a model, she tested it against another group of rodent species, seeing how well it could guess which ones were laden with disease-causing agents. Eventually, the computer's model reached an accuracy of 90 percent.
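The Times piece doesn't name the algorithm behind that 90 percent figure. As a rough sketch of the workflow it describes, assuming scikit-learn-style tooling, with synthetic data and made-up trait names standing in for the real ecological database:

```python
# Hypothetical sketch: train a classifier on rodent life-history traits to flag
# likely pathogen reservoirs, then check accuracy on held-out species.
# Trait names and the synthetic data are illustrative, not from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_species = 500

# Illustrative traits: age at weaning (days), population density, lifespan (years)
traits = np.column_stack([
    rng.normal(25, 5, n_species),
    rng.lognormal(3, 1, n_species),
    rng.gamma(2.0, 1.5, n_species),
])
# Synthetic label: shorter-lived, denser species are more likely reservoirs
risk = -0.8 * traits[:, 2] + 0.3 * np.log(traits[:, 1]) + rng.normal(0, 1, n_species)
is_reservoir = (risk > np.quantile(risk, 0.9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    traits, is_reservoir, test_size=0.3, stratify=is_reservoir, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("trait importances:", model.feature_importances_)
```

On real data, the trained model's feature importances are what point to findings like the lifespan result described below.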

Then Dr. Han turned to rodents that have yet to be examined for spillover pathogens and put together a list of high-priority species. Dr. Han and her colleagues predicted that species such as the montane vole and Northern grasshopper mouse of western North America would be particularly likely to carry worrisome pathogens.

Of all the traits Dr. Han and her colleagues provided to their computer, the one that mattered most was the life span of the rodents. Species that die young turn out to carry more pathogens, perhaps because evolution put more of their resources into reproducing than into building a strong immune system.

These results involved years of painstaking research in which Dr. Han and her colleagues combed through ecological databases and scientific studies looking for useful data. More recently, researchers have sped this work up by building databases expressly designed to teach computers about viruses and their hosts.

In March, for example, Dr. Carlson and his colleagues unveiled an open-access database called VIRION, which has amassed half a million pieces of information about 9,521 viruses and their 3,692 animal hosts and is still growing.
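As a toy illustration of the kind of query such a database enables, the snippet below counts distinct host species per virus from a flat association table. The column names and rows are assumptions for illustration, not the actual VIRION schema:

```python
# Hypothetical sketch: given a flat host-virus association table (columns assumed,
# not the real VIRION schema), count how many host species each virus is found in.
import pandas as pd

associations = pd.DataFrame({
    "Virus": ["sars_cov_2", "sars_cov_2", "mousepox",
              "rabies_lyssavirus", "rabies_lyssavirus"],
    "Host":  ["rhinolophus_affinis", "homo_sapiens", "mus_musculus",
              "canis_lupus", "desmodus_rotundus"],
})

host_breadth = (associations.drop_duplicates()
                .groupby("Virus")["Host"]
                .nunique()
                .sort_values(ascending=False))
print(host_breadth)
```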

Databases like VIRION are now making it possible to ask more focused questions about new pandemics. When the Covid pandemic struck, it soon became clear that it was caused by a new virus called SARS-CoV-2. Dr. Carlson, Dr. Han and their colleagues created programs to identify the animals most likely to harbor relatives of the new coronavirus.

SARS-CoV-2 belongs to a group of species called betacoronaviruses, which also includes the viruses that caused the SARS and MERS epidemics among humans. For the most part, betacoronaviruses infect bats. When SARS-CoV-2 was discovered in January 2020, 79 species of bats were known to carry them.

But scientists have not systematically searched all 1,447 species of bats for betacoronaviruses, and such a project would take many years to complete.

By feeding biological data about the various types of bats, their diet, the length of their wings, and so on, into their computer, Dr. Carlson, Dr. Han and their colleagues created a model that could offer predictions about the bats most likely to harbor betacoronaviruses. They found over 300 species that fit the bill.

Since that prediction in 2020, researchers have indeed found betacoronaviruses in 47 species of bats, all of which were on the prediction lists produced by some of the computer models they had created for their study.

Daniel Becker, a disease ecologist at the University of Oklahoma who also worked on the betacoronavirus study, said it was striking the way simple features such as body size could lead to powerful predictions about viruses. "A lot of it is the low-hanging fruit of comparative biology," he said.

Dr. Becker is now following up from his own backyard on the list of potential betacoronavirus hosts. It turns out that some bats in Oklahoma are predicted to harbor them.

If Dr. Becker does find a backyard betacoronavirus, he wont be in a position to say immediately that it is an imminent threat to humans. Scientists would first have to carry out painstaking experiments to judge the risk.

Dr. Pranav Pandit, an epidemiologist at the University of California at Davis, cautions that these models are very much a work in progress. When tested on well-studied viruses, they do substantially better than random chance, but could do better.

"It's not at a stage where we can just take those results and create an alert to start telling the world, 'This is a zoonotic virus,'" he said.

Nardus Mollentze, a computational virologist at the University of Glasgow, and his colleagues have pioneered a method that could markedly increase the accuracy of the models. Rather than looking at a virus's hosts, their models look at its genes. A computer can be taught to recognize subtle features in the genes of viruses that can infect humans.

In their first report on this technique, Dr. Mollentze and his colleagues developed a model that could correctly recognize human-infecting viruses more than 70 percent of the time. Dr. Mollentze can't yet say why his gene-based model worked, but he has some ideas. Our cells can recognize foreign genes and send out an alarm to the immune system. Viruses that can infect our cells may have the ability to mimic our own DNA as a kind of viral camouflage.
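The article doesn't spell out the genomic features the Glasgow model uses. A minimal sketch of the general idea, assuming k-mer composition of synthetic genome sequences as the classifier input (an illustration, not the group's actual feature set):

```python
# Toy sketch: k-mer composition features plus a linear classifier to separate
# human-infecting from non-human-infecting viruses. Sequences, labels and the
# feature choice are illustrative, not the Glasgow model's actual inputs.
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_profile(genome: str) -> np.ndarray:
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(genome) - K + 1):
        counts[genome[i:i + K]] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[k] / total for k in KMERS])

rng = np.random.default_rng(1)
def random_genome(gc_bias: float, length: int = 500) -> str:
    p = [0.25 - gc_bias, 0.25 + gc_bias, 0.25 + gc_bias, 0.25 - gc_bias]  # A, C, G, T
    return "".join(rng.choice(list("ACGT"), size=length, p=p))

genomes = [random_genome(0.05) for _ in range(150)] + [random_genome(-0.05) for _ in range(150)]
labels = np.array([1] * 150 + [0] * 150)   # 1 = known to infect human cells (toy labels)

X = np.stack([kmer_profile(g) for g in genomes])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("held-out accuracy on toy data:", clf.score(X_te, y_te))
```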

When they applied the model to animal viruses, they came up with a list of 272 species at high risk of spilling over. That's too many for virologists to study in any depth.

"You can only work on so many viruses," said Emmie de Wit, a virologist at Rocky Mountain Laboratories in Hamilton, Mont., who oversees research on the new coronavirus, influenza and other viruses. "On our end, we would really need to narrow it down."

Dr. Mollentze acknowledged that he and his colleagues need to find a way to pinpoint the worst of the worst among animal viruses. "This is only a start," he said.

To follow up on his initial study, Dr. Mollentze is working with Dr. Carlson and his colleagues to merge data about the genes of viruses with data related to the biology and ecology of their hosts. The researchers are getting some promising results from this approach, including the tantalizing mousepox lead.

Other kinds of data may make the predictions even better. One of the most important features of a virus, for example, is the coating of sugar molecules on its surface. Different viruses end up with different patterns of sugar molecules, and that arrangement can have a huge impact on their success. Some viruses can use this molecular frosting to hide from their hosts' immune systems. In other cases, the virus can use its sugar molecules to latch on to new cells, triggering a new infection.

This month, Dr. Carlson and his colleagues posted a commentary online asserting that machine learning may gain a lot of insights from the sugar coating of viruses and their hosts. Scientists have already gathered a lot of that knowledge, but it has yet to be put into a form that computers can learn from.

"My gut sense is that we know a lot more than we think," Dr. Carlson said.

Dr. de Wit said that machine learning models could someday guide virologists like herself to study certain animal viruses. "There's definitely a great benefit that's going to come from this," she said.

But she noted that the models so far have focused mainly on a pathogen's potential for infecting human cells. Before causing a new human disease, a virus also has to spread from one person to another and cause serious symptoms along the way. She's waiting for a new generation of machine learning models that can make those predictions, too.

"What we really want to know is not necessarily which viruses can infect humans, but which viruses can cause an outbreak," she said. "So that's really the next step that we need to figure out."

Read more from the original source:
Which Animal Viruses Could Infect People? Computers Are Racing to Find Out. - The New York Times


Applied BioMath, LLC to Present on Machine Learning in Drug Discovery at Bio-IT World Conference and Expo – PR Newswire

CONCORD, Mass., April 27, 2022 /PRNewswire/ -- Applied BioMath (www.appliedbiomath.com), the industry-leader in providing model-informed drug discovery and development (MID3) support to help accelerate and de-risk therapeutic research and development (R&D), today announced their participation at the Bio-IT World Conference and Expo occurring May 3-5, 2022 in Boston, MA.

Kas Subramanian, PhD, Executive Director of Modeling at Applied BioMath will present "Applications of Machine Learning in Preclinical Drug Discovery" within the conference track, AI for Drug Discovery and Development on Thursday, May 5, 2022 at 1:05 p.m. E.T. In this presentation, Dr. Subramanian will discuss how machine learning methods can improve efficiency in therapeutic R&D decision making. He will review case studies that demonstrate machine learning applications to target validation and lead optimization.

"Traditionally, therapeutic R&D requires experiments on many different targets, hits, leads, and candidates that are based on best guesses," said John Burke, PhD, Co-founder, President and CEO of Applied BioMath. "By utilizing artificial intelligence and machine learning, project teams can computationally work with more data to better inform experiments and develop better therapeutics."

To learn more about Applied BioMath's presence at the Bio-IT World Conference and Expo, please visit http://www.appliedbiomath.com/BioIT22.

About Applied BioMath

Founded in 2013, Applied BioMath's mission is to revolutionize drug invention. Applied BioMath applies biosimulation, including quantitative systems pharmacology, PKPD, bioinformatics, machine learning, clinical pharmacology, and software solutions to provide quantitative and predictive guidance to biotechnology and pharmaceutical companies to help accelerate and de-risk therapeutic research and development. Their approach employs proprietary algorithms and software to support groups worldwide in decision-making from early research through all phases of clinical trials. The Applied BioMath team leverages their decades of expertise in biology, mathematical modeling and analysis, high-performance computing, and industry experience to help groups better understand their therapeutic, its best-in-class parameters, competitive advantages, patients, and the best path forward into and in the clinic to increase likelihood of clinical concept and proof of mechanism, and decrease late stage attrition rates. For more information about Applied BioMath and its services and software, visit www.appliedbiomath.com.

Applied BioMath and the Applied BioMath logo are registered trademarks of Applied BioMath, LLC.

Press Contact: Kristen Zannella ([emailprotected])

SOURCE Applied BioMath, LLC

More:
Applied BioMath, LLC to Present on Machine Learning in Drug Discovery at Bio-IT World Conference and Expo - PR Newswire


America’s AI in Retail Industry Report to 2026 – Machine Learning Technology is Expected to Grow Significantly – ResearchAndMarkets.com – Business…

DUBLIN--(BUSINESS WIRE)--The "America's AI in the Retail Market - Growth, Trends, COVID-19 Impact, and Forecasts (2022 - 2027)" report has been added to ResearchAndMarkets.com's offering.

America's AI in the retail market is expected to register a CAGR of 30% during the forecast period, 2021 - 2026.

Companies Mentioned

Key Market Trends

Machine Learning Technology is Expected to Grow Significantly

Food and Grocery to Augment Significant Growth

Key Topics Covered:

1 INTRODUCTION

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Market Drivers

4.2.1 Hardware Advancement Acting as a Key Enabler for AI in Retail

4.2.2 Disruptive Developments in Retail, including AR, VR, IOT, and New Metrics

4.2.3 Rise of AI First Organizations

4.2.4 Need for Efficiency in Supply Chain Optimization

4.3 Market Restraints

4.3.1 Lack of Professionals, as well as In-house Knowledge for Cultural Readiness

4.4 Industry Value Chain Analysis

4.5 Porter's Five Forces Analysis

4.6 Industry Policies

4.7 Assessment of Impact of COVID-19 on the Industry

5 AI Adoption in the Retail Industry

5.1 AI Penetration with Retailers (Historical, Current, and Forecast)

5.2 AI penetration by Retailer Size (Large and Medium)

5.3 AI Use Cases in Operations

5.3.1 Logistics and Distribution

5.3.2 Planning and Procurement

5.3.3 Production

5.3.4 In-store Operations

5.3.5 Sales and Marketing

5.4 AI Retail Startups (Equity Funding vs Equity Deals)

5.5 Road Ahead for AI in Retail

6 MARKET SEGMENTATION

6.1 Channel

6.2 Solution

6.3 Application

6.4 Technology

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

8 INVESTMENT ANALYSIS

9 MARKET TRENDS AND FUTURE OPPORTUNITIES

For more information about this report visit https://www.researchandmarkets.com/r/kddpm3

More:
America's AI in Retail Industry Report to 2026 - Machine Learning Technology is Expected to Grow Significantly - ResearchAndMarkets.com - Business...


Dynamic compensation of stray electric fields in an ion trap using machine learning and adaptive algorithm | Scientific Reports – Nature.com

A gradient descent algorithm (ADAM) and a deep learning network (MLOOP) were tested for compensating stray fields in different working regimes. The source code used for the experiments is available in [28]. The software controlled the voltages using the PXI-6713 DAQ and read the fluorescence counts from a photomultiplier tube (PMT) through a time tagging counter (IDQ id800). All software was written in Python and interfaced with the DAQ hardware using the library NI-DAQmx Python. A total of 44 DC electrodes and the horizontal position of the cooling laser were tuned by the program, resulting in a total of 45 input parameters.

Figure 3: Voltage deviation from the original starting point during optimization with ADAM. (a) Uncharged trap (see the "Gradient descent optimizer" section). (b) During UV charging (see "Testing under poor trap conditions"). Top graphs show odd electrode numbers, corresponding to the top DC electrodes in Fig. 1b, and the bottom graphs show the even electrode numbers. The values were determined by subtracting the starting voltage from the voltage at each iteration, $\Delta V = V_n - V_0$. Changes can be seen in almost all the electrodes of the trap.

Figure 4: (a) MLOOP deep learning network. Differential Evolution explores the input space (blue points) and the neural network creates a model of the data and predicts an optimum (red points). The maximum photon count of the neural network points is 96 ± 1% higher than manual optimization. Differential Evolution continues to explore the input space and has varied photon counts. The starting point for the process (found by manually adjusting the 4 voltage set weights) was at 33,700 counts/s and the highest photon count found by the neural network was at 66,200 counts/s. (b) and (c) Fluorescence versus laser frequency detuning from resonance for the initial setting and after different optimizations. It can be seen that the experimental values are very close to the theoretical Lorentzian fit [29,30,31]. This shows the heating is low before and after optimization, and therefore the change in fluorescence can be used to infer the change in heating. The deviation from theory near resonance shown in (b) is a sign of small heating instability.

The first compensation test was performed by the ADAM gradient descent algorithm. This is a first-order optimizer that uses biased estimates of the first and second moments of the gradient to update the inputs of an objective function, and was chosen for its fast convergence, versatility in multiple dimensions and tolerance to noise [23]. Our goal was to maximize the fluorescence of the ion, described by a function $f(\vec{\alpha})$, where $\vec{\alpha} = (\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_{45})$ represents the array of parameters to be optimized. To find the optimal $\vec{\alpha}$, the algorithm needs to know the values of the partial derivatives for all input parameters. Because we do not have an analytic expression for $f(\vec{\alpha})$, the values of its derivatives were estimated from experimental measurements by sequentially changing each input $\alpha_i$ and reading the associated change in fluorescence $f$. These data were used as inputs to ADAM for finding the optimal $\vec{\alpha}$ which maximized $f$.

Before running the automated compensation, we manually adjusted the 4 weights of the voltage sets used for compensation described in the previous section. We also tried to run ADAM to optimize these 4 parameters, but the increase in fluorescence was limited to 6%. After manual compensation, we ran ADAM on all 45 inputs with the algorithm parameters given in the source code [28]. Each iteration took 12 s, of which 9.8 s were the photon readout (0.1 s × 2 readouts per parameter plus 2 × 0.1 s readouts at the beginning and end of the iteration), and the rest of the time was the gradient computation. If the photon count dropped by more than 40% of its initial value, the algorithm terminated and applied the previously found optimum. This acted as the safety net for the program, ensuring the ion was not lost while optimizing the 45 inputs. We need this safety net because if the ion is heated past the capture range for the cooling detuning used, it will be ejected from the trap. In our implementation of the algorithm we removed the reduction in the step size of the optimization algorithm as iterations progressed. This step reduction, which is present in the standard version of ADAM, is not ideal when stray fields change with time, since the optimal values of the voltages also drift in time. The removal caused some fluctuations in the photon readout near the optimal settings. Adding to these fluctuations, other sources of noise, such as wavemeter laser locking [32] and mechanical drift in the trap environment, resulted in daily photon count variations of around 5%. Fluctuation in laser power was not a concern here since the power of the cooling laser was stabilized. Despite these fluctuations, and the fact that stray fields change every day, the algorithm demonstrated an increase in fluorescence collection of up to 78 ± 1% (Fig. 2b) when starting from a manually optimized configuration, in less than 10 iterations, or 120 s.
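As a rough illustration of the loop just described, the sketch below estimates the gradient by finite differences on the photon count (two readouts per parameter), applies an ADAM ascent step without step-size decay, remembers the best settings seen, and aborts if the count drops by more than 40%. The set_inputs/read_photon_count hooks and all numerical values are placeholders; the authors' actual implementation is the source code published in [28].

```python
# Minimal sketch (not the authors' code, see ref. [28]) of experiment-in-the-loop ADAM:
# finite-difference gradients of the photon count, an ADAM ascent step without
# step-size decay, a best-settings memory, and a 40% safety net.
import numpy as np

def maximize_fluorescence(set_inputs, read_photon_count, x0,
                          n_iters=10, delta=0.01, lr=0.02,
                          beta1=0.9, beta2=0.999, eps=1e-8):
    x = np.asarray(x0, dtype=float)     # 45 inputs: 44 DC electrodes + laser position
    m = np.zeros_like(x)                # first-moment estimate
    v = np.zeros_like(x)                # second-moment estimate
    set_inputs(x)
    f0 = read_photon_count()            # readout at the start of the run
    best_x, best_f = x.copy(), f0

    for t in range(1, n_iters + 1):
        grad = np.zeros_like(x)
        for i in range(len(x)):         # two readouts per parameter
            probe = x.copy()
            probe[i] += delta
            set_inputs(probe)
            f_plus = read_photon_count()
            set_inputs(x)
            f_here = read_photon_count()
            grad[i] = (f_plus - f_here) / delta

        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x = x + lr * m_hat / (np.sqrt(v_hat) + eps)  # ascend; no step-size decay

        set_inputs(x)
        f = read_photon_count()         # readout at the end of the iteration
        if f > best_f:
            best_x, best_f = x.copy(), f
        if f < 0.6 * f0:                # safety net: abort on a >40% drop in counts
            break

    set_inputs(best_x)                  # apply the best settings seen, not the last
    return best_x, best_f

# Toy stand-ins for the DAQ/PMT hooks: a synthetic fluorescence peaked at zero offset.
_state = np.zeros(45)
def _demo_set(v): _state[:] = v
def _demo_read(): return 60000.0 * float(np.exp(-np.sum(_state ** 2)))
print(maximize_fluorescence(_demo_set, _demo_read, np.full(45, 0.05))[1])
```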

The ADAM algorithm was fast and reliable (the ion was never lost during optimization), even in extremely volatile conditions like having time-dependent charging and stray electric field buildup. Figure 3a shows a colourmap of the voltages and laser position adjustments, where most of the improvement came from adding the same voltage to all DC electrodes, indicating that the ion was not at the optimal height. The volatility of the ion-trap environment causes the fluorescence rate to oscillate around the optimal point. To get the best value, instead of using the values of the final iteration, the software saved all voltage combinations and applied the setting with the highest photon count after all iterations were finished. Despite picking the best value, it can be seen in Fig. 2b that the fluorescence for some iterations during the optimization is higher than the final point selected by the software. This is because when the settings are changed, the ion fluorescence rate may transiently increase and subsequently stabilize to a slightly lower value for the same voltage settings.

The second algorithm tested was a deep learning network using the Python-based optimization and experimental control package MLOOP [20]. MLOOP uses Differential Evolution [33] for exploring and sampling data. The blue points in Fig. 4a correspond to these samples, and it can be seen that even at the end of optimization they can have non-optimum fluorescence rates. MLOOP also trains a neural network using the data collected by Differential Evolution and creates an approximate model of the experimental system. It then uses this model to predict an optimum point. The red points in Fig. 4a show the optimum points predicted by the neural network model. It can be seen that this section starts later than Differential Evolution, as it requires some data for initial neural network training, and gradually finds the optimum and stays near it. For training of the neural network, the inbuilt ADAM optimizer is used to minimize the cost function. The sampling in MLOOP does not require a gradient calculation, which greatly improves the sampling time. Even though the sampling is fast, training the network to find an optimal point requires a minimum of 100 samples, and that makes MLOOP slower than ADAM. With our settings for MLOOP, each iteration took 0.7 s on average, and therefore 700 s was needed to take the 1000 samples shown in Fig. 4a.

In our test the neural network in MLOOP had 5 layers with 45 nodes each, all with Gaussian error correction. The neural network structure (number of layers and cells) was manually optimized and tested on a 45-dimensional positive definite quadratic function before being used for the experiment. Once the ion was trapped, positioned above the integrated mirror [22], and photon counts were read, the program started sampling 100 different voltage combinations around its initial point. Then the network started training on the initial data and making predictions for the voltages that maximise fluorescence. Since the ion trap setup is very sensitive to changes in the electric field, the voltages were allowed to move a maximum of 1% of their previous value in each iteration to reduce the chance of losing the ion. As a step size value could not be explicitly defined, this percentage was chosen to make the changes similar to the step size used for ADAM.

A small percentage of our initial trials with the maximum change of a few percent (instead of 1%) led to an unstable ion during the parameter search sequence. This is because MLOOP is a global optimizer and can set the voltages to values far from the stable starting point. Since the ion trap is a complicated system that can only be modelled for a specific range of configurations, moving away from these settings can lead to unpredictable and usually unstable behavior. MLOOP also has an in-built mechanism that handles function noise using a predefined expected uncertainty. We set this uncertainty to the peak-to-peak noise of the photon readout when no optimization was running.
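For concreteness, the snippet below shows how an M-LOOP neural-network controller is typically wired to an experiment through a custom interface. The set_inputs/read_photon_count stubs, the voltage boundaries, the trust-region fraction and the uncertainty value are illustrative assumptions, not the configuration used in this work.

```python
# Illustrative M-LOOP wiring (assumed settings, not the authors' configuration).
import mloop.interfaces as mli
import mloop.controllers as mlc

def set_inputs(voltages):          # placeholder for the DAQ write to 45 inputs
    pass

def read_photon_count():           # placeholder for the PMT/counter readout
    return 50000.0

class TrapInterface(mli.Interface):
    def get_next_cost_dict(self, params_dict):
        set_inputs(params_dict['params'])
        counts = read_photon_count()
        return {'cost': -counts,            # M-LOOP minimizes, so negate the fluorescence
                'uncer': 0.05 * counts,     # expected readout noise (assumed value)
                'bad': False}

controller = mlc.create_controller(
    TrapInterface(),
    controller_type='neural_net',           # Differential Evolution sampling + neural net model
    num_params=45,
    min_boundary=[-10.0] * 45,              # assumed voltage limits
    max_boundary=[10.0] * 45,
    trust_region=0.01,                      # cap each move at ~1% of the allowed range
    max_num_runs=1000)
controller.optimize()
print('best parameters found:', controller.best_params)
```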

Since MLOOP is a global optimizer, it was able to find optimum points different from the points found by ADAM. For trials where low numbers of initial training data points were used, these configurations proved to be unstable and in most cases resulted in the loss of the ion. Unstable states were also observed occasionally if the optimizer was run for too long. With moderate-size training sets, MLOOP was able to find voltage settings with fluorescence rates similar to or higher than the optimum points found by ADAM, as shown in Fig. 4a. Considering the long duration of the MLOOP iteration sequence and the possibility of finding unstable settings in volatile conditions, the test of optimization with induced changing stray fields ("Testing under poor trap conditions") was only performed with the ADAM optimizer, as the gradient-based search method proved to be more robust against fluctuations in the ion environment.

To test the effectiveness of the protocols, the saturation power $P_{\mathrm{sat}}$ was measured before and after the optimization process. $P_{\mathrm{sat}}$ is the laser power at which the fluorescence rate of a two-level system is half the fluorescence at infinite laser power. We also measured the overall detection efficiency $\eta$, the fraction of emitted photons which resulted in detection events. Table 1 shows $P_{\mathrm{sat}}$ decreased (ion photon absorption was improved) using both ADAM and MLOOP. The detection efficiency was approximately the same for all runs, as expected.

Another test was done by measuring fluorescence versus laser detuning before and after optimization. Figure 4b shows that the measured values follow the expected Lorentzian profile [29,30,31] and associated linewidth before and after optimization. This indicates that the initial micromotion magnitude $\beta$ was sufficiently small for fluorescence to be a good optimization proxy. A clear increase in fluorescence can be seen after optimizing the 44 electrodes individually, both with ADAM and MLOOP. The fit residual curve (the difference between the experimental values and the theoretical fit) shows that optimizing individual electrodes resulted in a slight increase in heating instability near the resonance.
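For reference, the Lorentzian lineshape behind that fit is the textbook two-level scattering rate, which also makes the definition of $P_{\mathrm{sat}}$ used above explicit (a standard expression, not reproduced from the paper):

```latex
% Standard two-level-atom fluorescence rate versus detuning \delta and
% saturation parameter s = P / P_sat (textbook form, not from the paper):
% at resonance R -> Gamma/2 as P -> infinity, and R is half that value at P = P_sat.
\[
  R(\delta, s) = \frac{\Gamma}{2}\,\frac{s}{1 + s + \left(2\delta/\Gamma\right)^{2}},
  \qquad s = \frac{P}{P_{\mathrm{sat}}}.
\]
```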

To test the live performance of the optimization protocol in a non-ideal situation, we deliberately charged the trap by shining 369.5 nm UV laser light onto the chip for 70 min. The power of the laser was $200 \pm 15\,\mu\mathrm{W}$ and the Gaussian diameter of the focus was $120 \pm 10\,\mu\mathrm{m}$. This process ejects electrons due to the photoelectric effect [34] and produces irregular and potentially unpredictable, slowly time-varying electric fields within the trap. The process charged the trap significantly and made a noticeable reduction to the photon count. The ADAM algorithm was then tested both during charging and after charging was stopped. In both cases an improvement of the fluorescence rate was observed.

The first experiment was performed to test the optimizing process after charging. In this test, starting with the optimal manual setting, the ADAM individual-electrode optimizer was able to obtain a 27% improvement in the fluorescence rate (blue points on the left side of Fig. 5a). Then charging was induced onto the trap for 70 min and a clear decrease in photon count was seen, going even lower than the initial value (red points in Fig. 5a). At this point charging was stopped and ADAM was run again, and the fluorescence rate returned back to the previous optimum, within the error, in approximately 12 min. During the second optimization, the fluorescence goes higher than the stable final value for some iterations before the final one. This is because of the same effect explained in the "Gradient descent optimizer" section: the fluorescence might spike right after a change but settle at a slightly lower value after stabilizing. Looking at the changes of individual electrodes, shown in Fig. 3b, we see that the main electrodes adjusted were those around the ion and some throughout the trap. The change in the laser horizontal position was negligible.

Another experiment was done by running ADAM during continuous charging for real-time compensation. Since we induce charging via laser scattering from the trap, the collected photons come both from the ion and from the scattered laser, and fluctuations in the intensity of the scattered light confuse the optimizer. Despite that, the optimizer neither lost the ion nor needed to abort the process. Figure 5b shows that the fluorescence rate, even after a 70-min charging session, remained near the optimum value. After stopping the charging, the ion remained trapped for more than 8 h and was intentionally removed from the trap after this time.

Figure 5: (a) Real-time compensation with ADAM of laser-charging-induced stray electric field. The ion was optimized using ADAM (left blue points), then the photon count was noted whilst charging for 70 min (red points), then re-optimized (right blue points). The initial improvement from manually optimized settings was 27%. The second optimization improved the fluorescence by 58% from the charged conditions and returned it back to the optimum value of the first optimization, within the error. (b) The trap was charged with a UV laser to destabilize the ion while the individual electrodes were simultaneously optimized using ADAM for 70 min. The photon count fluctuates as a result of a combination of cooling laser power fluctuations, the algorithm's search and charging irregularities. The optimizer keeps the fluorescence at a photon count similar to the case of optimizing after the charging is stopped (third section of (a)).

See the article here:
Dynamic compensation of stray electric fields in an ion trap using machine learning and adaptive algorithm | Scientific Reports - Nature.com


How Artificial Intelligence Is Transforming Israeli Intelligence Collection – The National Interest Online

Intelligence is a profession as old as time, but new advances in artificial intelligence and machine learning are changing it like never before. As these new technologies mature, they are likely to have both predicted and unexpected implications for humanity, and intelligence collection will never be the same. In Israel, Unit 8200, which is the cyber unit of military intelligence, is leading the transformation in the Israel Defense Forces.

According to the commander of Unit 8200, a machine can use big data to create information better than humans, but it does not understand the context, has no emotions or ethics, and is unable to think outside the box. Therefore, instead of prioritizing between humans and machines, we should recognize that, for at least the foreseeable future, machines will not replace humans' role in intelligence decision-making. However, it is clear that intelligence professionals need to adapt how they conceptualize technology and mathematical thinking in the twenty-first century.

The first interesting development worth highlighting in Israeli intelligence is automatic translation. In recent years, we have seen unprecedented advancements in translation technology; algorithms based on neural networks have been successful in offering a highly accurate level of translation. The translation of languages, such as Arabic and Persian, into Hebrew allows intelligence analysts to have direct contact with raw material and eliminates the dependence that analysts had on the collection units themselves. In addition, it enables intelligence analysts to deal with big data repositories. This means that the Israeli Military Intelligence has begun to integrate automatic translation engines into its work processes and is starting to give up some of its human translators. Instead, the military is having some of its intelligence personnel train the machines to raise the level of translation they provide.

A second development is in the area of identifying targets for attack in the war on terror. This process also relies on advanced algorithms in the field of machine learning, utilizing the ability to process vast amounts of information and cross-link many layers of geographic information to reveal anomalies in the data. The change appeared for the first time in the recent operation in Gaza (2021), in which Israeli military intelligence first used artificial intelligence in a campaign to identify many real-time terror targets.

In order to adopt these new technologies, intelligence units must change how they are organized and the work processes that they employ. Further, new intelligence roles must be defined. Here are two such roles. First, an information researcher: a person responsible for the analysis of the information, the acquisition of advanced tools for analyzing large data reservoirs, semantic research (NLP), data preparation, visualization of the information, network analysis (SNA) or geospatial analysis. Second, an information specialist: a person responsible for defining the problem in terms of optimizing machine learning and defining business metrics, directing the collection operation, analyzing errors, and designing the product requirements.

The integration of artificial intelligence will change the way intelligence is handled in Israel long into the future, but unresolved challenges remain. For the time being, machines still do not know how to ask questions or summarize research insights at a sufficient level. In addition, imposing responsibility for attacking targets on a machine can lead to devastating consequences and create ethical dilemmas, as is evident in recent conflicts such as the Russo-Ukrainian War.

The response to these challenges will be gradual. It is likely that the changes today are just the tip of the iceberg in how artificial intelligence will alter the practice of intelligence collection and analysis. Most of the changes we are seeing today are changes that automate the intelligence process, but the next step is making processes more automatic, raising questions and fears about who is in control. Therefore, it is more appropriate to first incorporate artificial intelligence components into non-life-threatening intelligence processes and create trust between cautious intelligence professionals and the machine, humans' new partner in the intelligence cycle.

Lt. Col. (res.) David Siman-Tov is a Senior Research Fellow at the Institute for National Security Studies (INSS) and deputy head of the Institute for the Research of the Methodology of Intelligence (IRMI) at the Israeli Intelligence Community Commemoration and Heritage Center.

Image: Reuters.

Read the original:
How Artificial Intelligence Is Transforming Israeli Intelligence Collection - The National Interest Online


Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Friday: What’s Next? – InvestorsObserver

Link Machine Learning (LML) gets a neutral rating from InvestorsObserver Friday. The crypto is up 37.96% to $0.006191928941 while the broader crypto market is up 236905.2%.

The Sentiment Score provides a quick, short-term look at the crypto's recent performance. This can be useful for both short-term investors looking to ride a rally and longer-term investors trying to buy the dip.

Link Machine Learning price is currently above resistance. With support set around $0.0037498183340939 and resistance at $0.00596165217246732, Link Machine Learning is potentially in a volatile position if the rally burns out.

Link Machine Learning has traded on low volume recently. This means that today's volume is below its average volume over the past seven days.

Due to a lack of data, this crypto may be less suitable for some investors.


Follow this link:
Link Machine Learning (LML) has a Neutral Sentiment Score, is Rising, and Outperforming the Crypto Market Friday: What's Next? - InvestorsObserver


Dropbox: We unplugged a data center to test our disaster readiness. Here’s how it went – ZDNet

Cloud file storage service Dropbox has explained how it took its key data center completely offline to test its disaster readiness capabilities.

As Dropbox explains, after migrating its computing infrastructure from Amazon Web Services in 2015 and then launching its Magic Pocket file content storage system, the company became "highly centralized" at its San Jose data center (SJC) located not so far from the San Andreas Fault.

Given the criticality of the San Jose data center, Dropbox wanted to know what would happen to global availability if that region or "metro" went down, so the company worked towards a goal, in November of last year, of testing its resilience by physically unplugging the fiber network to its SJC data centers.

"In a world where natural disasters are more and more prevalent, it's important that we consider the potential impact of such events on our data centers," the team who ran the project explained in a detailed blog post.

SEE: What is cloud computing? Everything you need to know about the cloud explained

The company stores file content and metadata about files and users. Magic Pocket splits content files into blocks and replicates them across its infrastructure in different regions. The system is designed to serve block data independently from different data centers concurrently in the event that a datacenter goes down, making it a so-called 'active-active' system.

Dropbox was seeking the same active-active architecture for its metadata stack. But back then, its main MySQL database for metadata was in the SJC and it hadn't properly tested its failover or active-passive capability. It wanted to test how its database in SJC would failover to a replicated MySQL database at its passive data center in Idaho. A failover test in 2015 was successful but its engineers realized active-active architecture for metadata would be harder than for block storage.

The company's engineers settled on active-passive for metadata and in 2019 began running many failover tests.

But then in May 2020, a "critical failure" in Dropbox's failover tooling "caused a major outage, costing us 47 minutes of downtime." The company kicked off an emergency audit of its failover tooling and processes, and created a dedicated seven-person disaster recovery team whose goal was to slash the Recovery Time Objective (RTO) by the end of 2021.

"We realized the best way to ensure we did not have any dependency on the active metro was to perform a disaster recovery test where we physically unplugged SJC from the rest of the Dropbox network," the company explains.

"If unplugging SJC proved to have minimal impact on our operations, this would prove that in the event of a disaster affecting SJC, Dropbox could be operating normally within a matter of hours. We called this project the SJC blackhole."

After ensuring critical services running in SJC were multi-homed, running from another metro other than SJC, the team decided how they would simulate the complete loss of SJC.

Initially, Dropbox planned to isolate SJC from the network by draining the metro's network routers, but opted for the more drastic measure of unplugging the network fiber.

"While this would have gotten the job done, we ultimately landed on a physical approach that we felt better simulated a true disaster scenario: unplugging the network fiber!"

It carried out two test runs via two datacenters in its Dallas Fort Worth (DFW) metro (DFW4 and DFW5), but the first test, which unplugged DFW4, was deemed a failure because it impacted global availability and the test was ended early. Dropbox incorrectly assumed that DFW4 and DFW5 were roughly equivalent and didn't account for cross-facility dependencies.

SEE: Cloud computing is the key to business success. But unlocking its benefits is hard work

A few weeks later, engineers ran a new test that would blackhole the entire DFW metro. Engineers at each of the two facilities unplugged the fiber on command.

Dropbox observed no impact to availability and maintained the blackhole for the full 30 minutes and it was deemed a success.

At 5pm PT on Thursday, November 18, 2021, Dropbox finally ran the major test at its SJC, where engineers unplugged each of the metro's three datacenters one by one. Dropbox passed the 30 minute blackhole threshold without observing an impact to global availability, although some internal services were impacted.

"Yeah, we know, this probably sounds a bit anti-climactic. But that's exactly the point! Our detail-oriented approach to preparing for this event is why the big day went so smoothly," the company explained.

It's still not an active-active architecture, but Dropbox says it is confident that, without SJC, Dropbox as a whole could still survive a major outage in that metro, noting that the approach had proved that it "now had the people and processes in place to offer a significantly reduced RTO and that Dropbox could run indefinitely from another region without issue. And most importantly, our blackhole exercise proved that, without SJC, Dropbox could still survive."

See the original post:
Dropbox: We unplugged a data center to test our disaster readiness. Here's how it went - ZDNet


5 Must-Have Features of Backup as a Service For Hybrid Environments – CIO

As the value and business criticality of data increases, so do the challenges of backup, recovery, and data management. These challenges are exacerbated by exploding data growth, increasing SLA requirements, and an evolving threat and compliance landscape. Although hybrid cloud environments provide businesses with greater agility, protecting and managing apps and data across the core data center, private cloud, and public cloud can prove increasingly complex and costly.

Many IT organizations that have cobbled together data protection solutions over the years now find themselves saddled with rigid, siloed infrastructure based on an equally rigid backup approach. Organizations that postpone modernization complain about being stuck in maintenance mode with disparate, multi-vendor backup and recovery systems that are complex and expensive to maintain. Multiple touch points of administration slow down production, and the costs of software licensing, disruptive upgrades, and over-provisioning can add up fast. Resources are tight but not protecting business-critical data and apps can jeopardize the health of the business.

This situation presents a challenge for IT organizations to find a solution that simply and reliably safeguards your most important and valuable assets. Modern cloud services are designed to do a better job protecting data and apps in hybrid cloud environments, and to simplify operations and keep costs down.

When it comes to data protection modernization, most businesses realize they cannot afford to wait. According to ESG, 57% of organizations expect to increase spending on data protection in 2022, and 26% identify data backup and recovery as a top-5 area of data center modernization planned for the next 12 to 18 months.

If you are considering the transition to data protection as a service (DPaaS), you're not alone. Recent research indicates the importance of these emerging services in cloud-centric strategies. According to IDC, DPaaS is the fastest-growing segment of the data protection market, with a forecast 19.1% CAGR through 2025.

In large part, that's because cloud services improve efficiencies by reducing cost, risk, and complexity. Protecting data on premises and in a hybrid cloud environment sets you up to deliver on future SLAs, enabling you to meet demanding RPOs and RTOs while keeping your business moving ahead.

New backup as a service offerings have redefined backup and recovery with the simplicity and flexibility of the cloud experience. Cloud-native services can eliminate the complexity of protecting your data and free you from the day-to-day hassles of managing the backup infrastructure. This innovative approach to backup lets you meet SLAs in hybrid cloud environments and simplifies your infrastructure, driving significant value for your organization.

Resilient data protection is key to always-on availability for data and applications in today's changing hybrid cloud environments. While every organization has its own set of requirements, I would advise you to focus on cost efficiency, simplicity, performance, scalability, and future-readiness when architecting your strategy and evaluating new technologies. The simplest choice: a backup as a service solution that integrates all of these features in a pay-as-you-go consumption model.

Modern solutions are architected to support today's challenging IT environments. Introduced in September 2021 as a part of HPE GreenLake for data protection, HPE Backup and Recovery Service is designed to deliver five key benefits to hybrid cloud environments:

HPE Backup and Recovery Service brings the simplicity of the cloud experience to on-prem and cloud environments. This service breaks down the silos of a typical backup deployment, supporting any primary storage array to protect your VMware virtual machines (VMs), and there's no need to manage backup hardware, software, or cloud infrastructure. The solution can be deployed quickly and managed very simply through a single console. Policy driven, the HPE Backup and Recovery Service easily organizes VMs into protection groups, making it easy to apply policies to multiple VMs or datastores and to automate protection at scale.

No modern data security solution is complete without ransomware protection. The most effective way to protect backup data from cyberattacks is to keep it hidden from attackers. Ransomware can't infect and encrypt what it cannot access.

HPE Backup and Recovery Service creates backup stores which are not directly accessible by the operating system. Backup images are made inaccessible to ransomware, ensuring data backup security and enabling reliable data restores. And with backup data immutability, users can also prevent a backup being inadvertently or maliciously deleted or modified before the configured retention date. Once the retention/immutability date has been set, it cannot be reduced, and the backup is safe from attack or accidental deletion.

Data is the lifeblood of your organization. Data protection modernization provides your organization with an opportunity to:

Get the peace of mind that your data is rapidly recoverable, always secure, and provides value to your business without compromise with the HPE Backup and Recovery Service. See how it works in this 4-minute demo.

You can also register for a 90-day free trial of the HPE Backup and Recovery Service.* All features of the cloud service are available during the trial period and after evaluation, with HPE support.

*Cloud storage capacity is fixed at 5 TB during the trial period.

____________________________________

Ashwin Shetty is a Product Marketing Manager for HPE Storage. In this role, Ashwin is responsible for helping customers understand the value of modernizing data protection with HPE Backup and Recovery Service, HPE StoreOnce, HPE RMC, and HPE StoreEver Tape. Prior to joining HPE, Ashwin worked in the sales and marketing groups of Oracle and HCL.

See more here:
5 Must-Have Features of Backup as a Service For Hybrid Environments - CIO


How Is Cloud Computing Transforming Healthcare Industry – CIO Applications

The broad usage of cloud computing in healthcare extends well beyond simply storing data on cloud infrastructure.

Fremont, CA: There has been a significant shift in the development, use, storage, and distribution of healthcare data. From traditional storage to the digitization of healthcare data, the healthcare sector has come a long way in improving its data management processes.

The broad usage of cloud computing in healthcare extends well beyond simply storing data on cloud infrastructure. Healthcare providers are already embracing this technology to increase efficiencies, optimize processes, reduce healthcare delivery costs, and enable personalization in treatment plans to improve results.

Here are some of the ways cloud consulting is affecting healthcare.

Lowering Of Costs

The fundamental concept of cloud computing is the on-demand availability of computing resources such as data storage and computational power. Hospitals and healthcare professionals are no longer required to buy hardware and servers outright. There are no upfront costs associated with data storage in the cloud. Instead, users pay only for the resources they use, resulting in substantial cost savings.

Ease Of Interoperability

The goal of interoperability is to develop data linkages across the healthcare system, regardless of the site of origin or storage. As a result of interoperability powered by cloud adoption, patient data is readily available for dissemination and for generating insights to support healthcare planning and delivery.

Access To High Powered Analytics

Healthcare data, both structured and unstructured, is a valuable asset. Relevant patient data from many sources can be compiled and processed in the cloud. The use of Big Data analytics and artificial intelligence algorithms on cloud-stored patient data can significantly improve medical research. In addition, processing huge datasets becomes more viable with the cloud's enhanced computational capability.

Patient's Ownership Of Data

Cloud computing democratizes data and empowers people to take charge of their health. It increases patient participation in health-related choices and leads to effective decision-making by serving as a patient education and engagement tool.

Telemedicine Capabilities

Accessing data from anywhere is one of the most significant benefits of cloud storage. Cloud computing and healthcare have the potential to enhance a variety of healthcare-related services such as telemedicine, post-hospitalization care planning, and virtual drug adherence. Telehealth also enhances access to healthcare services.

View post:
How Is Cloud Computing Transforming Healthcare Industry - CIO Applications
