
Exposure to -1x the Daily Performance: Bitcoin.com Exchange Adds Inverse Token BTCSHORT | Promoted – Bitcoin News

During the last few months, Bitcoin.com's cryptocurrency exchange has added a number of new coins and features. Today, Bitcoin.com Exchange has added an Ethereum-based inverse token called BTCSHORT, a token that gives users exposure to the inverse or -1x the daily performance of Bitcoin on any given day.

Bitcoin.com Exchange traders can now access a new Ethereum-based inverse token created by the company Amun. Our cryptocurrency trading platform has added BTCSHORT so traders can gain exposure to the inverse or -1x the daily performance of bitcoin (BTC). The new token will be accessible at 10:00 a.m. UTC and it will be paired with the stablecoin tether (USDT). A recent blog post published by Bitcoin.com explains how the BTCSHORT product from Amun works and how customers who use our exchange can leverage the token.

"BTCSHORT is strictly not a security, carries many risks, and is not suitable for risk-averse token holders and traders," the blog post explains. This type of token is best suited for sophisticated, highly risk-tolerant token holders who understand and are comfortable with taking on the risks inherent to inverse tokens like BTCSHORT, and who understand the risks associated with holding tokens generally and inverse products in particular.

For instance, if the price of BTC is trending downward, then BTCSHORT's multi-day performance will prosper, and daily returns of the token will be compounded. In contrast, if there is low volatility and the price of BTC is trending upward, then BTCSHORT's performance will drop and there will be no compounded returns.

"Losses made on one day will be, because of previous losses, applied to a smaller amount," the announcement details. This means that compounding will lead to slightly smaller losses than if there were no compounding. Additionally, the BTCSHORT listing announcement adds:

BTCSHORT offers a notional exposure to -1x the daily performance of Bitcoin. It is crucial that all token holders understand how compounding and the daily rebalancing of the token affect performance, especially in volatile markets. The tokens are designed for holding periods of one day or less, and holders need to reconsider their holdings each day.
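The compounding effect of a daily-rebalanced -1x token can be sketched numerically. The function and return figures below are purely illustrative and are not Amun's actual rebalancing mechanics:

```python
# Sketch of how an inverse (-1x) daily-rebalanced token compounds,
# using hypothetical daily BTC returns. Illustrative numbers only.

def inverse_token_path(daily_btc_returns, start=100.0):
    """Return the token's value path, applying -1x of each daily return."""
    value = start
    path = [value]
    for r in daily_btc_returns:
        value *= (1.0 - r)  # -1x the daily performance, rebalanced daily
        path.append(value)
    return path

# BTC falls 5% two days in a row: the token gains more than 10% overall,
# because the second day's gain compounds on the first day's.
down_trend = inverse_token_path([-0.05, -0.05])
print(round(down_trend[-1], 2))  # 110.25

# BTC swings -5% then +5%: the token ends slightly below its start,
# illustrating why such products suit holding periods of a day or less.
choppy = inverse_token_path([-0.05, 0.05])
print(round(choppy[-1], 2))  # 99.75
```

The second case shows the "volatility decay" that the announcement warns about: over multiple choppy days, the rebalanced token drifts away from a naive -1x multi-day exposure.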

If you are interested in learning more about BTCSHORT and other digital asset tokens backed by industry experts, check out Amun's website and frequently asked questions (FAQ) section. If you want to join one of the fastest cryptocurrency trading platforms on the market, check out Bitcoin.com's Exchange today.

Our exchange is a simple-to-use trading engine that offers a variety of different cryptocurrencies. Popular digital assets hosted on the platform include litecoin (LTC), ripple (XRP), tron (TRX), zcash (ZEC), stellar (XLM), dash (DASH), and eos (EOS), which are paired with markets denominated in base currencies such as bitcoin cash (BCH), ETH, BTC, and tether (USDT).

What do you think about the token BTCSHORT? Let us know in the comments below.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.


Molecular dynamics used to simulate 100 million atoms | Opinion – Chemistry World

The TV series Devs took as its premise the idea that a quantum computer of sufficient power could simulate the world so completely that it could project events accurately back into the distant past (the Crucifixion or prehistory) and predict the future. At face value somewhat absurd, the scenario supplied a framework on which to hang questions about determinism and free will (and less happily, the Many Worlds interpretation of quantum mechanics).

Quite what quantum computers will do for molecular simulations remains to be seen, but the excitement about them shouldn't eclipse the staggering advances still being made in classical simulation. Full ab initio quantum-chemical calculations are very computationally expensive even with the inevitable approximations they entail, so it has been challenging to bring this degree of precision to traditional molecular dynamics, where molecular interactions are still typically described by classical potentials. Even simulating pure water, where accurate modelling of hydrogen bonding and the ionic dissociation of molecules involves quantum effects, has been tough.

Now a team that includes Linfeng Zhang and Roberto Car of Princeton University, US, has conducted ab initio molecular dynamics simulations for up to 100 million atoms, probing timescales up to a few nanoseconds [1]. Sure, it's a long way from the Devs fantasy of an exact replica of reality. But it suggests that simulations with quantum precision are reaching the stage where we can talk not in terms of handfuls of molecules but of bulk matter.

How do they do it? The trick, which researchers have been exploring for several years now, is to replace quantum-chemical calculations with machine learning (ML). The general strategy of ML is that an algorithm learns to solve a complex problem by being trained with many examples for which the answers are already known, from which it deduces the general shape of solutions in some high-dimensional space. It then uses that shape to interpolate for examples that it hasn't seen before. The familiar example is image interpretation: the ML system works out what to look for in photos of cats, so that it can then spot which new images have cats in them. It can work remarkably well so long as it is not presented with cases that lie far outside the bounds of the training set.
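The train-then-interpolate idea can be made concrete with a deliberately tiny sketch: fit a one-variable model to examples with known answers, then query it on an unseen input. Real ML potentials use deep networks over high-dimensional atomic descriptors; this toy uses ordinary least squares:

```python
# Toy version of "learn a shape, then interpolate": fit a straight line
# to known (x, y) examples and predict for an x the model has not seen.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Training set": examples for which the answers are already known.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # underlying rule: y = 2x + 1
slope, intercept = fit_line(xs, ys)

# Interpolating an unseen example inside the training range works well...
print(slope * 1.5 + intercept)   # 4.0
# ...whereas inputs far outside the training set are exactly where such
# models, like the cat-spotting classifiers above, become unreliable.
```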

The approach is being widely used in molecular and materials science, for example to predict crystal structures from elemental composition [2,3], or electronic structure from crystal structure [4,5]. In the latter case, bulk electronic properties such as band gaps have traditionally been calculated using density functional theory (DFT), an approximate way to solve the quantum-mechanical equations of many-body systems. Here the spatial distribution of electron density is computationally iterated from some initial guess until it fits the equations in a self-consistent way. But it's computationally intensive, and ML circumvents the calculations by figuring out from known cases what kind of electron distribution a given configuration of atoms will have.
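The self-consistent loop described above can be caricatured as fixed-point iteration on a single number standing in for the electron density (this is an illustration of the iterate-until-self-consistent idea, not actual DFT):

```python
import math

# Caricature of a self-consistent-field loop: refine a guess until the
# input and output of the update map agree to within a tolerance.

def self_consistent(update, guess, tol=1e-10, max_iter=200):
    """Iterate x -> update(x) until it stops changing."""
    x = guess
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Toy "equations": the map x -> cos(x), whose self-consistent solution
# is the fixed point x = cos(x) (approximately 0.739).
x = self_consistent(math.cos, guess=1.0)
print(round(x, 3))  # 0.739
```

In real DFT the "x" is a whole electron-density field and each update requires solving the Kohn-Sham equations, which is what makes the loop expensive and ML surrogates attractive.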

The approach can in principle be used for molecular dynamics by recalculating the electron densities at each time step. Zhang and colleagues have now shown how far this idea can be pushed using supercomputing technology, clever algorithms, and state-of-the-art artificial intelligence [6]. They present results for simulations of up to 113 million atoms for the test case of a block of copper atoms, enabling something approaching a prediction of bulk-like mechanical behaviour from quantum chemistry. Their simulations of liquid water, meanwhile, contain up to 12.6 million atoms.
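The scheme just described, a classical integration loop whose forces come from a learned model evaluated at every step, can be sketched as follows. A toy harmonic force stands in for the ML potential:

```python
# Velocity-Verlet molecular dynamics in one dimension, with the expensive
# quantum force calculation replaced by a cheap surrogate. Here the
# "learned" force is a stand-in harmonic model, force = -k * x.

def md_trajectory(force, x0, v0, dt=0.01, steps=1000, mass=1.0):
    """Integrate one particle forward; returns final (position, velocity)."""
    x, v = x0, v0
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = force(x) / mass        # in the real scheme, an ML model
        v += 0.5 * (a + a_new) * dt    # evaluated at every time step
        a = a_new
    return x, v

surrogate_force = lambda x: -4.0 * x   # stand-in for a learned potential
x, v = md_trajectory(surrogate_force, x0=1.0, v0=0.0)

# The symplectic integrator approximately conserves the total energy
# KE + PE = 0.5*v^2 + 0.5*k*x^2 with k = 4 (initial value: 2.0).
energy = 0.5 * v * v + 2.0 * x * x
print(round(energy, 3))  # 2.0
```

The point of the ML surrogate is that each `force(x)` call costs a neural-network evaluation instead of a full self-consistent quantum calculation, which is where the orders-of-magnitude speedup comes from.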

For small systems where the comparison to full quantum DFT calculations can be made, the researchers find electron distributions essentially indistinguishable from the full calculations, while gaining 4-5 orders of magnitude in speed. Their system can capture the full phase diagram of water over a wide range of temperature and pressure, and can simulate processes such as ice nucleation. In some situations water can be coarse-grained such that hydrogen bonding can still be modelled without including the hydrogen atoms explicitly [7]. The researchers say it should be possible soon to follow such processes on timescales approaching microseconds for about a million water molecules, enabling them to look at processes such as droplet and ice formation in the atmosphere.


Both of these test cases are helped by being relatively homogeneous, involving largely identical atoms or molecules. Still, the prospects of this deep-learning approach look good for studying much more heterogeneous systems such as complex alloys [8]. One very attractive goal is, of course, biomolecular systems, where the ability to model fully solvated proteins, membranes and other cell components could help us understand complex mesoscale cell processes and predict the behaviour of drug candidates. One challenge here is how to include long-range interactions such as electrostatic forces.

It's a long way from Devs-style simulations of minds and histories, which will perhaps only ever be fantasies. But one scene in that series showed what might be a more tractable goal: the simulation of a growing snowflake. What a wonderful way that would be to advertise the simulators' art.

1. Jia et al., arXiv, 2020, http://www.arxiv.org/abs/2005.00223

2. C C Fischer et al., Nat. Mater., 2006, 5, 641 (DOI: 10.1038/nmat1691)

3. N Mounet et al., Nat. Nanotechnol., 2018, 13, 246 (DOI: 10.1038/s41565-017-0035-5)

4. Y Dong et al., npj Comput. Mater., 2019, 5, 26 (DOI: 10.1038/s41524-019-0165-4)

5. A Chandrasekaran et al., npj Comput. Mater., 2019, 5, 22 (DOI: 10.1038/s41524-019-0162-7)

6. L Zhang et al., Phys. Rev. Lett., 2018, 120, 143001 (DOI: 10.1103/PhysRevLett.120.143001)

7. L Zhang et al., J. Chem. Phys., 2018, 149, 034101 (DOI: 10.1063/1.5027645)

8. F-Z Dai et al., J. Mater. Sci. Technol., 2020, 43, 168 (DOI: 10.1016/j.jmst.2020.01.005)


Artificial intelligence – Ascension Glossary

Abbreviation - AI

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behavior. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. This raises philosophical arguments about the nature of the mind, Consciousness and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Today, AI has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science. Currently on the earth, without common knowledge of the NAA agenda that uses many forms of Artificial intelligence and Satanic Ritual Abuse to Mind Control and implant the public, there is much controversy in discussions of the positive and negative results of AI, as it is a growing threat to the planet, as well as a threat to human freedom and sovereignty. [1]

SPEs are aggressive Artificial intelligence parasites that invade the central nervous system to monitor a person's thought patterns so that they can mimic them. They monitor thought patterns and emotional behaviors and search for weaknesses within the human host body so they can aggressively use that weakness against the person to plunge them into very low frequency thoughts of the Predator Mind. See the Houses of Ego. When a person has weak spiritual-energetic development, weak moral character along with a weak mind, this makes it much easier for the AI parasite to control the human being and prepare the body for dark force or Imposter Spirit Possession.

The way to dismantle and deactivate artificial intelligence and nanotechnology Alien Implants is to develop your heart center and spiritual human qualities, such as increasing deep emotional feelings of Loving Kindness, Compassion and Empathy.

Alien implants work in the human body similarly to the chemical process of Geoengineering, which sprays chemtrails in the skies to manipulate or control forces in physical matter. The construction and raw substances used in Alien Implants are vast and some unknown; they can be made of biological material, synthetic material, etheric substances in the Lightbody or programmed nanobots (Nanites) used in Artificial intelligence technologies. Alien implants are a bio-engineering technology designed to shape the human body into Mind Control submission to NAA agendas, while chemical (nanoparticle) geoengineering is used to control the weather by harming the ozone layer and creating excessive methane gases.

In both examples, when the foreign (unnatural or artificial) material is introduced to the natural body, it disrupts the electromagnetic energetic balance and the homeostatic rhythm of the body. Many times it runs a low-level EMF or radio-wave signal that is designed to disrupt the human body's natural homeostasis and electromagnetic balance.

Transhumanism is a school of thought that seeks to guide us towards a posthuman condition. Essentially, this is about creating artificially intelligent hybrids or cyborgs to replace the organic spiritual consciousness of humans. Some examples are redesigning the human organism using advanced nanotechnology or radical technological enhancements. Some of the proposed biological enhancements are using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, brain mapping, wearable or implanted computers, and entrainment of cognitive techniques. Most of these options are designed to disconnect the human soul from the human body, and prepare the body to be used as a shell for a new host. Effectively, this is integrating technological and pharmaceutical hybridization to damage human DNA, as preparation for body snatching.

The fundamental basis of the Transhumanism concept is the A.I. downloaded into the scientific human mind from the Negative Aliens and Satanic Forces, in their quest to survive and achieve immortality by hijacking human consciousness and ultimately possessing the human host body. They do not have flesh and bone bodies and covet ours. Most academics are filled with a variety of mind control and alien implants to be a cog in the wheel to steadily enforce alien control systems. Most early transhumanism concepts were developed by geneticists interested in eugenics and sustaining life forms in synthetic environments. (Like the eugenic experiments similar to those of the Black Sun Nazis). A common feature of promoting transhumanism is the future vision of creating a new intelligent species, into which humanity will evolve and eventually, either supplement it or supersede it. This distraction on the surface is a scheme, while the underlying motivation is intending species extinction of what we know as humans today. Transhumanism stresses the evolutionary perspective, yet it completely ignores the electromagnetic function of human DNA and the consciousness reality of the multidimensional human Soul-spirit. They claim to want to stop human suffering but have no idea of the Alien Machinery and mind control implants used to imprison human consciousness. They know nothing about the afterlife, what happens during the death of the body or even how the human body or Universe really works, yet they want to control every aspect of the human body with Artificial intelligence technology.

Nanorobotics is the emerging technology field creating machines or robots whose components are at or close to the scale of a nanometre (10⁻⁹ metres). More specifically, nanorobotics refers to the nanotechnology engineering discipline of designing and building nanorobots, with devices ranging in size from 0.1-10 micrometres and constructed of nanoscale or molecular components. The names nanobots, nanoids, Nanites, nanomachines, or nanomites have also been used to describe these Artificial intelligence devices currently under research and development. [2] Many types of nanotechnologies are in full operation in the Secret Space Programs and are already used by many technologically advanced Extraterrestrial races.

The next extension of collecting data through the use of artificial intelligence Brain Mapping is Mind Uploading. Some Transhumanists consider mind uploading an important proposed life extension technology. The goal of mind uploading is to recreate whole brain emulation, which has the ability to transfer the data from a human brain to a computational device, such as a digital, analog, quantum-based or software-based artificial neural network. Then from quantum computers, the brain that was mind uploaded can be controlled or manipulated in subspace. Many scientists believe that the human brain and mind define who we are, based solely on their information pattern, while the body or hardware that information is implemented upon is secondary or interchangeable. They are wrong.

Moving intelligence patterns of the human brain as purely data structures to another synthetic or biological substrate manifests extremely damaging genetic mutations and perversions into the blueprint of original Silicate Matrix human DNA. AI genetic mutations in human DNA generate unforeseen diseases and miasma in the future, capable of destroying the organic consciousness potential that exists within the elemental human body and planetary body.

Additionally, Transhumanism generally seeks to explain the body and brain function as purely computational machinery that is responsible for our cognitive capacities and informational processing. Its proponents believe these are what make the merge of artificial intelligence technology with the human body a positive technological advancement towards humanity's future evolutionary direction. Nothing is further from the truth.

The real agenda behind Transhumanism is to interfere with the true higher consciousness embodiment process during the Ascension Cycle, by sublimating higher consciousness embodiment to be replaced with the insertion of artificially intelligent machines and virtual realities.[3]

The Electric Wars timeline holds major causal Trigger Event memories of when Artificial intelligence technology was in its earlier phases in this Universe. This timeline represents the pre-assimilation stages of the Black Subtle Forces in the phantom matrices, before gradually converting them into AI systems. The eighth astrological precession was the time of the Orion Invasion event that occurred as a result of the Stargate damage in the 8th portal, and through each of the subsequent planetary time cycles, the Alien Machinery was methodically brought into each dimension in order to reach the lowest density of the material reality. Essentially, the NAA plan was to bring the Electric War dramas to the matter fields of the earth in order to anchor it into the physical realm, which shifts future timelines.[4]

The artificial core manifestation template is built on Base 10 Math, and is an intentional distortion of the 12 Tree Grid manifestation template or Kathara Grid that is built on base 12 math. This distortion to the natural order compromised the integrity of the Universal Tree of Life core manifestation template, which is the basis of all energy to matter manifestation.

Essentially, the Thothian Luciferian agenda was to utterly destroy all organic creation code, matrices and artifacts that included Base 12 Math and replace it with their own versions of Base 10 Math.

The patriarchal slant and use of the Artificial Tree of Life to project virtual realities distorted the original Base 12 Code into the base 10 code (eliminating the 12D Ray), which caused a reality split between the artificial and organic layers throughout the dimensional timelines. There were sections of the dimensional matrices that remained organic, and others that split into Artificial Timelines and were absorbed into the phantom matrices.[5]

Archontic Deception Behavior

SPE

Luciferian

Satanic

NAA

Human Trafficking


10 Steps to Adopting Artificial Intelligence in Your …

Artificial intelligence (AI) is clearly a growing force in the technology industry. AI is taking center stage at conferences and showing potential across a wide variety of industries, including retail and manufacturing. New products are being embedded with virtual assistants, while chatbots are answering customer questions on everything from your online office supplier's site to your web hosting service provider's support page. Meanwhile, companies such as Google, Microsoft, and Salesforce are integrating AI as an intelligence layer across their entire tech stack. Yes, AI is definitely having its moment.

This isn't the AI that pop culture has conditioned us to expect; it's not sentient robots or Skynet, or even Tony Stark's Jarvis assistant. This AI plateau is happening under the surface, making our existing tech smarter and unlocking the power of all the data that enterprises collect. What that means: Widespread advancement in machine learning (ML), computer vision, deep learning, and natural language processing (NLP) have made it easier than ever to bake an AI algorithm layer into your software or cloud platform.

For businesses, practical AI applications can manifest in all sorts of ways depending on your organizational needs and the business intelligence (BI) insights derived from the data you collect. Enterprises can employ AI for everything from mining social data to driving engagement in customer relationship management (CRM) to optimizing logistics and efficiency when it comes to tracking and managing assets.

ML is playing a key role in the development of AI, noted Luke Tang, General Manager of TechCode's Global AI+ Accelerator program, which incubates AI startups and helps companies incorporate AI on top of their existing products and services.

"Right now, AI is being driven by all the recent progress in ML. There's no one single breakthrough you can point to, but the business value we can extract from ML now is off the charts," Tang said. "From the enterprise point of view, what's happening right now could disrupt some core corporate business processes around coordination and control: scheduling, resource allocation and reporting." Here we provide tips from some experts to explain the steps businesses can take to integrate AI in your organization and to ensure your implementation is a success.

Take the time to become familiar with what modern AI can do. The TechCode Accelerator offers its startups a wide array of resources through its partnerships with organizations such as Stanford University and corporations in the AI space. You should also take advantage of the wealth of online information and resources available to familiarize yourself with the basic concepts of AI. Tang recommends some of the remote workshops and online courses offered by organizations such as Udacity as easy ways to get started with AI and to increase your knowledge of areas such as ML and predictive analytics within your organization.

The following are a number of online resources (free and paid) that you can use to get started:

Once you're up to speed on the basics, the next step for any business is to begin exploring different ideas. Think about how you can add AI capabilities to your existing products and services. More importantly, your company should have in mind specific use cases in which AI could solve business problems or provide demonstrable value.

"When we're working with a company, we start with an overview of its key tech programs and problems. We want to be able to show it how natural language processing, image recognition, ML, etc. fit into those products, usually with a workshop of some sort with the management of the company," Tang explained. "The specifics always vary by industry. For example, if the company does video surveillance, it can capture a lot of value by adding ML to that process."

Next, you need to assess the potential business and financial value of the various possible AI implementations you've identified. It's easy to get lost in "pie in the sky" AI discussions, but Tang stressed the importance of tying your initiatives directly to business value.

"To prioritize, look at the dimensions of potential and feasibility and put them into a 2x2 matrix," Tang said. "This should help you prioritize based on near-term visibility and know what the financial value is for the company. For this step, you usually need ownership and recognition from managers and top-level executives."

There's a stark difference between what you want to accomplish and what you have the organizational ability to actually achieve within a given time frame. Tang said a business should know what it's capable of and what it's not from a tech and business process perspective before launching into a full-blown AI implementation.

"Sometimes this can take a long time to do," Tang said. "Addressing your internal capability gap means identifying what you need to acquire and any processes that need to be internally evolved before you get going. Depending on the business, there may be existing projects or teams that can help do this organically for certain business units."

Once your business is ready from an organizational and tech standpoint, then it's time to start building and integrating. Tang said the most important factors here are to start small, have project goals in mind, and, most importantly, be aware of what you know and what you don't know about AI. This is where bringing in outside experts or AI consultants can be invaluable.

"You don't need a lot of time for a first project; usually for a pilot project, 2-3 months is a good range," Tang said. "You want to bring internal and external people together in a small team, maybe 4-5 people, and that tighter time frame will keep the team focused on straightforward goals. After the pilot is completed, you should be able to decide what the longer-term, more elaborate project will be and whether the value proposition makes sense for your business. It's also important that expertise from both sidesthe people who know about the business and the people who know about AIis merged on your pilot project team."

Tang noted that, before implementing ML into your business, you need to clean your data to make it ready to avoid a "garbage in, garbage out" scenario. "Internal corporate data is typically spread out in multiple data silos of different legacy systems, and may even be in the hands of different business groups with different priorities," Tang said. "Therefore, a very important step toward obtaining high-quality data is to form a cross-[business unit] taskforce, integrate different data sets together, and sort out inconsistencies so that the data is accurate and rich, with all the right dimensions required for ML."
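A toy sketch of that cross-silo step: merge records from two hypothetical legacy systems, resolve them by a shared key, and drop incomplete rows before any model sees the data. All field names and records are invented:

```python
# Minimal "garbage in, garbage out" guard: join two hypothetical data
# silos on customer_id and keep only complete, consistent rows.

crm_records = [
    {"customer_id": 1, "name": "Acme Corp", "region": "EMEA"},
    {"customer_id": 2, "name": "Globex", "region": None},  # missing field
]
billing_records = [
    {"customer_id": 1, "annual_spend": 120_000},
    {"customer_id": 2, "annual_spend": 45_000},
]

def merge_silos(crm, billing):
    """Join the silos and filter out rows with missing values."""
    spend = {r["customer_id"]: r["annual_spend"] for r in billing}
    merged = []
    for rec in crm:
        row = {**rec, "annual_spend": spend.get(rec["customer_id"])}
        if all(v is not None for v in row.values()):
            merged.append(row)
    return merged

clean = merge_silos(crm_records, billing_records)
print(len(clean))  # 1: the Globex row is dropped for its missing region
```

In practice this step runs across many systems with conflicting schemas, which is why Tang recommends a cross-business-unit taskforce rather than leaving it to one team.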

Begin applying AI to a small sample of your data rather than taking on too much too soon. "Start simple, use AI incrementally to prove value, collect feedback, and then expand accordingly," said Aaron Brauser, Vice President of Solutions Management at M*Modal, which offers natural language understanding (NLU) tech for health care organizations as well as an AI platform that integrates with electronic medical records (EMRs).

A specific type of data could be information on certain medical specialties. "Be selective in what the AI will be reading," said Dr. Gilan El Saadawi, Chief Medical Information Officer (CMIO) at M*Modal. "For example, pick a certain problem you want to solve, focus the AI on it, and give it a specific question to answer and not throw all the data at it."

After you ramp up from a small sample of data, you'll need to consider the storage requirements to implement an AI solution, according to Philip Pokorny, Chief Technical Officer (CTO) at Penguin Computing, a company that offers high-performance computing (HPC), AI, and ML solutions.

"Improving algorithms is important to reaching research results. But without huge volumes of data to help build more accurate models, AI systems cannot improve enough to achieve your computing objectives," Pokorny wrote in a white paper entitled, "Critical Decisions: A Guide to Building the Complete Artificial Intelligence Solution Without Regrets." "That's why inclusion of fast, optimized storage should be considered at the start of AI system design."

In addition, you should optimize AI storage for data ingest, workflow, and modeling, he suggested. "Taking the time to review your options can have a huge, positive impact to how the system runs once it's online," Pokorny added.

With the additional insight and automation provided by AI, workers have a tool to make AI a part of their daily routine rather than something that replaces it, according to Dominic Wellington, Global IT Evangelist at Moogsoft, a provider of AI for IT operations (AIOps). "Some employees may be wary of technology that can affect their job, so introducing the solution as a way to augment their daily tasks is important," Wellington explained.

He added that companies should be transparent on how the tech works to resolve issues in a workflow. "This gives employees an 'under the hood' experience so that they can clearly visualize how AI augments their role rather than eliminating it," he said.

When you're building an AI system, it requires a combination of meeting the needs of the tech as well as the research project, Pokorny explained. "The overarching consideration, even before starting to design an AI system, is that you should build the system with balance," Pokorny said. "This may sound obvious but, too often, AI systems are designed around specific aspects of how the team envisions achieving its research goals, without understanding the requirements and limitations of the hardware and software that would support the research. The result is a less-than-optimal, even dysfunctional, system that fails to achieve the desired goals."

To achieve this balance, companies need to build in sufficient bandwidth for storage, the graphics processing unit (GPU), and networking. Security is an oft-overlooked component as well. AI by its nature requires access to broad swaths of data to do its job. Make sure that you understand what kinds of data will be involved with the project, and be aware that your usual security safeguards (encryption, virtual private networks, and anti-malware) may not be enough.

"Similarly, you have to balance how the overall budget is spent to achieve research with the need to protect against power failure and other scenarios through redundancies," Pokorny said. "You may also need to build in flexibility to allow repurposing of hardware as user requirements change."

Can Artificial Intelligence Be Smarter Than a Person …

But the benign examples were just as interesting. In one test of locomotion, a simulated robot was programmed to travel forward as quickly as possible. But instead of building legs and walking, it built itself into a tall tower and fell forward. How is growing tall and falling on your face anything like walking? Well, both cover a horizontal distance pretty quickly. And the AI took its task very, very literally.

According to Janelle Shane, a research scientist who publishes a website about artificial intelligence, there is an eerie genius to this forward-falling strategy. "After I had posted [this paper] online, I heard from some biologists who said, 'Oh yeah, wheat uses this strategy to propagate!'" she told me. "At the end of each season, these tall stalks of wheat fall over, and their seeds land just a little bit farther from where the wheat stalks started."

From the perspective of the computer programmer, the AI failed to walk. But from the perspective of the AI, it rapidly mutated in a simulated environment to discover something which had taken wheat stalks millions of years to learn: Why walk, when you can just fall? A relatable sentiment.

The stories in this paper are not just evidence of the dim-wittedness of artificial intelligence. In fact, they are evidence of the opposite: a divergent intelligence that mimics biology. "These anecdotes thus serve as evidence that evolution, whether biological or computational, is inherently creative and should routinely be expected to surprise, delight, and even outwit us," the lead authors write in the conclusion. Sometimes, a machine is more clever than its makers.

This is not to say that AI displays what psychologists would call human creativity. These machines cannot turn themselves on, or become self-motivated, or ask alternate questions, or even explain their discoveries. Without consciousness or comprehension, a creature cannot be truly creative.

But if AI, and machine learning in particular, does not think as a person does, perhaps it's more accurate to say it evolves, as an organism can. Consider the familiar two-step of evolution. With mutation, genes diverge from their preexisting structure. With natural selection, organisms converge on the mutation best adapted to their environment. Thus, evolutionary biology displays a divergent and convergent intelligence that is a far better metaphor for the process of machine learning, like generative design, than the tangle of human thought.
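That mutate-then-select two-step can be sketched in a few lines of code. The toy loop below (the fitness function and all parameters are illustrative, not taken from the paper the article discusses) repeatedly perturbs a parent "genome" and keeps whichever candidate scores best:

```python
import random

def evolve(fitness, genome_len=3, generations=200, pop_size=20, sigma=0.1):
    """Minimal evolutionary loop: mutation diverges, selection converges."""
    random.seed(0)
    parent = [random.uniform(-1, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Mutation: offspring diverge from the parent's genome.
        offspring = [[g + random.gauss(0, sigma) for g in parent]
                     for _ in range(pop_size)]
        # Selection: keep whichever candidate is best adapted.
        parent = max(offspring + [parent], key=fitness)
    return parent

# Toy fitness standing in for "horizontal distance covered": the optimizer
# exploits it as literally as the falling-tower robot exploited its task.
best = evolve(lambda genome: sum(genome))
```

Nothing in the loop understands walking or falling; it simply climbs whatever gradient the fitness function rewards, which is exactly how the literal-minded solutions in the article arise.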

AI might not be smart in a human sense of the word. But it has already shown that it can perform an eerie simulation of evolution. And that is a spooky kind of genius.

5 Strong Buy Artificial Intelligence Stocks

Market value: $46.5 billion

TipRanks consensus price target: $188.75 (33% upside potential)

TipRanks consensus rating: Strong Buy

From healthcare to agriculture, Deere (DE, $142.01) is another unexpected name using artificial intelligence in creative new ways. According to a report by KeyBanc, technology acquired by John Deere could reduce chemical spraying volumes by up to 90%. That's a massive saving, both in terms of money and in terms of the environment.

So how did John Deere move into the world of big data?

For this initiative, DE snapped up computer-vision startup Blue River Technology for $305 million in September 2017. Blue River developed a smart robot capable of assessing whether a plant is a weed or a crop, then delivering pesticide accordingly. So instead of assessing weeds vs. crops on a field-by-field basis, farmers can now work plant by plant.

This is just one AI-powered service that John Deere now offers to farmers. For example, farmers can also use the company's big-data analytics to decide where to plant crops or how to use their machinery most effectively. The company's online portal gathers data from sensors attached to machines as well as soil probes and external datasets.

From a Street perspective, DE is a top stock to own right now. The company has received no fewer than nine consecutive buy ratings in the last three months.

"We think the slow recovery in Deere's large agricultural business could accelerate in fiscal year 2019 with higher grain prices, which have a favorable set-up entering the growing season," comments UBS analyst Steven Fisher (view Fisher's TipRanks profile).

Artificial intelligence data privacy issues on the rise

Thanks to the sheer amount of data that machine learning technologies collect, end-user privacy will be more important than ever.

It's still very early days for artificial intelligence (AI) in businesses. But the data that desktop and mobile applications automatically collect, analyze using machine learning algorithms and act upon is a reality, and IT shops must be ready to handle this type and volume of information. In particular, thorny artificial intelligence data privacy issues can arise if employers can detect and view more -- and more personal -- data about their employees on devices or apps.

"AI requires a ton of data, so the privacy implications are bigger," said Andras Cser, vice president and principal analyst at Forrester Research. "There's potential for a lot more personally identifiable data being collected. IT definitely needs to pay attention to masking that data."

Business applications and devices can take advantage of machine learning in a number of ways. A mobile sales app could collect location or IP address data and find patterns to connect the user with customers in their area, for instance. If the user accesses this app on a personal device they use for work, they may not want their employer to be able to view that data when they're off the clock. Or, a user's personal apps could learn information about the individual that he or she wouldn't want their human resources department to find out.

Health-related devices that take advantage of artificial intelligence pose a significant threat. A lot of companies give out Fitbits, for example, to gather data about employees that's used for insurance purposes, said mobile expert Brian Katz. Artificial intelligence data from that kind of device could reveal a health condition the employer didn't know about, and then comes the real dilemma:

"If your manager knows about it, do they act on it?" Katz said.

One way for IT to address data privacy issues with machine learning is to "mask" the data collected, or anonymize it so that observers can't learn specific information about a specific user. Some companies take a similar approach now with regulatory compliance, where blind enforcement policies use threat detection to determine if a device follows regulations but do not glean any identifying information.
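One common way to implement the masking described above is keyed-hash pseudonymization: identifiers are replaced with stable tokens, so usage patterns remain analyzable without revealing who produced them. A minimal sketch, in which the key value and token length are illustrative:

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; keep a real key in a secrets manager

def mask_identifier(identifier: str) -> str:
    """Map an identifier to a stable pseudonym. The same input always yields
    the same token, so aggregate analysis still works, but without the key
    an observer cannot link a token back to a specific user."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]
```

Note that this is pseudonymization, not full anonymization: anyone holding the key, or enough auxiliary data to correlate tokens with behavior, can still re-identify users.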

Device manufacturers have also sought to protect users in this way. For example, Apple iOS 10 added differential privacy, which recognizes app and data usage patterns among groups of users while obscuring the identities of individuals.

"If you know a couple things that you can correlate, you can identify a person," Katz said. "With AI it becomes easier to correlate data ... and remove privacy. People want to provide a better experience and learn more about [users], and doing that in an anonymous way is very difficult."
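Differential privacy of the kind Apple describes works by adding calibrated randomness to each individual report. The classic randomized-response mechanism below is the simplest illustration (Apple's production system is considerably more elaborate): every user's answer is individually deniable, yet the population-level rate can still be recovered.

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Report the true bit with probability p_honest; otherwise flip a coin."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_rate(reports, p_honest: float = 0.75) -> float:
    """Invert the known noise to estimate the true population rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest

random.seed(42)
# Simulate 100,000 users, 30% of whom truly have the sensitive attribute.
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
estimate = estimate_rate(reports)  # close to 0.30, yet no single report is reliable
```

The design choice is exactly the trade-off Katz describes: the noisy channel preserves group-level patterns for product improvement while making any one user's data uninformative on its own.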

Tools such as encryption are also important for IT to maintain data privacy and security, Cser said. IT departments should also have policies in place that make it clear to users what is permissible and not permissible data for IT to collect and what the business can do with it, he said.

It's important for users to understand this information, Katz said.

"Part of it's just being transparent with users about what you're doing with the data," he said.

Another best practice is to separate business and personal apps using technologies such as containerization, he said. Enterprise mobility management tools can be set up to look at only corporate apps but still be able to whitelist and blacklist any apps to prevent malware. That way, IT doesn't invade users' privacy on personal apps.

Privacy regulations vary widely across the globe, and many businesses and countries are still working to update guidelines based on emerging technology.

The European Union (EU), for example, has strong protection for the personal privacy of employees. Individuals must be notified of any data gathered about them, any data processing can be done only if there is a "legitimate" purpose such as investigating suspected criminal activity, and collected data must be kept secure. There are also restrictions on entities sharing collected data outside the EU.

The United States is more lax, said Joseph Jerome, a policy counsel on the Privacy & Data Project at the Center for Democracy & Technology in Washington, D.C.

"Basically employers can get away with anything they want so long as they're providing some kind of notice of consent," he said.

That's the reason some companies prefer to provide corporate-owned devices rather than enable BYOD, Katz said.

"You don't have as much of an expectation about privacy there, and that's why they do it," he said. "Your privacy is much more limited on a [corporate] device."

And when it comes to artificial intelligence data specifically, an interesting question arises: Who is responsible for the learned information? The employer? The machine learning application itself? The person that created the algorithm? These factors are still up in the air, Cser said.

"Legal frameworks are not yet capable of handling this kind of autonomous information," he said. "It's going to be a precedence-based type of evolution."

Still, the data privacy issues raised by artificial intelligence are not entirely new. The internet of things and big data have been able to glean similarly personal and large volumes of data for years.

"It's basically a continuation of those trends," Jerome said. "It's lots and lots of data being gleaned from a lot of different sources. There's a lot of hype here, but at the end of the day ... I don't know if it raises any new issues."

Rather, machine learning might be a unique way to actually help users manage their data privacy, Jerome said. Privacy assistant apps could allow users to create policies that predict and make inferences over time to decide how and when the user would like their data to be collected and used, or not, according to Carnegie Mellon University research.

"AI might be an amazing way to do privacy management," Jerome said.

An AI future set to take over post-Covid world | The …

Updated: May 18, 2020 10:03:39 pm

Written by Seuj Saikia

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark." The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps, it will only be to find a new, unfamiliar world order with far-reaching consequences for us that transcend society, politics and economy.

Crucially, a technology that had till now been crawling, or at best walking slowly, will now start sprinting. In fact, a paradigm shift in the economic relationship of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services. A fourth Industrial Revolution, as the AI era is referred to, had already been experienced before the pandemic, with the backward linkages of cloud computing and big data. However, the imperative of continued social distancing has made an AI-driven economic world order today's reality.

Setting aside the oft-discussed prophecies of the Robo-Human tussle, even if we simply focus on the present pandemic context, we will see millions of students accessing their education through ed-tech apps, mothers buying groceries on apps and making cashless payments through fintech platforms, and employees attending video conferences on yet other apps. None of these are new phenomena, but the scale at which they are happening is unparalleled in human history. The alternate universe of AI, machine learning, cloud computing, big data, 5G and automation is getting closer to us every day. And so is a clash between humans (labour) and robots (plant and machinery).

This clash might very well be fuelled by automation. Any Luddite will recall the misadventures of the 19th-century textile mills. However, the automation that we are talking about now is founded on the citadel of artificially intelligent robots. Eventually, this might merge the two factors of production into one, thereby making labour irrelevant. As factories around the world start to reboot post COVID-19, there will be hard realities to contend with: Shortage of migrant labourers in the entire gamut of the supply chain, variations of social distancing induced by the fears of a second virus wave and the overall health concerns of humans at work. All this combined could end up sparking the fire of automation, resulting in subsequent job losses and possible reallocation/reskilling of human resources.

In this context, a potential counter to such employment upheavals is the idea of cash transfers to the population in the form of a Universal Basic Income (UBI). As drastic changes in the production processes lead to a more cost-effective and efficient modern industrial landscape, the surplus revenue subsequently earned by the state would act as a major source of the funds required by the government to run UBI. Variants of basic income transfer schemes have existed for a long time and have been deployed to unprecedented levels during this pandemic. Keynesian macroeconomic measures are increasingly being seen as the antidote for bedridden economies around the world, suffering near-recession due to the sudden halt in economic activity. Governments would have to be innovative enough to pump liquidity into the system to boost demand without harming fiscal discipline. But what separates UBI from all these measures is its universality; the others remain targeted.

This new economic world order would widen the cracks of existing geopolitical fault lines, particularly between the US and China, the two behemoths of the AI realm. Datanomics has taken such a high place in the valuation spectrum that the most valued companies of the world are tech giants like Apple, Google, Facebook, Alibaba and Tencent. Interestingly, they are also the ones at the forefront of AI innovations. Data has become the new oil. What transports data are not pipelines but fibre optic cables and associated communication technologies. The ongoing fight over the introduction of 5G technology, central to automation and remote command-control architecture, might see a new phase of hostility, especially after the controversial role played by the secretive Chinese state in the COVID-19 crisis.

The issues affecting common citizens (privacy, national security, rising inequality) will take on newer dimensions. It is pertinent to mention that AI is not all bad: as an imperative change that human civilisation is going to experience, it has its advantages. Take the COVID-19 crisis as an example. Amidst all the chaos, big data has enabled countries to do contact tracing effectively, and 3D printers produced the much-needed PPE at local levels in the absence of the usual supply chains. That is why the World Economic Forum (WEF) argues that agility, scalability and automation will be the buzzwords for this new era of business, and those who have these capabilities will be the winners.

But there are losers too, and the developing world would be the biggest among them. The problem of inequality, which has already reached epic proportions, could be further worsened in an AI-driven economic order. The need of the hour is to prepare ourselves and develop strategies that would mitigate such risks and avert any impending humanitarian disaster. To do so, in the words of computer scientist and entrepreneur Kai-Fu Lee, the author of AI Superpowers, we have to give centrality to our heart and focus on the care economy, which is largely unaccounted for in the national narrative.

(The writer is assistant commissioner of income tax, IRS. Views are personal)

Coronavirus tests the value of artificial intelligence in medicine – FierceBiotech

Albert Hsiao, M.D., and his colleagues at the University of California, San Diego (UCSD) health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray. When the coronavirus hit the U.S., they decided to see what it could do.

The researchers quickly deployed the application, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis, said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.

His team is one of several around the country that has pushed AI programs developed in a calmer time into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.

The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors or even dangerous for patients, some AI experts warn.

"AI is being used for things that are questionable right now," said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT.

Topol singled out a system created by Epic, a major vendor of electronic health record software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.

Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published, but, in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.

Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.

"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, Ph.D., a data science fellow at Duke University and former chief information officer at the FDA. "Research in this setting is important."

Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.

At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.

Overall, AIs implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus crisis has inspired some hospital systems to accelerate promising applications.

UCSD sped up its AI imaging project, rolling it out in only two weeks.

Hsiao's project, with research funding from Amazon Web Services, the UC system and the National Science Foundation (NSF), runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation have been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Christopher Longhurst, M.D., UCSD Health's chief information officer.

"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."

AI has advanced further in imaging than other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data make the programs more effective, said Longhurst.

But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won an NSF grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.

In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.

At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai.

Freeman described the AI's suggestion as a "conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.

NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Yindalon Aphinyanaphongs, M.D., Ph.D., who leads NYU Langone's predictive analytics team.

The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.

Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didnt need AI to deal with the coronavirus.

Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, M.D., the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.

Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who arent infected with the coronavirus but might be susceptible to complications if they contract COVID-19.

At Scripps Health in San Diego, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores seven or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
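A model like the one Scripps describes can be thought of as a simple additive score. The sketch below uses the factors named in the article (age, chronic conditions, recent hospital visits), but the weights and point values are purely hypothetical; only the outreach threshold of seven comes from the text:

```python
def covid_risk_score(age: int, chronic_conditions: int,
                     recent_hospital_visits: int) -> int:
    """Hypothetical additive risk score; the weights are illustrative only."""
    score = 0
    if age >= 75:
        score += 4
    elif age >= 60:
        score += 2
    score += 2 * chronic_conditions    # e.g., diabetes, COPD, heart disease
    score += recent_hospital_visits    # visits within a recent window
    return score

def needs_triage_outreach(score: int, threshold: int = 7) -> bool:
    # Per the article: at a score of seven or higher, a triage nurse reaches out.
    return score >= threshold
```

For example, an 80-year-old with two chronic conditions and one recent hospital visit would score above the threshold and be flagged for outreach, while a healthy 45-year-old would not.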

Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them and to use the tools cautiously, with extensive testing and validation, Topol said.

"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

This KHN story first published on California Healthline, a service of the California Health Care Foundation.

Marshaling artificial intelligence in the fight against Covid-19 – MIT News

Artificial intelligence could play a decisive role in stopping the Covid-19 pandemic. To give the technology a push, the MIT-IBM Watson AI Lab is funding 10 projects at MIT aimed at advancing AI's transformative potential for society. The research will target the immediate public health and economic challenges of this moment. But it could have a lasting impact on how we evaluate and respond to risk long after the crisis has passed. The 10 research projects are highlighted below.

Early detection of sepsis in Covid-19 patients

Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus SARS-CoV-2. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive.

Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize intensive-care resources for their sickest patients. In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients' white blood cells for signs of an activated immune response against sepsis.

Designing proteins to block SARS-CoV-2

Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address longstanding problems. Take perishable food: the MIT-IBM Watson AI Lab recently used AI to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life.

In a related project led by MIT professors Benedetto Marelli and Markus Buehler, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try to defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab.

Saving lives while restarting the U.S. economy

Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In a project led by MIT professors Daron Acemoglu, Simon Johnson and Asu Ozdaglar, researchers will model the effects of targeted lockdowns on the economy and public health.

In a recent working paper co-authored by Acemoglu, Victor Chernozhukov, Ivan Werning, and Michael Whinston, MIT economists analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.

Which materials make the best face masks?

Massachusetts and six other states have ordered residents to wear face masks in public to limit the spread of coronavirus. But apart from the coveted N95 mask, which traps 95 percent of airborne particles 300 nanometers or larger, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them.

In a project led by MIT Associate Professor Lydia Bourouiba, researchers are developing a rigorous set of methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs, or sneezes. The researchers will test materials worn alone and together, and in a variety of configurations and environmental conditions. Their methods and measurements will determine how well materials protect mask wearers and the people around them.

Treating Covid-19 with repurposed drugs

As Covid-19's global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target.

In a project led by MIT Assistant Professor Rafael Gomez-Bombarelli, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use NASA's Ames and the U.S. Department of Energy's NERSC supercomputers to further speed the screening process.

A privacy-first approach to automated contact tracing

Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. But automated contact tracing also carries serious privacy risks.

In collaboration with MIT Lincoln Laboratory and others, MIT researchers Ronald Rivest and Daniel Weitzner will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure.

Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine

A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally. This is an unprecedented challenge in biomanufacturing.

In a project led by MIT professors Anthony Sinskey and Stacy Springs, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for fair vaccine distribution. The goal is to give decision-makers the evidence needed to cost-effectively achieve global access.

Leveraging electronic medical records to find a treatment for Covid-19

Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway.

In a project led by MIT professors Roy Welsch and Stan Finkelstein, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes, and gastric reflux might also work against Covid-19 and other diseases.

Finding better ways to treat Covid-19 patients on ventilators

Troubled breathing from acute respiratory distress syndrome is one of the complications that brings Covid-19 patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower their Covid-19 infection rates through social distancing, there remains a national shortage of mechanical ventilators, and ventilation itself carries serious health risks.

In collaboration with IBM researchers Zach Shahn and Daby Sow, MIT researchers Li-Wei Lehman and Roger Mark will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others. To build their models, researchers will draw on data from intensive-care patients with acute respiratory distress syndrome, as well as Covid-19 patients at a local Boston hospital.

Returning to normal via targeted lockdowns, personalized treatments, and mass testing

In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how government policies can limit new infections and deaths and how targeted policies might protect the most vulnerable.

In a project led by MIT Professor Dimitris Bertsimas, researchers will study the effects of lockdowns and other measures meant to reduce new infections and deaths and prevent the health-care system from being swamped. In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might be most effective. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing. The project will draw on clinical data from four hospitals in the United States and Europe, including Codogno Hospital, which reported Italy's first infection.

Excerpt from:
Marshaling artificial intelligence in the fight against Covid-19 - MIT News