
Cardano price: What price is Cardano today? Altcoin explained – Express

Cryptocurrency has boomed in recent months, with major cryptocurrencies like bitcoin gaining increased interest. Cardano is another cryptocurrency that has come under the spotlight recently, but unlike bitcoin, Cardano is considered an altcoin.

Nick Jones, CEO and co-founder of Scottish crypto wallet Zumo, told Express.co.uk that Cardano is just one example of many different altcoins available on the cryptocurrency market.

Mr Jones said: "In the cryptocurrency space, an altcoin refers to any digital asset that is not bitcoin.

"Bitcoin dominance - the value held in bitcoin as opposed to the total value of the entire cryptocurrency market - currently stands at 46.22 percent.


"In other words, bitcoin accounts for almost half of today's crypto market, leaving just over half to the bitcoin alternatives, or altcoins."

Examples of altcoins that have gained attention in recent months include Cardano, Ethereum and Dogecoin, the latter of which has been referenced frequently in tweets by Tesla CEO, Elon Musk.

But like all cryptocurrencies, Cardano and other altcoins are considered high-risk and are known for being volatile.

Mr Jones added: "Cardano is an example of one of these altcoins. It was founded in 2015 by Ethereum co-founder Charles Hoskinson.

"Like bitcoin or Ethereum, Cardano has its own separate blockchain, or network for processing transactions.

"It is claimed to be the first blockchain platform developed based on peer-reviewed research, and is targeting a broad range of use cases across education, retail, agriculture, government finance and healthcare.

"Cardano has captured public attention for a couple of notable reasons. First, it has a stated emphasis on inclusivity and positive global change.

"Founder Charles Hoskinson has a particular interest in Africa, and one recent example has been Cardano's involvement in Ethiopia, where the Ethiopian government will be using the Cardano blockchain to track student performance.

"Cardano is also viewed by some as an ESG-friendly blockchain. This is because it operates on a proof-of-stake mechanism that is said to require less energy and computing power than bitcoin's proof-of-work algorithm.

"Like Ethereum, Cardano will also incorporate smart contracts - computer programs stored on the blockchain that automatically execute when specific conditions are met, opening up a range of practical applications."
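The core idea of a smart contract, code that executes automatically once a stated condition is met, can be sketched in a few lines. The escrow contract, parties, and condition below are invented for illustration and are not Cardano (Plutus) code:

```python
# Toy illustration of a smart contract's core idea: logic that releases
# locked funds automatically once a condition is met, with no third party.
# This is NOT Cardano/Plutus code; all names here are hypothetical.

class EscrowContract:
    def __init__(self, amount, release_condition):
        self.amount = amount                        # funds locked in the contract
        self.release_condition = release_condition  # predicate checked on-chain
        self.released = False

    def try_release(self, state):
        """Release the locked funds if the condition holds for `state`."""
        if not self.released and self.release_condition(state):
            self.released = True
            return self.amount  # paid out to the beneficiary
        return 0                # condition not met (or already paid); nothing happens

# Example: pay out once a delivery is confirmed on-chain.
contract = EscrowContract(100, lambda state: state.get("delivered", False))
assert contract.try_release({"delivered": False}) == 0
assert contract.try_release({"delivered": True}) == 100
```

The point of the sketch is only that the "contract" is ordinary conditional logic; what the blockchain adds is tamper-proof storage and guaranteed execution.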

DON'T MISS:Bitcoin warning as experts identify 'significant limitation'[WARNING]Crypto crackdown: Police seize 300million in digital cash[INSIGHT]Cryptocurrency collapse: Economists panic over Tether[ANALYSIS]

Cardano's price on Wednesday was a significant drop from its price a week prior.

On July 7, Cardano was priced at $1.43 (£1.03), before dropping to $1.32 (£0.95) on July 10.

The information in this article does not equate to financial advice.

Anyone considering investing in cryptocurrency should understand the risks involved.


Is the rise of altcoins going to continue for a while? – FXStreet

The market is slowly recovering after the recent drop, and some coins have already come back to the green zone.

Top coins by CoinMarketCap

Despite today's rise, the price of Bitcoin (BTC) has declined by 6% over the last week.

BTC/USD chart by TradingView

After a false breakout of the $32,190 mark, the chief crypto is coming back to the resistance zone. However, the trading volume is low, which means that traders have not accumulated enough power for a continued rise. But if bulls manage to do it, there is a good chance of testing the zone of greatest liquidity around $37,000 next week.
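A "false breakout" of the kind described can be stated precisely: price closes above a resistance level but then falls back below it. A minimal sketch (the price series is illustrative only, not a trading signal):

```python
def is_false_breakout(closes, resistance):
    """True if some earlier close broke above `resistance`
    but the latest close is back below it."""
    broke_out = any(c > resistance for c in closes[:-1])
    return broke_out and closes[-1] < resistance

# Illustrative closes around the $32,190 level discussed above.
closes = [31_900, 32_300, 32_050, 31_795]
assert is_false_breakout(closes, 32_190) is True
```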

Bitcoin is trading at $31,795 at press time.

Binance Coin (BNB) has gained more, rising by 2.58% over the past 24 hours.

BNB/USD chart by TradingView

Binance Coin (BNB) has held above the crucial $300 mark. At the moment, an ongoing rise is more likely than a fall. If the buying volume increases, the level of $340 may be attained by the end of next week.

BNB is trading at $306 at press time.

Cardano (ADA) is also located in a zone of short-term growth, rising by 2.46%.

ADA/USD chart by TradingView

Cardano (ADA) has bounced off the support at $1.16 on low trading volume. However, the altcoin remains trading within the falling channel, which means that bulls have not completely seized the initiative so far.

If a breakout occurs, there is a good chance of a return to the resistance around $1.40 soon.

ADA is trading at $1.19 at press time.

DOGE has gained the most today, rocketing by 9% since yesterday.

DOGE/USD chart by TradingView

DOGE made a false breakout of the resistance zone at $0.20; however, the price has not declined so far. It may mean that buyers are gathering power for another attempt to break the level. If that occurs, the rise may continue to $0.25.

DOGE is trading at $0.18 at press time.

Litecoin (LTC) remains weaker than other coins from the list despite the 2% growth today.

LTC/USD chart by TradingView

Litecoin (LTC) tested the resistance at $124.80 but could not hold above it. This means that bears keep controlling the market and may retest the support at $105 within the next several days.

Litecoin is trading at $120.72 at press time.


Mark Cuban has invested in a new altcoin that’s recovered impressively – Crypto News Flash

Mark Cuban has become one of the biggest cryptocurrency fans in recent months, investing in everything from Ethereum to NFT platforms and DeFi market leaders. He is now increasing his position in a new altcoin that, despite crashing in May like all the other coins, is now seeing an impressive recovery.

Cuban revealed his altcoin portfolio for non-fungible tokens on his platform, known as Lazy. Since then, crypto fans have been on the lookout for changes in his positions to know what he's bullish about. And as trader Tyler Swope spotted recently, Cuban has increased his position in Olympus (OHM).


Cuban bought 75 OHM on Saturday, worth $42,300 at current prices. But it doesn't stop there for the Dallas Mavericks owner and Shark Tank star. He has staked his OHM, a further endorsement of the cryptocurrency. A dig into his Etherscan address shows that he has now staked 536 OHM, worth $302,304 at press time.
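Those dollar figures are internally consistent; a quick arithmetic check using only the numbers quoted above shows both holdings imply the same OHM price of $564:

```python
# Cross-check the article's figures: the 75-OHM purchase and the
# 536-OHM staked balance imply the same per-token price.
price_from_purchase = 42_300 / 75      # $42,300 for 75 OHM
assert price_from_purchase == 564.0    # $564 per OHM

staked_value = 536 * price_from_purchase
assert staked_value == 302_304.0       # matches the quoted $302,304
```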

OHM is the native token of Olympus DAO, a decentralized reserve currency that claims to promote stability and predictability without having a peg for its tokens. The project has only been around for a few months, but in that time it has attracted the interest of several investors, so much so that one month after launch, it spiked to an all-time high of $1,479.

That was in late April. However, May came, and as the greater crypto market tumbled, OHM wasn't spared either. It shed about 90 percent of its value to trade at $164. Since then, it has been recovering well, certainly better than most cryptos, including Bitcoin and Ethereum.

It now trades at $564, and in the past 24 hours, it added 7 percent to its value. However, it's still a relatively small crypto, with CoinGecko pegging its market cap at just $390 million.

Popular YouTuber and analyst Tyler Swope, better known as Chico Crypto, described OHM as "one of the only projects out there that is recovering, and recovering well" since the May crash.

He added: "So unique that even the highly respected Messari put out research on how it works, and according to Messari, OHM has a flywheel effect in which users are incentivized to do what's best for it."


Quantum Computing Market is anticipated to surge at a CAGR of 33.7% over the next ten years – PRNewswire

NEW YORK, July 19, 2021 /PRNewswire/ -- According to the findings of revised market research by Persistence Market Research, the worldwide quantum computing market reached a valuation of around US$ 5.6 Bn in 2020, and is anticipated to surge at a CAGR of 33.7% over the next ten years.
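For context, a 33.7% CAGR compounds quickly: starting from the stated US$ 5.6 Bn in 2020, a ten-year projection works out to roughly US$ 100 Bn. A quick check of that arithmetic:

```python
def project_cagr(start_value, cagr, years):
    """Compound `start_value` at annual growth rate `cagr` for `years` years."""
    return start_value * (1 + cagr) ** years

# US$ 5.6 Bn in 2020 growing at 33.7% per year for ten years.
projected = project_cagr(5.6, 0.337, 10)
print(f"Projected 2030 market size: US$ {projected:.1f} Bn")  # roughly US$ 102 Bn
```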

Major companies are developing quantum computers focused on delivering free access to their quantum systems through cloud platforms, with the objective of creating awareness and a community for developers working on quantum computing technology. Through this new way of offering access, companies are targeting universities, research groups, and organizations focused on quantum computing to practice, test, and develop applications of quantum computing.

Key Takeaways from Market Study

Request for sample PDF of report: https://www.persistencemarketresearch.com/samples/14758

"Growing trend of cost-effective cloud quantum computing along with technological advancements and rising governmental investments to develop quantum computing solutions for commercial applications to propel market growth," says a Persistence Market Research analyst.

Pharmaceutical Industry Preclinical Drug Discovery and Development of Personalized Medicine

Quantum computers are computational devices that use the dynamics of atomic-scale objects to manipulate and store information. Current methods in drug synthesis involve significant approximations at the molecular and atomic level. Material science and pharmaceutical vendors use a variety of computationally exhaustive methods to evaluate molecule matches and predict the positive effects of potential therapeutic approaches.

Ask an expert for any other query: https://www.persistencemarketresearch.com/ask-an-expert/14758

Accurate predictions often require lengthy simulation processes on current binary computing systems, and it can take years and cost millions of dollars to achieve the desired result. There is an opportunity for quantum computing to replace existing binary systems in drug discovery processes, as quantum computers can analyze large-scale molecules in less time. Also, the high computational power of quantum computers opens up the possibility of developing personalized medicines based on an individual's unique genetic makeup.

COVID-19 Impact Analysis

The COVID-19 pandemic has disrupted many industries, including the quantum computing space. Demand for quantum computing software, machine learning, cloud-based quantum computing, artificial intelligence (AI), and quantum computing-as-a-service has been increasing during lockdowns. This is fueling demand for quantum computing software and services.

During the outbreak, manufacturing as well as design and development of quantum computing devices declined by nearly 5%-7% in Q3-Q4 2020, due to falling production across East Asian and North American factories, as both regions are the world's major quantum computing device manufacturers and suppliers. However, according to the report, production had become fairly stable in the first half of 2021, with demand gaining traction again.

Large quantum-computing enterprises in North America, Europe, Canada, China, Australia, India, and Russia are investing in qubit research, while also giving researchers access to cloud-based and commercial cloud services. Overall, the market for quantum computing is projected to grow faster from Q3-Q4 2021 onwards.

Get full access of report: https://www.persistencemarketresearch.com/checkout/14758

Find More Valuable Insights

Persistence Market Research puts forward an unbiased analysis of the global quantum computing market, providing historical demand data (2016-2020) and forecast statistics for the period 2021-2031.

To understand the opportunities in the market, it has been segmented on the basis of component (quantum computing devices, quantum computing software, and services (consulting services, implementation services, and support & maintenance)), application (simulation & testing, financial modeling, artificial intelligence & machine learning, cybersecurity & cryptography, and others), and industry (healthcare & life sciences, banking & financial services, manufacturing, academics & research, aerospace & defense, energy & utilities, IT & telecom, and others) across major regions of the world (North America, Latin America, Europe, East Asia, South Asia & Pacific, and MEA).

Related Reports:

About Persistence Market Research:

Persistence Market Research (PMR), a third-party research organization, operates through an exclusive amalgamation of market research and data analytics, helping businesses ride high irrespective of the turbulence faced on account of financial or natural crunches.

Overview:

Persistence Market Research is always way ahead of its time. In other words, it tables market solutions by stepping into the companies'/clients' shoes much before they themselves have a sneak peek into the market. The proactive approach followed by experts at Persistence Market Research helps companies/clients lay their hands on techno-commercial insights beforehand, so that the subsequent course of action can be simplified on their part.

Contact

Rajendra Singh
Persistence Market Research
U.S. Sales Office: 305 Broadway, 7th Floor, New York City, NY 10007, United States
+1-646-568-7751
USA - Canada Toll-Free: 800-961-0353
Email: [emailprotected]
Visit Our Website: https://www.persistencemarketresearch.com

SOURCE Persistence Market Research Pvt. Ltd.


Quantum Computing for the Future Grid – Transmission & Distribution World

The electric power grid is undergoing unprecedented change. This change is due to decarbonization efforts, increased reliance on renewable and variable generation resources, the integration of distributed energy resources, and transportation electrification. In turn, these changes have required electric utilities to expand their monitoring and measurement efforts through metering infrastructure and distribution automation initiatives. All these efforts have resulted in the collection of mountains of data from the electric grid. While this significant increase in data collection enables better monitoring of the grid and enhanced decision making, we still need a robust computational foundation that can convert all this collected big data into actionable information.

As mathematical challenges increase and data becomes core to modern utility decision-making, our industry needs to make progress and draw from emerging analytics and computing technologies. Quantum computing is a ground-breaking information processing technology that can support efforts to address power system challenges and enable the grid of the future. Given the promising applications to the power grid, this is an area of research that has really caught my attention lately. While quantum computing applications to the power grid have remained mostly unexamined, forward-looking utilities are exploring the next step to enhance these analytics by understanding how emerging quantum computing technologies can be leveraged to provide higher service levels.

Building the future grid will require an overall view of the quantum computing technology applications in power systems, such as the dynamic interaction of the transmission and distribution systems. According to a recent IEEE article by Rozhin Eskandarpour and a team of researchers from the University of Denver Electrical and Computing Engineering Department, current computational technologies might not be able to adequately address the needs of the future grid.

The most notable change is observed in the role of the distribution grid and customers in system design and management. Transmission and distribution systems were frequently operated as distinct systems but are becoming more of an integrated system. The underlying hypothesis was that, at the substation, the transmission system would supply a prescribed voltage, and the distribution system would supply the energy to individual customers. However, as various types of distributed energy resources, including generation, storage, electric vehicles, and demand response, are integrated into the distribution network, there may be distinct interactions between the transmission and distribution systems. Distributed generation's transient and small-signal stability problems are one instance that changes the energy system's dynamic nature. Therefore, developing more comprehensive models that include the dynamic relationships between transmission and distribution systems, and relevant computational tools that can solve such models, will be essential in the future. Furthermore, better scheduling models are needed to design viable deployment and use of distributed energy resources.

Eskandarpour et al. describe other potential quantum computing applications for the power grid, including optimization, planning, and logistics; forecasting; weather prediction; wind turbine design; cybersecurity; grid security; and grid stability.

Given that I am both professionally embedded in covering the newest innovations within the power sector and nearing the end of a Ph.D. program at the University of Denver, it is not particularly surprising that a new university-industry research consortium has caught my attention. I am excited to share about this ground-breaking initiative and its potential role in building the future grid.

The University of Denver, in collaboration with various utilities, has established a consortium related to envisioning the quantum upgraded electric system of tomorrow. QUEST is the clever acronym that has been adopted for this university-industry consortium. The consortium aims to enhance university-industry collaborations to solve emerging challenges in building the future grid by utilizing quantum information and quantum computation. The consortium will develop new quantum models, methodologies, and algorithms to solve a range of grid problems faster and more accurately.

Industry members financially support the QUEST consortium, and membership is voluntary and open to any public or private organization active in the power and energy industry. For more information, contact Dr. Amin Khodaei at the University of Denver, School of Engineering and Computer Science.


Red Hat embraces quantum supremacy as it looks to the future – SiliconANGLE News

Since its founding in 1993, Red Hat Inc. has seen significant growth and witnessed first hand the transformation from an analog to a digital economy.

With years of experience under its belt, Red Hat is looking on the horizon to prepare for emerging technology with its partnership with IBM Corp., giving it a front-row seat to technological progress. The software company employs a variety of experts across different departments to maintain the massive overhead of running a large tech business.

"We typically organize our teams around horizontal technology sectors," said Stephen Watt (pictured, right), distinguished engineer and head of emerging technologies at Red Hat. "I have an edge team, cloud networking team, a cloud storage team, application platforms team. We've got different areas that we attack work and opportunities, but the good ideas can come from a variety of different places, so we try and leverage co-creation with our customers and our partners."

Watt, along with Parul Singh (pictured, left), senior software engineer at Red Hat, and Luke Hinds (pictured, middle), senior software engineer at Red Hat, spoke with John Furrier, host of theCUBE, SiliconANGLE Media's livestreaming studio, during the recent Red Hat Summit. They discussed quantum supremacy, how Red Hat manages its consumers' needs, signature server and more. (* Disclosure below.)

One of the many new technologies emerging is quantum computing, which uses qubits instead of bits and is able to process an exponential amount of data compared to its older counterpart.

"Quantum computers are evolving, and they have been around, but right now you see that they are going to be the next thing," Singh said. "We define quantum supremacy as: say you have any program that you run or any problem that you solve on a classical computer, a quantum computer would be giving you the results faster."

Because quantum computers are not as easily accessible as classical computers, Red Hat has sought out a solution that combines OpenShift's classical components with quantum computing, taking the results and integrating them into classical workloads.

Signature server, or sigstore, is an umbrella organization containing various open-source projects.

"Sigstore will enable developers to sign software artifacts, bills of materials, containers, binaries, all of these different artifacts that are part of a software supply chain," Hinds said. "It's very similar to a blockchain. It allows you to have cryptographic-proof auditing of our software supply chain, and we've made sigstore so that it's easy to adopt, because traditional cryptographic signing tools are a challenge for a lot of developers to implement in their open-source projects."
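The core idea of artifact signing, producing a verifiable digest-plus-signature pair for a build output, can be illustrated without sigstore's actual tooling. The sketch below uses an HMAC over a file digest as a simplified stand-in for the asymmetric signatures and transparency log that sigstore really uses; the key and function names are invented for illustration:

```python
import hashlib
import hmac

def sign_artifact(artifact_bytes, key):
    """Return (digest, signature) for a software artifact.
    A real signer (e.g., sigstore's cosign) uses an asymmetric keypair
    and a transparency log; an HMAC is a simplified stand-in here."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest, signature

def verify_artifact(artifact_bytes, signature, key):
    """Recompute the signature and compare in constant time."""
    _, expected = sign_artifact(artifact_bytes, key)
    return hmac.compare_digest(expected, signature)

key = b"demo-signing-key"            # hypothetical key material
artifact = b"container-image-layer"  # stands in for a container or binary
digest, sig = sign_artifact(artifact, key)
assert verify_artifact(artifact, sig, key)          # untouched artifact verifies
assert not verify_artifact(b"tampered-layer", sig, key)  # tampering is detected
```

The takeaway is the auditing property Hinds describes: any change to the artifact invalidates the recorded signature.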

Open-source boasts the advantage of being transparent, allowing everyone to see the code with no hidden surprises or security issues lurking underneath. Another advantage of open-source software is agency, according to Watt.

"If you're waiting on a vendor to go do something, if it's proprietary software, you don't have much agency to get that vendor to go do that thing. Whereas with open source, if you're tired of waiting around, you can just submit the patch," he said. "So people can then go and take sigstore, run it as a smaller internal service. Maybe they discover a bug. They can fix that bug, contribute it back to the operationalizing piece, as well as the traditional package software, to make it a much more robust and open service. So you bring that transparency and the agency back to the software-as-a-service model as well."

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of Red Hat Summit. (* Disclosure: TheCUBE is a paid media partner for Red Hat Summit. Neither Red Hat Inc., the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



DeepMind’s AlphaFold 2 reveal: Convolutions are out, attention is in – ZDNet

DeepMind, the AI unit of Google that invented the chess champ neural network AlphaZero a few years back, shocked the world again in November with a program that had solved a decades-old problem of how proteins fold. The program handily beat all competitors, in what one researcher called a "watershed moment" that promises to revolutionize biology.

AlphaFold 2, as it's called, was described at the time only in brief terms, in a blog post by DeepMind and in a paper abstract provided by DeepMind for the competition in which they submitted the program, the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition.

Last week, DeepMind finally revealed just how it's done, offering up not only a blog post but also a 16-page summary paper written by DeepMind's John Jumper and colleagues in Nature magazine, a 62-page collection of supplementary material, and a code library on GitHub. A story on the new details by Nature's Ewen Callaway characterizes the data dump as "protein structure coming to the masses."

So, what have we learned? A few things. As the name suggests, this neural net is the successor to the first AlphaFold, which had also trounced competitors in the prior competition in 2018. The most immediate revelation of AlphaFold 2 is that making progress in artificial intelligence can require what's called an architecture change.

The architecture of a software program is the particular set of operations used and the way they are combined. The first AlphaFold was made up of a convolutional neural network, or "CNN," a classic neural network that has been the workhorse of many AI breakthroughs in the past decade, such as triumphs in the ImageNet computer vision contest.

But convolutions are out, and graphs are in. Or, more specifically, the combination of graph networks with what's called attention.

A graph network represents a collection of things in terms of how they are related -- such as people connected by friendships in a social network. In this case, AlphaFold uses information about proteins to construct a graph of how near to one another different amino acids are.
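That proximity graph can be sketched directly: treat each residue as a node and connect pairs whose 3-D coordinates fall within a cutoff distance. The coordinates and cutoff below are made up for illustration; AlphaFold's actual representation is far richer:

```python
import math

def proximity_graph(coords, cutoff):
    """Return edges (i, j) between residues whose Euclidean
    distance is below `cutoff` -- a toy contact graph."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                edges.add((i, j))
    return edges

# Four residues at invented 3-D positions; only close pairs get edges.
coords = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (5, 1, 0)]
assert proximity_graph(coords, cutoff=2.0) == {(0, 1), (2, 3)}
```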


These graphs are manipulated by the attention mechanism that has been gaining in popularity in many quarters of AI. Broadly speaking, attention is the practice of devoting extra computing power to some pieces of input data. Programs that exploit attention have led to breakthroughs in a variety of areas, but especially natural language processing, as in the case of Google's Transformer.
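Concretely, the most common form of attention computes a weighted average of "value" vectors, with weights derived from how well a "query" matches each "key." A minimal scaled dot-product attention in plain Python (toy vectors, no learned parameters, far simpler than anything in AlphaFold 2):

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query: score each key
    against the query, softmax the scores, and use the resulting
    weights to average the value vectors."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# The query matches the first key most strongly, so the output
# lies closest to the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
assert out[0] > out[1]
```

"Devoting extra computing power to some pieces of input" corresponds to the weights: high-scoring inputs dominate the output.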

The part that used convolutions in the first AlphaFold has been dropped in AlphaFold 2, replaced by a whole slew of attention mechanisms.

Use of attention runs throughout AlphaFold 2. The first part of AlphaFold 2 is what's called the EvoFormer, and it uses attention to focus processing on computing the graph of how each amino acid relates to every other amino acid. Because of the geometric forms created in the graph, Jumper and colleagues refer to this operation of estimating the graph as "triangle self-attention."

Echoing natural language programs, the EvoFormer allows the triangle attention to send information backward to the groups of amino acid sequences, known as "multi-sequence alignments," or "MSAs," a common term in bioinformatics in which related amino acid sequences are compared piece by piece.

The authors consider the MSAs and the graphs to be in a kind of conversation thanks to attention -- what they refer to as a "joint embedding." Hence, attention is leading to communication between parts of the program.

The second part of AlphaFold 2, following the EvoFormer, is what's called a Structure Module, which is supposed to take the graphs that the EvoFormer has built and turn them into specifications of the 3-D structure of the protein, the output that wins the CASP competition.

Here, the authors have introduced an attention mechanism that calculates parts of a protein in isolation, called an "invariant point attention" mechanism. They describe it as "a geometry-aware attention operation."

The Structure Module initiates particles at a kind of origin point in space, which you can think of as a 3-D reference field, called a "residue gas," and then proceeds to rotate and shift the particles to produce the final 3-D configuration. Again, the important thing is that the particles are transformed independently of one another, using the attention mechanism.

Why is it important that graphs, and attention, have replaced convolutions? In the original abstract offered for the research last year, Jumper and colleagues pointed out a need to move beyond a fixation on what are called "local" structures.

Going back to AlphaFold 1, the convolutional neural network functioned by measuring the distance between amino acids, and then summarizing those measurements for all pairs of amino acids as a 2-D picture, known as a distance histogram, or "distogram." The CNN then operated by poring over that picture, the way CNNs do, to find local motifs that build into broader and broader motifs spanning the range of distances.
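A distogram of that kind is straightforward to compute: take every pairwise residue distance and record which bin it falls into, yielding a 2-D map indexed by residue pair. A toy version (coordinates and bin edges invented; the real AlphaFold 1 predicted distance *distributions*, not measured ones):

```python
import math

def distogram(coords, bin_edges):
    """For each residue pair (i, j), record the index of the first
    distance bin the pair falls under; -1 if beyond the last edge.
    Returns an NxN grid -- a toy 2-D distance histogram."""
    n = len(coords)
    grid = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.dist(coords[i], coords[j])
            bin_idx = -1
            for b, edge in enumerate(bin_edges):
                if d < edge:
                    bin_idx = b
                    break
            grid[i][j] = bin_idx
    return grid

# Three residues on a line; distances 0, 1, and 4 land in bins 0, 1, and 2.
coords = [(0, 0, 0), (1, 0, 0), (4, 0, 0)]
grid = distogram(coords, bin_edges=[0.5, 2.0, 5.0])
assert grid[0][0] == 0 and grid[0][1] == 1 and grid[0][2] == 2
```

The CNN in AlphaFold 1 then scanned this grid like an image, which is exactly why it favored the local motifs described above.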

But that orderly progression from local motifs can ignore long-range dependencies, which are one of the important elements that attention supposedly captures. For example, the attention mechanism in the EvoFormer can connect what is learned in the triangle attention mechanism to what is learned in the search of the MSA -- not just one section of the MSA, but the entire universe of related amino acid sequences.

Hence, attention allows for making leaps that are more "global" in nature.

Another thing we see in AlphaFold is the end-to-end goal. In the original AlphaFold, the final assembly of the physical structure was simply driven by the convolutions, and what they came up with.

In AlphaFold 2, Jumper and colleagues have emphasized training the neural network from "end to end." As they say:

"Both within the Structure Module and throughout the whole network, we reinforce the notion of iterative refinement by repeatedly applying the final loss to outputs then feeding the outputs recursively to the same modules. The iterative refinement using the whole network (that we term 'recycling' and is related to approaches in computer vision) contributes significantly to accuracy with minor extra training time."

Hence, another big takeaway from AlphaFold 2 is the notion that a neural network really needs to be constantly revamping its predictions. That is true not only for the recycling operation but also in other respects. For example, the EvoFormer, the thing that makes the graphs of amino acids, revises those graphs at each of the multiple stages, what are called "blocks," of the EvoFormer. Jumper and team refer to these constant updates as "constant communication" throughout the network.
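The "recycling" idea can be sketched as a fixed-point loop: feed the network's output back in as input and stop once the predictions stop changing. The refinement function below is a made-up stand-in for the real network, which recycles full structure predictions rather than a single number:

```python
def recycle(initial, refine, max_cycles=50, tol=1e-6):
    """Repeatedly feed the output back through `refine`, stopping
    once successive outputs differ by less than `tol` -- a toy
    version of AlphaFold 2's recycling loop."""
    current = initial
    for _ in range(max_cycles):
        updated = refine(current)
        if abs(updated - current) < tol:
            break  # the model "can no longer improve"
        current = updated
    return current

# Stand-in refinement: each pass moves the estimate halfway toward 1.0,
# mimicking incremental improvement across cycles.
result = recycle(0.0, lambda x: (x + 1.0) / 2)
assert abs(result - 1.0) < 1e-3
```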

As the authors note, through constant revision, the Structure piece of the program seems to "smoothly" refine its models of the proteins. "AlphaFold makes constant incremental improvements to the structure until it can no longer improve," they write. Sometimes, that process is "greedy," meaning, the Structure Module hits on a good solution early in its layers of processing; sometimes, it takes longer.


In any event, in this case the benefits of training a neural network -- or a combination of networks -- seem certain to be a point of emphasis for many researchers.

Alongside that big lesson, there is an important mystery that remains at the center of AlphaFold 2: Why?

Why is it that proteins fold in the ways they do? AlphaFold 2 has unlocked the prospect of every protein in the universe having its structure revealed, which is, again, an achievement decades in the making. But AlphaFold 2 doesn't explain why proteins assume the shape that they do.

Proteins are amino acids, and the forces that make them curl up into a given shape are fairly straightforward -- things like certain amino acids being attracted or repelled by positive or negative charges, and some amino acids being "hydrophobic," meaning, they stay farther away from water molecules.

What is still lacking is an explanation of why it should be that certain amino acids take on shapes that are so hard to predict.

AlphaFold 2 is a stunning achievement in terms of building a machine to transform sequence data into protein models, but we may have to wait for further study of the program itself to know what it is telling us about the big picture of protein behavior.

See the article here:
DeepMind's AlphaFold 2 reveal: Convolutions are out, attention is in - ZDNet

DeepMind open-sources AlphaFold 2 for protein structure predictions – VentureBeat

DeepMind this week open-sourced AlphaFold 2, its AI system that predicts the shape of proteins, to accompany the publication of a paper in the journal Nature. With the codebase now available, DeepMind says it hopes to broaden access for researchers and organizations in the health care and life science fields.

The recipe for proteins, large molecules consisting of amino acids that are the fundamental building blocks of tissues, muscles, hair, enzymes, antibodies, and other essential parts of living organisms, is encoded in DNA. It's these genetic definitions that circumscribe their three-dimensional structures, which in turn determine their capabilities. But protein folding, as it's called, is notoriously difficult to figure out from a corresponding genetic sequence alone: DNA contains only information about chains of amino acid residues, not those chains' final form.

In December 2018, DeepMind attempted to tackle the challenge of protein folding with AlphaFold, the product of two years of work. The Alphabet subsidiary said at the time that AlphaFold could predict structures more precisely than prior solutions. Its successor, AlphaFold 2, announced in December 2020, improved on this to outperform competing protein structure prediction methods for a second time. In the results from the 14th Critical Assessment of Structure Prediction (CASP14), AlphaFold 2 had average errors comparable to the width of an atom (about 0.1 nanometers), competitive with the results from experimental methods.

AlphaFold draws inspiration from the fields of biology, physics, and machine learning. It takes advantage of the fact that a folded protein can be thought of as a spatial graph, where amino acid residues (amino acids contained within a peptide or protein) are nodes and edges connect the residues in close proximity. AlphaFold leverages an AI algorithm that attempts to interpret the structure of this graph while reasoning over the implicit graph its building using evolutionarily related sequences, multiple sequence alignment, and a representation of amino acid residue pairs.
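
The spatial-graph view described here can be illustrated with a toy contact graph: each residue becomes a node, and an edge connects any pair of residues whose coordinates fall within a cutoff distance. The coordinates and cutoff below are invented for illustration and are not AlphaFold's actual parameters.

```python
import math

# Hypothetical 3D coordinates (arbitrary units) for five residues.
residue_coords = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (5, 1, 0), (9, 0, 0)]

def contact_graph(coords, cutoff=2.0):
    """Return edges (i, j) for residue pairs closer than `cutoff`."""
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) < cutoff:
                edges.append((i, j))  # residues i and j are in contact
    return edges

print(contact_graph(residue_coords))  # only nearby residues are linked
```

Reasoning over a structure like this, rather than over the raw sequence alone, is what the spatial-graph framing buys: proximity in space, not just adjacency in the chain, defines the relationships the model must capture.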

In the open source release, DeepMind says it significantly streamlined AlphaFold 2. Whereas the system took days of computing time to generate structures for some entries to CASP, the open source version is about 16 times faster. It can generate structures in minutes to hours, depending on the size of the protein.

DeepMind makes the case that AlphaFold, if further refined, could be applied to previously intractable problems in the field of protein folding, including those related to epidemiological efforts. Last year, the company predicted several protein structures of SARS-CoV-2, including ORF3a, whose makeup was formerly a mystery. At CASP14, DeepMind predicted the structure of another coronavirus protein, ORF8, that has since been confirmed by experimentalists.

Beyond aiding the pandemic response, DeepMind expects AlphaFold will be used to explore the hundreds of millions of proteins for which science currently lacks models. Since DNA specifies the amino acid sequences that comprise protein structures, advances in genomics have made it possible to read protein sequences from the natural world, with 180 million protein sequences and counting in the publicly available Universal Protein database. In contrast, given the experimental work needed to translate from sequence to structure, only around 170,000 protein structures are in the Protein Data Bank.

DeepMind says it's committed to making AlphaFold available at scale and collaborating with partners to explore new frontiers, like how multiple proteins form complexes and interact with DNA, RNA, and small molecules. Earlier this year, the company announced a new partnership with the Geneva-based Drugs for Neglected Diseases initiative, a nonprofit pharmaceutical organization that hopes to use AlphaFold to identify compounds to treat conditions for which medications remain elusive.

Original post:
DeepMind open-sources AlphaFold 2 for protein structure predictions - VentureBeat

AI in Healthcare Market Growing Trade Among Emerging Economies Opening New Opportunities (2021-2031) | Nuance Communications, Inc., DeepMind…

The latest insightSLICE research report on AI in Healthcare promises to deliver reliable and clarifying insights into the real-time market scenario and its trajectory during the 2021-2031 forecast period.

The report sets forth risks and opportunities to help enterprise players commit their resources to areas studied to have strong profit potential. By the same token, it examines prevailing regional statistics and applies methodologies to forecast their influence.

Get a FREE PDF Sample of this Report @ https://www.insightslice.com/request-sample/489

Major Companies:

Nuance Communications, Inc., DeepMind Technologies Limited, IBM Corporation, Intel Corporation, Microsoft and NVIDIA Corporation.

The global, regional, and other market statistics including CAGR, financial statements, volume, and market share mentioned in this report can be easily relied upon in light of their high precision and authenticity. The report also provides a study on the current and future demand of the Global AI in Healthcare Market.

Major Applications of the Market are:

virtual assistants, robot assisted surgery, connected machines, diagnosis, clinical trials and others

Major Types of the Market are:

Component: hardware, software and services

For Instant Discount Click here @ https://www.insightslice.com/request-discount/489

Regional Analysis For AI in Healthcare Market

North America (the United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Research Methodology

The AI in Healthcare research primarily suggests a path forward in light of changing market dynamics. It offers detailed analysis of aspects such as volume and revenue, market share contributed by decisive players, regions affecting market trends, consumer patterns and, accordingly, changing monetary value during 2021-2031. The analysis also provides an elaborated breakdown of crucial market drivers and restraints, along with an impact analysis of the cited factors.

Table of Contents

Report Overview: The report overview includes studying the market scope, leading players, market segments and sub-segments, market analysis by type, application, geography, and the remaining chapters that shed light on the overview of the market.

Executive Summary: The report summarizes AI in Healthcare market trends and shares, with market size analysis by region and country. Under market size analysis by region, analysis of market share and growth rate by region is provided.

Profiles of International Players: This section also profiles some of the major players functioning in the Global AI in Healthcare Market, based on various factors such as the company overview, revenue, product offering(s), key development(s), business strategies, Porter's five forces analysis, and SWOT analysis.

Regional Study: The regions and countries mentioned in this research study have been studied based on the market size by application, product, key players, and market forecast.

Key Players: This section of the AI in Healthcare Market report explains about the expansion plans of the leading players, M&A, investment analysis, funding, company establishment dates, revenues of manufacturers, and the regions served.

Request For customization: https://www.insightslice.com/request-customization/489

About Us:

We are a team of research analysts and management consultants with a common vision to assist individuals and organizations in achieving their short and long term strategic goals by extending quality research services. The inception of insightSLICE was done to support established companies, start-ups as well as non-profit organizations across various industries including Packaging, Automotive, Healthcare, Chemicals & Materials, Industrial Automation, Consumer Goods, Electronics & Semiconductor, IT & Telecom and Energy among others. Our in-house team of seasoned analysts hold considerable experience in the research industry.

Contact Info
422 Larkfield Ctr #1001
Santa Rosa, CA 95403-1408
info@insightslice.com
+1 (707) 736-6633

See original here:
AI in Healthcare Market Growing Trade Among Emerging Economies Opening New Opportunities (2021-2031) | Nuance Communications, Inc., DeepMind...

Self-driving data centres: Managing the transition from human-to-AI workload management – IT Brief Australia

Article by Infosys Australia and New Zealand vice president and regional head of delivery & operations, Ashok Mysore.

Organisations have naturally accelerated their digital agendas as employees were forced to work remotely amid the pandemic.

Meanwhile the nature of work has continued to evolve, with data centre workloads having grown exponentially.

While data centre managers have always used conventional tools to react to shifts in workloads, they've never been able to forecast for change. This issue has come to the fore over the past 18 months as workload distribution has been increasingly subject to sudden change.

As a result, AI and automation have become powerful tools in workload management and an essential part of every CIO's strategy. Autonomous technologies help manage workloads within an enterprise's infrastructure in real-time by better identifying workload patterns, matching demands with data centre capacity, spotting anomalies, and predicting breakdowns and outages much earlier.
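
As an illustration of the anomaly-spotting described above, the sketch below flags samples that deviate sharply from a rolling baseline using a simple z-score. The metric values, window size and threshold are invented for illustration and do not represent any vendor's tooling.

```python
import statistics

def flag_anomalies(samples, window=5, threshold=3.0):
    """Flag indices where a sample deviates from the trailing
    window's mean by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)  # sudden departure from recent behaviour
    return anomalies

# Hypothetical CPU utilisation (%) with a sudden spike at index 8.
cpu = [40, 42, 41, 39, 40, 41, 40, 42, 95, 41]
print(flag_anomalies(cpu))
```

A real workload-management system would layer forecasting and capacity matching on top of detection like this, but the core idea is the same: compare live behaviour against a learned baseline.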

The ability to mitigate downtime and keep workload clusters up and running is crucial to maintaining efficient workload operations into the future. For example, Google's DeepMind AI system helped the company achieve a 15% reduction in energy consumption at some data centres, by using algorithms that manipulate computer servers and equipment such as cooling systems.

Infosys' applied AI offering gives enterprises an integrated way to scale and future-proof their business by converging the powers of AI, analytics and cloud to deliver new business solutions.

Beyond improving overall operational efficiency, these technologies can help free the workforce from mundane tasks and create more time for creative thinking and tackling broader business issues.

How easily can your enterprise embrace AI-driven workload management?

It won't be long before all data centre managers are faced with the choice to embrace AI-driven reinvention for revenue growth. But how an organisation handles the transition from human-to-AI workload management depends on its technological maturity, scale of operations and its data centre's dynamism.

The Infosys Cloud Radar Report shows Australian enterprises have led the way in digital and cloud technology investment, but this is predicted to fall over the next few years.

Additionally, the Future of Work study shows that approximately a third of global CEOs are concerned about the availability of critical skills amid the trend towards remote work. It also predicts that the nature of jobs themselves will change, requiring new skills and new methods of attaining them.

Where there are concerns about limited tech workplace talent, like in Australia, accelerating AI adoption and optimising workload management in data centres can contribute to more meaningful work.

Before leaders get comfortable handing essential business responsibilities to a piece of software, significant barriers need to be overcome to build robust and responsible AI-managed workload systems. For example, predictions made by the AI-powered workload management tool and its overall intent must be fully explainable to an enterprise's IT team, otherwise its scalability will be limited. Additionally, AI models are traditionally built for fixed and predictable environments; hence, testing a workload management model for data drift and bias is crucial to avoid blind spots.
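
A minimal sketch of the kind of drift test mentioned above: compare the mean of live inputs against the training distribution and flag a shift beyond some tolerance. The values and the two-standard-deviation threshold here are hypothetical; production drift checks typically use richer distributional tests.

```python
import statistics

def drift_detected(train_values, live_values, threshold=2.0):
    """Flag drift when the live mean sits more than `threshold`
    training standard deviations away from the training mean."""
    train_mean = statistics.mean(train_values)
    train_stdev = statistics.stdev(train_values)
    live_mean = statistics.mean(live_values)
    return abs(live_mean - train_mean) > threshold * train_stdev

train = [50, 52, 48, 51, 49, 50]   # e.g. historical CPU load (%)
live_ok = [49, 51, 50, 52]         # similar to training conditions
live_shifted = [70, 72, 69, 71]    # the workload pattern has changed

print(drift_detected(train, live_ok))       # no drift expected
print(drift_detected(train, live_shifted))  # drift expected
```

Running a check like this on a schedule gives the IT team an explainable signal that the model's assumptions about its environment no longer hold.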

It's encouraging to see more organisations looking for a comprehensive approach to scale enterprise-grade AI for their workload management. AI's powerful automation ability, coupled with predictive insights for almost all workload operations, from maintenance and monitoring to data security, makes the promise of its potential compelling.

For businesses that choose to scale automation and AI investments, new doors to meaningful work will open in the future.

Read more:
Self-driving data centres: Managing the transition from human-to-AI workload management - IT Brief Australia
