
Here Are Five Undervalued Altcoins To Buy in June: Altcoin Daily – The Daily Hodl

Crypto analyst Austin Arnold of Altcoin Daily is naming five of the most undervalued altcoins that he says are ripe for opportunities this new month.

In a new video for his 818,000 subscribers, Arnold lists five cryptocurrencies that are all within the non-fungible token (NFT) space, starting with ECOMI (OMI).

ECOMI is the blockchain that powers the digital collectibles marketplace VeVe. According to Arnold, VeVe's partnerships with pop-culture icons could be a bullish catalyst for the nascent digital asset in the future.

"The integration with pop culture for me is really where ECOMI starts to get exciting because while other cryptocurrencies are still only hoping for adoption, are still hoping for awareness, ECOMI's got it. VeVe has already formed huge partnerships with brands like Warner Brothers, Capcom, and DC Comics."

Altcoin number two on Arnold's list is Superfarm (SUPER). Superfarm is a platform for cross-chain NFT farming with no coding skills required. The trader notes he's bullish on the fact that projects can launch on the platform using a suite of built-in tools, including marketing, fundraising, development, advisory, and more. SUPER is currently trading at $0.77, according to CoinGecko.

Third on the list is MurAll (PAINT), which Arnold believes is way more of a hidden gem in the cryptocurrency space.

MurAll is a collaborative digital canvas that artists can contribute to by using the PAINT token. Each PAINT token can draw two pixels, and just like real-life paint, it can only be used to draw once.

"MurAll is definitely an altcoin to watch and, by the way, is doing something very, very different than all of the other altcoins mentioned today."

At time of writing, PAINT is trading at $0.0004.

The fourth coin on Arnold's radar is Origin Protocol (OGN), which provides a platform for building peer-to-peer marketplaces and e-commerce applications. Users also have the ability to earn yields on its stablecoin OUSD and through staking OGN. Arnold notes that Origin has partnered with popular DJ and recording artist Justin Blau. The trader says the partnership is "a green flag for me."

The last NFT-focused crypto on Arnold's watchlist for June is RFOX, the native token on virtual marketplace ecosystem RedFOX. The project's RFOX VALT aims to build an augmented reality (AR) network with virtual shopfronts, billboards, meeting places and more.

According to RFOX, the team will incorporate VR/AR (virtual reality) and AI (artificial intelligence) in addition to offering digital items, including those issued through blockchain smart contracts and NFTs.

"The reason they make our list today is because they recently announced RFOX VALT, an out-of-this-world immersive commercial experience. So in the most simplistic terms, think of this potentially as the next Decentraland (MANA)."

RFOX is launching a fully immersive virtual ecosystem that offers users a unique experience. This is coupled with the capacity to transition traditional commercial enterprises like merchants into a virtual econosystem built upon unique NFTs that emulate real-world economies.




Ethereum Classic Has Its Strengths, But Consider Other Altcoins Instead – InvestorPlace

Is speculation related to Ethereum (CCC:ETH-USD) the only positive catalyst that can move Ethereum Classic (CCC:ETC-USD) higher? Or does Ethereum Classic have its own merits?


For the most part, only speculation regarding Ethereum can move Ethereum Classic higher. Yet the case can be made that Ethereum Classic is much more than a speculative altcoin that many are betting on.

As InvestorPlace columnist Mark Hake pointed out last month, Ethereum Classic, which is the original Ethereum, has its share of exceptional features. What's known as Ethereum today was actually derived from Ethereum Classic. The split came about following a 2016 hacking incident.

In short, Ethereum Classic is anything but a knock-off of Ethereum, which is the second-most valuable cryptocurrency based on market capitalization. Even so, that doesn't mean that Ethereum Classic will climb further.

It may continue to advance as cryptocurrencies bounce back from last month's crash. But whether Ethereum Classic can climb faster than Ethereum is debatable. Further, other altcoins, like Cardano (CCC:ADA-USD) and Polygon (CCC:MATIC-USD), appear to be more solid contenders to steal Ethereum's lunch.

Those betting on Ethereum Classic rallying on name recognition alone may make some money. But, if you're looking to wager on an altcoin that could soar thanks to its strong utility, Ethereum Classic may not be the ticket.

The dust may have finally settled on the May 2021 cryptocurrency meltdown. Bitcoin (CCC:BTC-USD) has found support between $35,000 and $40,000. Ethereum is holding steady at around $2,700. It may take time for both those names, along with less well-established cryptocurrencies, to start moving sharply higher once again. But, as retail investors are moving towards the sidelines on cryptocurrencies, now may be the time to buy them.

The recent stabilization of cryptocurrencies may only be the calm before another meltdown. Yet, with some people believing that the U.S. dollar will continue to be devalued, it remains easy to make the case that cryptocurrencies are a worthwhile alternative to the American currency.

In short, cryptocurrencies may rebound, despite concerns that they've become overheated. But does that mean Ethereum Classic can climb? Its price has fallen from as high as $176.16 in early May to around $68.50 this afternoon.

A cryptocurrency recovery could give it a boost, but it may not reach last months highs. Besides the general bubble mode cryptocurrencies were in at the time, there was another factor behind Ethereum Classics temporary rally several weeks ago. Specifically, the traders who were covering borrowed positions in Ethereum Classic played a major role in the currencys huge gains.

What about the potential of Ethereum Classic to climb on increased usage? It may rise as DeFi (decentralized finance) gains critical mass. But, given that two other cryptocurrencies stand to gain more from this megatrend, they are better options at this point.

Some speculators may see Ethereum Classic as another way to wager on the rise of DeFi, while Ethereum may be the cryptocurrency that is most widely used for staking and other DeFi transactions. But theres plenty of chatter about other names becoming widely used mediums of exchange as well.

Ethereum Classic may have DeFi capabilities. But it could be a stretch to say that its a leading contender to become an Ethereum killer. Cardano or Polygon fit that profile much better.

Cardano is in the process of implementing major protocol upgrades. Those upgrades could help it become much more widely used in blockchain and DeFi transactions.

Polygon is in a similar situation. It solves many of the pain points associated with Ethereum, including the latter's high transaction fees and congestion under heavy usage. Developers active in Ethereum Classic could put in place changes that will improve its functionality. But, for now, don't count on increased appreciation of its utility sparking another parabolic run.

With other names that have greater odds of becoming the next Ethereum, why bother with Ethereum Classic? Still playing catch up, Ethereum Classic is far from becoming a strong DeFi play.

If cryptocurrencies continue to recover, Ethereum Classic can climb above its current price levels. But it will not bounce back to its all-time high anytime soon.

Some may want to speculate on it becoming widely used in DeFi transactions. But Ethereum remains the most widely used cryptocurrency in that area. Cardano and Polygon are gaining the most ground, and Ethereum Classic will have difficulty catching them.

With more limited potential for gains than other altcoins, Ethereum Classic is not very attractive. Instead, stick with the more promising contenders.

On the date of publication, Thomas Niel held long positions in Bitcoin and Ethereum. He did not have (either directly or indirectly) any positions in any other securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Thomas Niel, contributor for InvestorPlace.com, has been writing single-stock analysis for web-based publications since 2016.


AI vs. Machine Learning: Their Differences and Impacts – CIO Insight

Artificial intelligence (AI) vs. machine learning. Just the words can bring up visions of decision-making computers that are replacing whole departments and divisions, a future many companies believe is too far away to warrant investment. But the reality is, AI is here, and here to stay. And particularly at the enterprise level, a growing number of companies are tuning in to the productivity and promise of machines that can think for themselves.

In fact, a recent study by McKinsey showed that by 2019, venture capital investment in AI had already topped $18.5 billion. And IDC predicted that by 2023, global spending on AI and Machine Learning solutions will reach nearly $98 billion.

All this development promises to have a tremendous impact on every corner of industry. McKinsey recently released figures predicting that by 2030, 375 million workers, about 14 percent of the total global workforce, will need to switch occupations as robots and algorithms take over tasks once done by humans. Yet most analyses project net job gains as a result of AI, like this report from Gartner, which predicts that in the US, AI will displace as many as 1.8 million jobs in the near future, yet produce a net gain of at least 500,000 to two million new jobs as companies expand to absorb the new productivity.

So, with all that in mind, how do you dial back the AI vs. machine learning hype? And how should you be thinking about what cognitive computing can do for your business? Let's take a closer look.

Artificial intelligence is a computer system designed to think the way humans think. That means more than just doing one task well, like, say, Alexa, which responds to your voice command to play your favorite song. True artificial intelligence has the ability to parse data, make decisions, and learn from those decisions to create something new.

AI has been famously used to tackle big problems, like testing drug compounds for curing cancer. Alibaba uses AI not just for predictive advertising on its sites, but also for monitoring cars and creating constantly changing traffic patterns, or helping farmers monitor crops to increase yield. Amazon Go is using AI to rethink the future of retail, creating unmanned convenience stores that monitor your shopping experience and charge you automatically when you walk out the door with an item.

Experimental AI has written novels (badly), played chess against world masters (very well), and parsed the world's medical literature to help doctors make better and more complete diagnoses (and saved lives). With AI platforms like Microsoft Azure, Google Cloud, and many others, developers now have the resources they need to think creatively about AI for their own businesses. Further, AI in the cloud significantly reduces a company's infrastructure costs for the massive computing capacity AI needs to be most useful.

Sometimes, machine learning is used interchangeably with artificial intelligence, but that's not quite correct. Machine learning is actually a subset of artificial intelligence. Machine learning refers to a program that does one task really well by parsing and analyzing data over time. It is only as good as the data flowing into it. However, examples of machine learning are all around us: from Alexa on our tabletops, to the dynamic pricing that goes up or down on a website based on your personal information, to the email that gets automatically filtered to your inbox, to the chatbot that responds when you ask a question on a website.

"Artificial intelligence has promise, and is becoming more feasible for companies to incorporate into their systems," says Sitima Fowler, vice president of marketing for national IT consulting firm Iconic IT. But she recommends most companies start small.

"AI is trendy right now, definitely. But the reality is, most companies will be starting with machine learning, such as bots that parse their user traffic, for instance, to mine data. They might use it for chatbots on their website to direct consumer inquiries to the right information. From there, many companies can use the AI development tools available in the cloud from services like Amazon and Microsoft to develop AI that powers their consumer-facing apps, and so much more. We're all very excited about the future of where artificial intelligence can take us. But it's important to take it one step at a time, so the rest of your systems can integrate and keep up," Fowler said.

"For example, at Iconic IT, we use AI to prevent cyber security breaches. Just simply installing an antivirus and email spam filter on your computer isn't enough. The bad guys have figured out ways around this software. So we incorporate AI on top of this software so it looks at the person's normal behavior and interactions with other people. Over time it learns a user's email habits, communication styles, and contacts to determine if a particular email is legitimate or potentially harmful," she added.


Machine Learning Through The Lens of Econometrics – Analytics India Magazine

While we can predict house prices with accuracy, we cannot use such ML models to answer questions like whether one needs more dining rooms.

Artificial Intelligence has been a force of nature in many fields. From augmenting advancements in health and education to bridging gaps through speech recognition and translation, machine intelligence is becoming more vital to us every day. Sendhil Mullainathan, a professor at the University of Chicago Booth School of Business, and Jann Spiess, an assistant professor at the Stanford Graduate School of Business, observed how machine learning, specifically supervised machine learning, was more empirical than it was procedural. For instance, face recognition algorithms do not use rigid rules to scan for certain pixel patterns. Au contraire, these algorithms utilise large datasets of photographs to predict how a face looks. This means that the machine would use the images to estimate a function f(x) that predicts the presence (y) of a face from pixels (x).
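As a rough, hypothetical illustration of that idea (not code from the article), the sketch below treats each image as a flattened vector of pixel values x and fits a classifier f(x) that predicts a binary label y from labelled examples; the data here are synthetic stand-ins for real photographs.

```python
# Minimal sketch of supervised learning as described above: estimate a function
# f(x) that predicts a label y (face present or not) from pixel values x.
# Synthetic arrays stand in for a real collection of photographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_pixels = 2000, 28 * 28                 # pretend 28x28 grayscale images
X = rng.normal(size=(n_images, n_pixels))          # flattened pixel intensities
true_pattern = rng.normal(size=n_pixels)           # unknown structure in "face" images
y = (X @ true_pattern + rng.normal(size=n_images) > 0).astype(int)  # 1 = face present

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
f = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn f(x) from examples
print("held-out accuracy:", round(f.score(X_test, y_test), 3))
```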


Another discipline that heavily relies on such approaches is econometrics. Econometrics is the application of statistical procedures in economic data to provide empirical analysis on economic relationships. With machine learning being used on data for uses like forecasting, can empirical economists employ ML tools in their work?

Today, we see a considerable change in the kinds of data researchers can work with. Machine learning enables statisticians and analysts to work with data considered too high dimensional for standard estimation methods, such as online posts and reviews, images, and language information. Statisticians could barely use such data types for processes such as regression. In a 2016 study, however, researchers used images from Google Street View to measure block-level income in New York City and Boston. Moreover, a 2013 research project developed a model to use online posts to predict the outcome of hygiene inspections. Thus, we see how machine learning can augment how we research today. Let's look at this in further detail.

Traditional estimation methods, like ordinary least squares (OLS), are already used to make predictions. So how does ML fit into this? To see this, we return to Sendhil Mullainathan and Jann Spiess' work, which was written in 2017, when the former taught and the latter was a PhD candidate at Harvard University. The paper took an example, predicting house prices, for which they selected ten thousand owner-occupied houses (chosen at random) from the 2011 American Housing Survey's metropolitan sample. They included 150 variables on the house and its location, such as the number of bedrooms. They used multiple tools (OLS and ML) to predict log unit values on a separate set of 41,808 housing units for out-of-sample testing.

Applying OLS to this will require making specifically curated choices on which variables to include in the regression. Adding every interaction between variables (e.g. between base area and the number of bedrooms) is not feasible because that would consist of more regressors than data points. ML, however, searches for such interactions automatically. For instance, in regression trees, the prediction function would take the form of a tree that splits at each node, representing one variable. Such methods would allow researchers to build an interactive function class.
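A minimal sketch of that contrast, on simulated data rather than the American Housing Survey sample, might look like the following; the variable names and data-generating process are illustrative assumptions, with OLS fit on the raw regressors while a regression tree is left free to discover the interaction on its own.

```python
# Sketch: OLS on raw regressors vs. a regression tree that finds interactions itself.
# Simulated "housing" data; variable names are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 10_000
area = rng.uniform(50, 300, n)
bedrooms = rng.integers(1, 6, n)
age = rng.uniform(0, 80, n)
# True log price includes an interaction that OLS without interaction terms cannot capture exactly.
log_price = (0.004 * area + 0.05 * bedrooms + 0.002 * area * bedrooms
             - 0.003 * age + rng.normal(0, 0.2, n))

X = np.column_stack([area, bedrooms, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_price, random_state=1)

ols = LinearRegression().fit(X_tr, y_tr)
tree = DecisionTreeRegressor(max_depth=6, random_state=1).fit(X_tr, y_tr)

print("OLS  out-of-sample R^2:", round(r2_score(y_te, ols.predict(X_te)), 3))
print("Tree out-of-sample R^2:", round(r2_score(y_te, tree.predict(X_te)), 3))
```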

One problem here is that a tree with this many interactions would overfit, i.e., it would fit the training sample so closely that it would not generalise to other data sets. This problem can be solved by something called regularisation. In the case of a regression tree, a tree of a certain depth will need to be chosen based on the tradeoff between a worse in-sample fit and a lower degree of overfitting. This level of regularisation will be selected by empirically tuning the ML algorithm, by creating an out-of-sample experiment within the original sample.
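Continuing in the same hypothetical spirit, the level of regularisation (here, tree depth) can be tuned with exactly such an out-of-sample experiment inside the sample, i.e. k-fold cross-validation; the data below are synthetic and the depth grid is an arbitrary choice.

```python
# Sketch: choose tree depth (the regularisation level) via an out-of-sample
# experiment inside the original sample, i.e. k-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 10))                               # stand-in regressors
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 5000)

scores = {d: cross_val_score(DecisionTreeRegressor(max_depth=d, random_state=0),
                             X, y, cv=5, scoring="r2").mean()
          for d in range(2, 13)}
best_depth = max(scores, key=scores.get)
print({d: round(s, 2) for d, s in scores.items()})            # in- vs out-of-sample tradeoff
print("depth chosen by cross-validation:", best_depth)
```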

Thus, picking the ML-based prediction function involves two steps: selecting the best loss-minimising function and finding the optimal level of complexity by empirically tuning it. Trees and their depths are just one such example. Mullainathan and Spiess stated that the technique would work with other ML tools such as neural networks. For their data, they tested this on various other ML methods, including forests and LASSO, and found them to outperform OLS (trees tuned by depth, however, were not more effective than the traditional OLS). The best prediction performance was seen by an ensemble that ran several separate algorithms (the paper ran LASSO, tree and forest). Thus, econometrics can guide design choices to help improve prediction quality.
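A self-contained sketch of that kind of horse race, again on simulated data rather than the paper's housing sample, might compare OLS against cross-validated LASSO, a depth-tuned tree, a random forest, and a simple ensemble that averages the ML predictions:

```python
# Sketch: compare OLS, LASSO, a depth-tuned tree, a random forest and a simple
# ensemble (the average of the three ML predictions) on held-out data.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(8000, 30))
y = X[:, 0] + X[:, 1] * X[:, 2] + 0.5 * np.maximum(X[:, 3], 0) + rng.normal(0, 1, 8000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "OLS": LinearRegression().fit(X_tr, y_tr),
    "LASSO": LassoCV(cv=5).fit(X_tr, y_tr),
    "Tree": GridSearchCV(DecisionTreeRegressor(random_state=0),
                         {"max_depth": list(range(2, 11))}, cv=5).fit(X_tr, y_tr),
    "Forest": RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr),
}
preds = {name: m.predict(X_te) for name, m in models.items()}
preds["Ensemble"] = np.mean([preds["LASSO"], preds["Tree"], preds["Forest"]], axis=0)

for name, p in preds.items():
    print(f"{name:8s} out-of-sample R^2: {r2_score(y_te, p):.3f}")
```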

There are, of course, a few problems associated with the use of ML here. The first is the lack of standard errors on the coefficients in ML approaches. Let's see how this can be a problem: the Mullainathan-Spiess study randomly divided the sample of housing units into ten equal partitions. After this, they re-estimated the LASSO predictor (with the regulariser kept fixed). The results displayed a massive problem: a variable used by the LASSO model in one partition may be unused in another. There were very few stable patterns throughout the partitions.

This does not affect prediction accuracy too much, but it makes the coefficients hard to interpret when two variables are highly correlated. In traditional estimation methods, such correlations are reflected in large standard errors. Due to this, while we can predict house prices with accuracy, we cannot use such ML models to answer questions like whether a variable, e.g. the number of dining rooms, is unimportant just because the LASSO regression did not use it. Regularisation also leads to problems: it allows the choice of less complex but potentially wrong models. It could also bring up concerns of omitted-variable bias.
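That instability is easy to reproduce in miniature: with highly correlated regressors, re-fitting LASSO (penalty held fixed) on random partitions of the same sample tends to select different variables each time, even though predictions barely change. The sketch below is a hypothetical illustration, not the authors' code.

```python
# Sketch: LASSO variable selection is unstable across random partitions of the
# same data when regressors are correlated, even though predictions stay similar.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 5000, 20
base = rng.normal(size=(n, 1))
X = 0.9 * base + 0.4 * rng.normal(size=(n, p))    # 20 highly correlated regressors
y = X[:, :5].sum(axis=1) + rng.normal(0, 1, n)    # truth uses only the first five

parts = np.array_split(rng.permutation(n), 10)    # ten random partitions
for i, idx in enumerate(parts):
    coef = Lasso(alpha=0.05).fit(X[idx], y[idx]).coef_   # regulariser held fixed
    selected = np.flatnonzero(np.abs(coef) > 1e-8)
    print(f"partition {i}: selected variables {selected.tolist()}")
```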

Finally, it is essential to understand the type of problems ML solves. ML revolves around predicting an outcome y from variables x. However, many economic applications revolve around estimating a parameter (a coefficient) that might underlie the relationship between x and y. ML algorithms are not built for this purpose. The danger here is taking an algorithm built for prediction and presuming that its fitted coefficients would have the properties associated with estimation output.

Still, ML does improve prediction, so one might benefit from it by looking for problems with more significant consequences (i.e. situations where improved predictions have immense applied value).

One such category is within the new kinds of data (language, images) mentioned earlier. Analysing such data involves prediction as a pre-processing step. This is particularly relevant in the presence of missing data on economic outcomes. For example, a 2016 study trained a neural network to predict local economic outcomes with the help of satellite data in five African countries. Economists can also use such ML methods in policy applications. An example provided by Mullainathan and Spiess' paper was deciding which teacher to hire. This would involve a prediction task (deciphering the teacher's added value) and help make informed decisions. These tools, therefore, make it clear that AI and ML are not to be left unnoticed in today's world.


Cellarity: Transforming Drug Development at the Confluence of Biology and Machine Learning – BioSpace

In the field of drug discovery, one must always begin with the target, right? Not if you ask Cellarity, a quickly emerging biotech company revolutionizing the drug development space.

Rather than the traditional target-centric approach to drug discovery, Cellarity works at the level of the cell to understand how disease impacts cell behavior via a target-agnostic approach that can help illuminate the most complex diseases science has not yet been able to crack.

"For decades, drug discovery has been about reducing diseases down to a single molecular target that we can drug to influence the course of a given disease," explained Cellarity CEO Fabrice Chouraqui, who is also a CEO-Partner at Flagship Pioneering. "This approach has produced a significant number of breakthrough treatments, but the target-centric assumptions that we make in vitro or in vivo do not often translate into humans. Human biology is far more complex than any single target could ever predict, which is one reason why right now many drugs fail in clinical development. Our approach is different."

Founded in 2019 and already rising to the top of lists such as BioSpace's own Top Life Sciences Startups to Watch in 2021, Cellarity believes there is a better way, one based on the computational modeling of cell behavior.

Cellarity's unique platform generates unprecedented biological insights by combining unique expertise in network biology, high-resolution data, and machine learning. The result is a new understanding of the cell's trajectory from health to disease and how cells relate to one another in tissues. This in turn opens up a world of opportunities for the discovery of novel therapeutics, particularly for complex diseases.

"Diseases are complex and often not linked to a single target in a single cell in a single system," said Chouraqui. "Conditions like T cell exhaustion, metabolic disease, complex neurodegenerative diseases like Alzheimer's: there's a reason we haven't made a lot of progress in these areas. So we asked ourselves if there was a way to work at a higher level, the level of the cell, to really harness the complexity of human biology."

Because Cellarity's pioneering approach does not start with a single molecular target, its scientists are able to uncover a much more diverse set of compounds that can be deeply characterized to understand how they work on both known and previously unknown targets. Indeed, the company's algorithms, data assets and approach were inspired by systems biology.

"In the early 2000s, systems biology proposed that by looking at biology as a whole, we would be able to better understand the interplay between its parts, specifically genes, proteins and pathways in the context of disease and health," said Cellarity Chief Digital and Data Officer Milind Kamkolkar. "However, due to a lack of well-integrated high-resolution data and sophisticated computational power, the industry had no choice but to study biology's parts in the absence of its networks."

A lot has changed since then. In the past five years alone, phenotypic drug discovery has evolved as an alternative to the single-target approach, thanks to advances in high-throughput imaging technology and machine learning. Yet the gap between a drug's success in vitro and an efficacious drug in patients remains immense.

Cellarity's solution: Unlike single-molecule, single-target or phenotypic representations of cellular programs, Cellarity directly targets cellular programs critical to disease, leveraging a platform that systematically addresses the problems of translation beyond simplifying target discovery, toxicity, adverse effects, and drug design.

One key part of the approach is the way Cellarity predicts drugs and their properties by tying them to computationally engineered representations of cell behavior called Cellarity Maps.

"Cellarity Maps give us a much higher-resolution picture of the cellular components of a tissue and really allow us to understand the mechanism of action that one would want to reverse to go from a state of disease to a state of health," said Chouraqui.

Chouraqui believes that there is no limit to where this cell-centric approach can take Cellarity and the field of medicine. His assertion is backed up by the cadre of investors that recently put up $123 million in series B financing.

"Our investors recognized that Cellarity stood out in the field of drug discovery," said Saif Rathore, MD PhD, Cellarity's VP and Head of Strategy and Partnerships. "We are the only company taking a target-agnostic approach that evaluates cell behavior changes and works through product optimization, whereas others in the field are primarily working on optimizing different parts of the target-centric molecular or phenotypic drug discovery processes."

To execute its vision, Cellarity has assembled a team of diverse, world-class talent. "We have brought together international leaders from pharma, graduates of Flagship Pioneering academic programs, physicians, scientists, and pedigrees that span the spectrum from the Broad Institute to McKinsey," said Rathore.

The outcome: the pioneering biotech already has 7 drug discovery programs underway across 10 therapeutic areas, including the high-value fields of hematology, immuno-oncology, metabolism, and respiratory disease.

"All diseases stem from a disorder at the cellular level," said Chouraqui. "This cell-centric approach can be applied to virtually every single disease. We are progressing programs in diverse therapeutic areas to show the depth of our platform, starting with diseases for which there is a well-understood and direct correlation between a change in cell behavior and the etiology of the disease."

Chouraqui's vision for the platform transcends the company. In a few years' time, he sees Cellarity with unparalleled predictive power in different drug modalities and a deep exploratory pipeline with multiple clinical proofs of concept in different disease areas.

"Our platform has the potential to change how the world approaches the discovery of medicines," said Chouraqui.



The intersection of machine learning and biology is the future, and we want to be one of the first companies really helping push that forward. – CTech

It is not often that an entrepreneur with an idea to improve our lives can literally improve the biology of our bodies, but that is exactly what Luis Voloch and his co-founder Noam Solomon at Immunai are doing. Combining machine learning and biology, they are working to map and reprogram our immune systems by applying knowledge about certain cancers, for example, to other forms of cancer. The company currently works with various pharmaceutical companies to help them develop, improve, and combine their drugs, but Voloch has goals of helping people understand their immune systems better beyond cancer as well, and onto fighting autoimmune and age-related diseases. To reach their goals, they have hired the best and the brightest they can find in fields such as software, computational biology, and immunology. Other crucial characteristics for Immunai's hires are that they are very curious about the other disciplines and want to learn and work together.


Michael Matias, Forbes 30 Under 30, is the author of Age is Only an Int: Lessons I Learned as a Young Entrepreneur. He studies Artificial Intelligence at Stanford University, while working as a software engineer at Hippo Insurance and as a Senior Associate at J-Ventures. Matias previously served as an officer in the 8200 unit. 20MinuteLeaders is a tech entrepreneurship interview series featuring one-on-one interviews with fascinating founders, innovators and thought leaders sharing their journeys and experiences.

Contributing editors: Michael Matias, Amanda Katz


Machine Learning Deserves Better Than This | In the Pipeline – Science Magazine

This is an excellent overview at Stat on the current problems with machine learning in healthcare. It's a very hot topic indeed, and has been for some time. There has especially been a flood of manuscripts during the pandemic, applying ML/AI techniques to all sorts of coronavirus-related issues. Some of these have been pretty far-fetched, but others are working in areas where everyone agrees that machine learning can be truly useful, such as image analysis.

How about coronavirus pathology as revealed in lung X-ray data? This new paper (open access) reviewed hundreds of such reports and focused on 62 papers and preprints on this exact topic. On closer inspection, none of these is of any clinical use at all. Every single one of the studies falls into clear methodological errors that invalidate its conclusions. These range from failures to reveal key details about the training and experimental data sets, to not performing robustness or sensitivity analyses of their models, not performing any external validation work, not showing any confidence intervals around the final results (or not revealing the statistical methods used to compute any such), and many more.
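For readers wondering what two of those missing pieces look like in practice, here is a generic, hypothetical sketch (not drawn from any of the reviewed papers): scoring the model on an external cohort it never saw, and reporting a bootstrap confidence interval around the headline metric rather than a bare point estimate.

```python
# Generic sketch: external validation plus a bootstrap confidence interval
# for AUC, two of the basics the reviewed papers frequently omitted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Synthetic 'imaging features'; shift mimics a different hospital/scanner."""
    X = rng.normal(shift, 1.0, size=(n, 50))
    y = (X[:, :5].sum(axis=1) + rng.normal(0, 2, n) > 0).astype(int)
    return X, y

X_dev, y_dev = make_cohort(2000)             # development cohort
X_ext, y_ext = make_cohort(800, shift=0.3)   # external cohort from another site

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_dev, y_dev)
scores = model.predict_proba(X_ext)[:, 1]

aucs = []
for _ in range(1000):                        # bootstrap over external patients
    idx = rng.integers(0, len(y_ext), len(y_ext))
    if len(np.unique(y_ext[idx])) == 2:      # need both classes to compute AUC
        aucs.append(roc_auc_score(y_ext[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"external AUC {roc_auc_score(y_ext, scores):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```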

A very common problem was the (unacknowledged) risk of bias right up front. Many of these papers relied on public collections of radiological data, but these have not been checked to see if the scans marked as COVID-19 positive patients really were (or if the ones marked negative were as well). It also needs to be noted that many of these collections are very light on actual COVID scans compared to the whole database, which is not a good foundation to work from, either, even if everything actually is labeled correctly by some miracle. Some papers used the entire dataset in such cases, while others excluded images using criteria that were not revealed, which is naturally a further source of unexamined bias.

In all AI/ML approaches, data quality is absolutely critical. Garbage in, garbage out is turbocharged to an amazing degree under these conditions, and you have to be really, really sure about what you're shoveling into the hopper. "We took all the images from this public database that anyone can contribute to and took everyone's word for it" is, sadly, insufficient. For example, one commonly used pneumonia dataset turns out to be a pediatric collection of patients between one and five years old, so comparing that to adults with coronavirus infections is problematic, to say the least. You're far more likely to train the model to recognize children versus adults.

That point is addressed in this recent preprint, which shows how such radiology analysis systems are vulnerable to this kind of short-cutting. That's a problem for machine learning in general, of course: if your data include some actually-useless-but-highly-correlated factor for the system to build a model around, it will do so cheerfully. Why wouldn't it? Our own brains pull stunts like that if we don't keep a close eye on them. That paper shows that ML methods too often pick up on markings around the edges of the actual CT and X-ray images if the control set came from one source or type of machine and the disease set came from another, just to pick one example.
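That failure mode is easy to demonstrate on toy data: if the "disease" class carries an incidental marker, say a brighter strip along one edge because those scans came from a different machine, a classifier will happily learn the marker instead of the pathology, and its accuracy collapses once the marker is removed. The sketch below is a hypothetical illustration, not the preprint's experiment.

```python
# Sketch of shortcut learning: the positive class gets a spurious edge marker,
# the classifier keys on the marker, and accuracy collapses once it is removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_scans(n, with_marker):
    X = rng.normal(size=(n, 16, 16))          # fake 16x16 "scans", pure noise
    y = rng.integers(0, 2, n)                 # disease label, unrelated to the pixels
    if with_marker:                           # spurious bright strip along one edge,
        X[y == 1, :, :2] += 0.5               #   present only for the positive class
    return X.reshape(n, -1), y

X_tr, y_tr = make_scans(3000, with_marker=True)          # training data with artifact
X_same, y_same = make_scans(1000, with_marker=True)      # test set with the same artifact
X_clean, y_clean = make_scans(1000, with_marker=False)   # test set with marker removed

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("accuracy with marker present:", round(clf.score(X_same, y_same), 3))
print("accuracy once marker removed:", round(clf.score(X_clean, y_clean), 3))
```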

To return to the original Nature paper, remember, all this trouble is after the authors had eliminated (literally) hundreds of other reports on the topic for insufficient documentation. They couldn't even get far enough to see if something had gone wrong, or how, because these other papers did not provide details of how the imaging data were pre-processed, how the training of the model was accomplished, how the model was validated, or how the final best model was selected at all. These fall into Pauli's category of "not even false." A machine learning paper that does not go into such details is, for all real-world purposes, useless. Unless you count putting a publication on the CV as a real-world purpose, and I suppose it is.

But if we want to use these systems for some slightly more exalted purposes, we have to engage in a lot more tire-kicking than most current papers do. I have a not-very-controversial prediction: in coming years, virtually all of the work that's being published now on such systems is going to be deliberately ignored and forgotten about, because it's of such low quality. Hundreds, thousands of papers are going to be shoved over into the digital scrap heap, where they most certainly belong, because they never should have been published in the state that they're in. Who exactly does all this activity benefit, other than the CV-padders and the scientific publishers?


PathAI to Present Machine Learning-based Quality Control Tool for HER2 Testing in Breast Cancer at the American Society of Clinical Oncology Virtual…

BOSTON, June 8, 2021 /PRNewswire/ -- PathAI, a global leader of AI-powered technology applied to pathology, today announced that new data highlighting a quality control tool for HER2 testing in digital pathology images captured in clinical trials will be presented in the American Society of Clinical Oncology (ASCO) Virtual Scientific Program 2021, held from June 4-8, 2021. These results will be shared in the poster presentation, "Machine learning models to quantify HER2 for real-time tissue image analysis in prospective clinical trials" (Abstract #3061), in the session "Developmental Therapeutics - Molecularly Targeted Agents and Tumor Biology."

Together, PathAI, AstraZeneca (LSE/STO/Nasdaq: AZN) and Daiichi Sankyo Company, Limited have developed ML-based models for the automated quantification of HER2 IHC images in breast cancer tissue. Expression of HER2, a protein localized in the cell membrane, is typically assessed by pathologists to evaluate patient eligibility for anti-HER2 targeted therapies. ML-based models trained to identify and quantify tumor histology features can provide highly accurate and reproducible scores that are highly concordant with manual pathology.

The PathAI HER2 models were developed to generate HER2 scores consistent with the 2018 ASCO/CAP HER2 scoring guidelines. The models also produce metrics that reflect the quality of HER2 testing, such as the area and number of tumor cells, the presence of ductal carcinoma in situ (DCIS), background staining and artifact content. In a test set including diverse tissue-types across a wide range of breast cancer types, ML quantification of HER2 was consistent with manual scores from a consensus of pathologists (ICC 0.88, 95% CI 0.82-0.92). ML scores were even more closely aligned with pathologist scores after further training to learn pathologist scoring methods (ICC 0.91, 95% CI 0.89-0.94). By providing consistent, automated HER2 IHC image analysis, PathAI ML models can provide real-time QC read-outs enabling identification of drifts or inconsistencies in HER2 testing data and images captured during clinical trials.
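The release does not say how the concordance statistics were computed, but an intraclass correlation of this kind can be estimated along the following lines; this is a generic sketch on synthetic scores using the pingouin library, not PathAI's pipeline, and the column names are assumptions.

```python
# Generic sketch: agreement between model-derived and pathologist HER2 scores,
# summarised as an intraclass correlation (synthetic data, pingouin library).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_slides = 200
truth = rng.uniform(0, 3, n_slides)          # latent HER2 level per slide
df = pd.DataFrame({
    "slide": np.tile(np.arange(n_slides), 2),
    "rater": ["pathologist"] * n_slides + ["model"] * n_slides,
    "score": np.concatenate([truth + rng.normal(0, 0.3, n_slides),   # consensus score
                             truth + rng.normal(0, 0.3, n_slides)]), # ML score
})

icc = pg.intraclass_corr(data=df, targets="slide", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```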

PathAI's broad approach towards integrating AI-powered tools into oncology clinical trial workflows is also represented by a separate study that PathAI is presenting at ASCO (Abstract #106). Both presentations are examples of how AI can enhance pathologist performance by generating accurate and reproducible clinically relevant scores that can be scaled to levels that are currently unachievable.

About PathAI: PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com.


SOURCE PathAI


Machine Learning as a Service (MLaaS) Market to Witness Huge Growth by 2028 | Microsoft, International Business Machine, Amazon Web Services KSU |…

The Global Machine Learning as a Service (MLaaS) Market Report is an objective and in-depth study of the current state of the market, aimed at the major drivers, market strategies, and key players' growth. The study also covers the market's important achievements, research & development, new product launches, product responses and regional growth of the leading competitors operating in the market on a global and local scale. The structured analysis contains graphical as well as diagrammatic representation of the worldwide Machine Learning as a Service (MLaaS) Market with its specific geographical regions.

[Due to the pandemic, we have included a special section on the impact of COVID-19 on the Machine Learning as a Service (MLaaS) Market, explaining how COVID-19 is affecting the Global Machine Learning as a Service (MLaaS) Market.]

Get sample copy of report @ jcmarketresearch.com/report-details/1333841/sample

** The values marked with XX are confidential data. To know more about CAGR figures, fill in your information so that our business development executive can get in touch with you.

Global Machine Learning as a Service (MLaaS) (Thousands Units) and Revenue (Million USD) Market Split by Product Type such as [Type]

The research study is segmented by application, such as Laboratory, Industrial Use, Public Services & Others, with historical and projected market share and compound annual growth rate.

Global Machine Learning as a Service (MLaaS) by Region (2019-2028)

Geographically, this report is segmented into several key regions, with production, consumption, revenue (million USD), and market share and growth rate of Machine Learning as a Service (MLaaS) in these regions, from 2013 to 2029 (forecast).

Additionally, the report covers the export and import policies that can make an immediate impact on the Global Machine Learning as a Service (MLaaS) Market. This study contains an EXIM*-related chapter on the Machine Learning as a Service (MLaaS) market and all its associated companies, with profiles that give valuable data pertaining to their outlook in terms of finances, product portfolios, investment plans, and marketing and business strategies. The report on the Global Machine Learning as a Service (MLaaS) Market is an important document for every market enthusiast, policymaker, investor, and player.

Key questions answered in this report (Data Survey Report 2029):

What will the market size be in 2029 and what will the growth rate be?
What are the key market trends?
What is driving the Global Machine Learning as a Service (MLaaS) Market?
What are the challenges to market growth?
Who are the key vendors in this space?
What are the key market trends impacting the growth of the Global Machine Learning as a Service (MLaaS) Market?
What are the key outcomes of the five forces analysis of the Global Machine Learning as a Service (MLaaS) Market?

Get Interesting Discount with Additional Customization @ jcmarketresearch.com/report-details/1333841/discount

There are 15 Chapters to display the Global Machine Learning as a Service (MLaaS) Market.

Chapter 1, to describe Definition, Specifications and Classification of Machine Learning as a Service (MLaaS), Applications of Machine Learning as a Service (MLaaS), Market Segment by Regions;

Chapter 2, to analyze the Manufacturing Cost Structure, Raw Material and Suppliers, Manufacturing Process, Industry Chain Structure;

Chapter 3, to display the Technical Data and Manufacturing Plants Analysis of Machine Learning as a Service (MLaaS), Capacity and Commercial Production Date, Manufacturing Plants Distribution, R&D Status and Technology Source, Raw Materials Sources Analysis;

Chapter 4, to show the Overall Market Analysis, Capacity Analysis (Company Segment), Sales Analysis (Company Segment), Sales Price Analysis (Company Segment);

Chapter 5 and 6, to show the Regional Market Analysis that includes North America, Europe, Asia-Pacific etc., Machine Learning as a Service (MLaaS) Segment Market Analysis by [Type];

Chapter 7 and 8, to analyze the Machine Learning as a Service (MLaaS) Segment Market Analysis (by Application) and Major Manufacturers Analysis of Machine Learning as a Service (MLaaS);

Chapter 9, Market Trend Analysis, Regional Market Trend, Market Trend by Product Type [Type], Market Trend by Application [Application];

Chapter 10, Regional Marketing Type Analysis, International Trade Type Analysis, Supply Chain Analysis;

Chapter 11, to analyze the Consumers Analysis of Machine Learning as a Service (MLaaS);

Chapter 12, to describe Machine Learning as a Service (MLaaS) Research Findings and Conclusion, Appendix, methodology and data source;

Chapter 13, 14 and 15, to describe Machine Learning as a Service (MLaaS) sales channel, distributors, traders, dealers, Research Findings and Conclusion, appendix and data source.

Buy Instant Copy of Full Research Report: @ jcmarketresearch.com/checkout/1333841

Find more research reports on Machine Learning as a Service (MLaaS) Industry. By JC Market Research.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.

About the Author: JCMR, a global research and market intelligence consulting organization, is uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making goals a reality. Our understanding of the interplay between industry convergence, mega trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the accurate forecast in every industry we cover, so our clients can reap the benefits of being early market entrants and can accomplish their goals and objectives.

Contact Us: JCMARKETRESEARCH
Mark Baxter (Head of Business Development)
Phone: +1 (925) 478-7203
Email: sales@jcmarketresearch.com

Connect with us at LinkedIn


Avnet to showcase power of AI and machine learning – IT Brief Australia

Tech distributor Avnet will showcase new innovative technology, applications and solutions in artificial intelligence and machine learning at the Avnet AI Cloud Exhibition, together with its suppliers and partners.

The company will also hold the Avnet 2021 Artificial Intelligence Cloud Conference on 29 June, 2021. Joined by developers, engineers, and decision makers in the AI field, the summit will feature cutting-edge technology trends in artificial intelligence and machine learning, and in-depth discussions on the development, future prospects and blueprints for AI to encourage and accelerate innovation.

"MarketsandMarkets forecasts the global artificial intelligence market size to grow to over USD$300 billion by 2026, and the market in Asia Pacific is anticipated to grow at the highest CAGR during the forecast period," says KS Lim, senior director of supplier management at Avnet Asia.

"As the world's leading technology distributor and solution provider, Avnet has a comprehensive ecosystem that provides customers with end-to-end artificial intelligence and machine learning solutions, reducing the cost and complexity of product development to enable application scenarios," he says.

"We will continue to work hand in hand with our suppliers and partners to further contribute to the development and maturity of the entire AI ecosystem."

The virtual exhibition is divided into three sections: AI smart solution demonstration area, Avnet design service demonstration area, and a partner solution demonstration area.

In the AI smart solution demonstration area, participants can learn about Avnet's various innovative technologies and industrial applications, including:

AI camera: A smart AI camera utilising a neural network implemented in the FPGA fabric. It integrates an independent high-performance ISP camera module based on the Xilinx Zynq7020 to achieve a variety of functions, including noise reduction, wide dynamic range, light source detection, motion detection and edge enhancement function.

BlueBox AI platform: The embedded edge artificial intelligence box can perform multi-channel convolutional neural network operations. It facilitates real-time multi-channel AI functions such as face detection, passenger and traffic statistics, and license plate recognition. All functions operate independently and can work simultaneously to provide edge artificial intelligence analytic solutions.

The box integrates all the above functions through the underlying Xilinx Zynq UltraScale+ MPSoC to perform AI computing on demand.

ROS on Ultra96: The open source Robot Operating System (ROS) runs on the Avnet Ultra96 development board, which features the Xilinx Zynq UltraScale+ MPSoC. The programmable logic part of the Zynq UltraScale+ MPSoC provides deep learning acceleration capabilities, while consolidating a range of ROS functions such as control, SLAM, and navigation. The small form-factor Ultra96 single board computer running ROS makes an ideal platform for developing autonomous robots, service robots, and general purpose ROS experimentation.

In the partner demonstration area, Avnet's suppliers and partners will also showcase their innovations:

ON Semiconductor: A variety of advanced imaging technologies such as high speed, short exposure, global shutter and platform solutions will be displayed to address the application of different scenarios such as factory automation and the challenges faced in industrial AI applications, to accelerate innovation.

Samtec: Fast-growing technologies like Artificial Intelligence are driving new system architectures that demand increased bandwidths, frequencies and densities. To meet these challenges, Samtec offers innovative high-performance interconnects that exceed AI industry standards.

STMicroelectronics: Will introduce embedded AI solutions based on deep learning models running on high-performance 32-bit microcontrollers, as well as machine learning-based MEMS sensors.

Western Digital: The IX SN530 NVMe industrial-grade SSD will be displayed, which supports a new generation of data-rich industrial design and autonomous vehicle design.

Xilinx: Will showcase its real-time multi-task autonomous driving AI perception processing solution, which uses the industry-leading lightweight optimisation algorithm to achieve vehicle detection, lane line detection, lane detection in ADAS and autonomous driving scenarios through a single model. It can perform multiple tasks such as driving area detection and depth estimation.

In addition, Xilinx will also demonstrate the application of Versal-based DPU in low-latency automatic driving and pose detection.

YAGEO Group: Will showcase the flagship products from its main brands YAGEO, KEMET and PULSE, including YAGEO resistors, KEMET polymer capacitors, and PULSE network devices, to provide high reliability polymer and ceramic capacitor solutions for AI chips and DC power supplies for autopilot computers.
