
The Weirdest Objects in the Universe | Space – Air & Space Magazine

As long as astronomers have gazed at the stars and planets, there has been the temptation to see the handiwork of intelligent life when there is no other apparent explanation.

Babylonians believed eclipses to be omens sent by gods. Percival Lowell thought he saw canals on Mars. Aware of this history, some astronomers have found humor in it. When pulsars were discovered in the 1960s, says Seth Shostak, the senior astronomer at the SETI Institute in California, "they were first called LGMs, for 'little green men,' by the Cambridge astronomers who found them, because they didn't know what they were. They were very regular radio sources. Within a year, the theoreticians figured out what they were: rapidly rotating compact stars emitting radiation like lighthouses." More recently, he notes, a weirdly dimming object called Tabby's Star inspired theories of alien life before astronomers solved the puzzle.

But what about the many puzzles in the universe that astronomers have not yet solved? Recently, scientists engaged in the search for extraterrestrial intelligence (SETI) have suggested that it's time to look, as well as listen, for advanced civilizations. "SETI researchers do exactly what Frank Drake did in the first SETI experiment 60 years ago," says Shostak. A SETI founder, Drake assumed that because life developed on Earth, it would most likely be found on planets like ours orbiting stars like our sun. He and others pointed radio antennas at star systems likely to have planets and listened. Now scientists engaged in the search think it's time for a change. "It's worthwhile to not just do what was done 60 years ago, but also to keep an eye out for very unusual things," says Shostak. "The universe has been around for three times as long as the Earth has been around, so there could be aliens out there that are very, very much more advanced than we are, not just 1,000 years, but millions and billions of years ahead. What could they build? Maybe they could re-engineer a star system."

NASA recently awarded a grant to a SETI project that will seek out signs of alien technosignatures, such as solar panel arrays on distant exoplanets. Breakthrough Listen, the largest SETI initiative to date, is thinking even bigger, searching for sophisticated engineering projects that span entire star systems or even galaxies (see "Signs," Feb./Mar. 2019).

"It's a fine line," says Andrew Siemion, director of the Berkeley SETI Research Center and the principal investigator of Breakthrough Listen. "We have no evidence for technologically capable life anywhere else in the universe. So one doesn't want to immediately leap to that solution for any newly discovered astronomical phenomenon. But at the same time, one doesn't want to neglect the possibility that it could be going on."

Earlier this year, to help locate possible targets for future searches, whether for signals or artifacts, Breakthrough Listen released the first cut of its Exotica Catalog, a listing of almost 800 astronomical objects. Most of the catalog is a one-of-everything compilation, from rocky planets to blue-straggler stars. A smaller section lists the superlatives: the biggest, the hottest, the brightest, the farthest. But the most intriguing section catalogs anomalies: strange, unexplained objects such as puffy planets, slow-spinning pulsars, interstellar asteroids, fast radio bursts from beyond the Milky Way, and dozens more.

Astronomers say they're likely to discover natural (if bizarre) explanations for them. But we've asked them to use their powers of imagination and extrapolation to suggest how the four examples listed here might be indications of intelligent life. Even if these four phenomena aren't the products of alien civilizations, they illustrate that the universe can be pretty weird all on its own.

Infrared Alert

An alien megastructure could make stars appear unusually bright. One imagined megastructure, a sphere or ring of giant panels built around stars to capture as much of their energy as possible, is named the Dyson Sphere, for the physicist Freeman Dyson, who popularized the concept in a 1960 Science article. "One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which completely surrounds its parent star," he wrote.

Giant structures around a star would absorb a lot of energy, not all of which could be used by the civilization that built them. The rest would be radiated into space as heat, which would look especially bright in the infrared.

"The difficulty is that you need a lot of industry around a star to detect the excess heat," says Jason Wright, a professor of astronomy and astrophysics at Pennsylvania State University. "They'd have to be using one percent or more of a star's light to have a noticeable effect." And many stars produce extra infrared light by heating nearby dust or asteroids.
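The waste-heat argument can be made quantitative with the Stefan-Boltzmann law. A rough sketch (my own back-of-the-envelope calculation, assuming a complete shell at 1 AU around a sun-like star that absorbs all of the starlight and re-radiates it from its outer surface):

```python
import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
AU = 1.496e11         # astronomical unit, m

def shell_temperature(luminosity_w, radius_m):
    """Equilibrium temperature of a complete shell that absorbs the star's
    entire output and re-radiates it from its outer surface."""
    return (luminosity_w / (4 * math.pi * radius_m**2 * SIGMA)) ** 0.25

t = shell_temperature(L_SUN, AU)             # shell at 1 AU, sun-like star
peak_wavelength_um = 2.898e-3 / t * 1e6      # Wien's displacement law

print(f"shell temperature: {t:.0f} K")
print(f"thermal emission peaks near {peak_wavelength_um:.1f} micrometers, in the mid-infrared")
```

The shell comes out at a few hundred kelvin, so its re-radiated starlight peaks at mid-infrared wavelengths, exactly where a survey like WISE would notice the excess.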

Wright decided to kick things up a notch by looking at galaxies. "I wondered if there might be galaxy-spanning technological species," he says. He found a few candidate galaxies in the observations of NASA's Wide-field Infrared Survey Explorer (WISE), an infrared space telescope. The properties of those galaxies were consistent with high rates of star formation, Wright says, which could be an explanation for the abundant heat. But one infrared-bright galaxy that hasn't yet been explained is known by a catalog number, WISE J224436.12. It's an especially red object, and images show what appears to be a galaxy. In a 2015 paper about candidate homes for extraterrestrial civilizations, Wright and colleagues gave it an A, writing that it "deserves further study to understand its superlative nature."

"It's probably an extreme starburst galaxy," Wright says now. "I would love for someone to get a spectrum of it," that is, to break down its light into its individual wavelengths. This would reveal if it shows high levels of star formation, explaining its infrared brightness, or if something else is going on.

Elements of Surprise

"No star should look like that."

We don't know if Antoni Przybylski actually said that back in April 1960, when the Polish-Australian astronomer was using a telescope to study fast-moving stars in southern skies. We do know that he made an astonishing discovery.

Przybylski's Star, as it is now known, is about twice the diameter and four times the mass of the sun, and its surface is several thousand degrees hotter than the sun's. Those traits make the star impressive but not extraordinary.

What elevates the star to true weirdness is its spectrum. Each chemical element leaves a unique imprint in the spectrum, and the spectrum of Przybylski's Star places it in a class known as chemically peculiar stars, which show unusual abundances of different elements.

Przybylski identified barium and strontium, plus all 15 rare earth elements, in the star. And later research added most of the actinides: radioactive elements such as neptunium, plutonium, and curium. "These are all short-lived elements; they have half-lives of as little as a year or so," Wright says. "But the star has been there for millions of years, so the elements should have decayed away."
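Wright's point can be checked with the exponential-decay formula, N/N₀ = (1/2)^(t/t½). A quick sketch (using a hypothetical isotope with a one-year half-life, matching his "as little as a year or so"):

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive sample surviving after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

# A hypothetical one-year half-life: essentially nothing survives even a
# century, let alone the star's multimillion-year lifetime.
for years in (10, 100, 1000):
    print(f"after {years:4d} years: {remaining_fraction(years, 1.0):.2e} of the original sample")
```

After only a century, less than 10⁻³⁰ of the original sample remains, which is why the elements' continued presence demands some replenishment mechanism.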

Charles Cowley, a professor emeritus at the University of Michigan who has led several studies of the star, says the default explanation for the odd chemistry is that the elements are levitated to the surface by magnetic fields and radiation pressure from below.

"To me that's rather shaky, but we haven't known anything better," says Cowley. "But there are many possibilities that have to do with dumping material on the surface from the surroundings." The star might have ingested a planet with high concentrations of the odd elements, for example. Or it might have passed through an interstellar cloud laced with the elements, especially if the cloud contained debris from the collision between two dense stellar corpses known as neutron stars, which produce rare earths and other heavy elements.

On the other hand, it has been informally proposed that perhaps the radioactive elements were dumped into the star by a civilization inhabiting its planets. "On Earth, it's been suggested that to get rid of nuclear waste, we might launch it into the sun," says Brian Lacki, a postdoctoral researcher who compiled the Exotica Catalog. "Some people have wondered, well, if we could do it, could an extraterrestrial intelligence do it? In a particular type of star, the nuclear waste could stay at the surface and be seen in the spectrum."

Cowley, for one, puts his money on the colliding-neutron-star hypothesis. "We've seen evidence of colliding neutron stars, but we've never seen any evidence of extraterrestrial life anywhere," he says. "I don't know any reason why it couldn't happen that way, because there are so many sun-like stars and probably many Earth-like planets, but we just don't know at this point."

Taking a Dip

Few astronomical objects have generated as much buzz as Tabby's (or Boyajian's) Star. Observations by the planet-hunting Kepler space telescope showed that the star's light flickers irregularly, dropping by as much as 22 percent for up to several days at a time. The most likely explanation for the dips is that something is passing in front of the star, partially blocking it from view.
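The depth of a dip puts a lower bound on the size of whatever is doing the blocking: an opaque body must cover the same fraction of the stellar disk as the fraction of light it removes. A rough sketch (my own illustration, assuming a uniform stellar disk with no limb darkening):

```python
import math

def occulter_radius_fraction(dip_depth):
    """Minimum radius of an opaque occulter, as a fraction of the stellar
    radius, needed to block dip_depth of the starlight (uniform disk,
    no limb darkening assumed)."""
    return math.sqrt(dip_depth)

print(f"22% dip needs an object at least {occulter_radius_fraction(0.22):.2f} stellar radii across")
print(f"1% dip (roughly Jupiter crossing the sun): {occulter_radius_fraction(0.01):.2f} stellar radii")
```

A 22 percent dip requires something nearly half the star's radius, far larger than any planet, which is why swarms of smaller, dusty objects became the leading explanation.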

The nature of that something caught people's attention when Wright suggested it might be alien megastructures. But a years-long study by Tabetha Boyajian, for whom the star is named, ruled out that possibility. Observations showed that some forms of energy shine through the dips in starlight, suggesting that the star is being eclipsed not by solid panels but by swarms of smaller objects surrounded by dust, such as comets or the remnants of a destroyed moon.

Astronomers are looking for more of these dipper stars, though, in hopes of pinpointing what causes the drops in light. One recently reported example listed in the Exotica Catalog is VVV-WIT-07 (WIT stands for "What Is This?"). During eight years of observations, it showed sudden, irregular drops in brightness of 30 to 40 percent, with one drop of about 80 percent. "It's a one-in-a-billion object," says Roberto Saito, a researcher at Universidade Federal de Santa Catarina in Brazil, who led the project.

Saito and his colleagues suggest the star may be eclipsed by a planetary ring system many times larger than Saturn's, by a ring of comets, or by a lumpy or warped disk of dust.

Edward Schmidt, an emeritus professor of astronomy at the University of Nebraska-Lincoln, recently identified 21 more dipper stars. He used data-mining tools to comb through a database of 14 million objects, then studied details of the candidate stars from other studies. His discoveries fell into two distinct groups. One group of stars showed dips similar to Tabby's Star, while the other group flickered much more rapidly. And when Schmidt studied the characteristics of the stars, he found that some of them are about the same mass and in the same stage of life as the sun, while the others are red giants: old, bloated stars that are near the ends of their lives.

"The main explanation for these stars is that something is passing in front of them, but the fact that the stars all fall within a narrow range of properties suggests that may not be it," says Schmidt, who is preparing a list of additional dipper stars found in another region of the sky. "Extraterrestrial intelligence is a low-probability explanation, but it's a possibility, and the SETI people ought to be looking at these stars."

The Hole in the Galaxy

NGC 247 is a beautiful spiral galaxy that we see almost edge-on. It's smaller than the Milky Way, but it contains some bright nurseries, dense gaseous regions where new stars form. Its most unusual feature, though, is a dark void on one side of the galaxy's core, which looks as though a hole has been punched through the disk. The region contains a few old, faint stars but almost no young, bright ones.

Initially, astronomers suggested that the void might have been caused by gravitational interactions with part of another galaxy. More recent research suggests the void could have formed when a blob of dark matter (the theoretical, so-far-undetected material estimated to account for 27 percent of the universe) plunged through the disk like a rock through tissue paper. The impact would have scattered stars and blown away the gas and dust for making more stars.

Or there could be an even weirder explanation, says Lacki. "The SETI community hasn't paid much attention to it, but if you have extraterrestrial intelligences that build Dyson Spheres and have interstellar travel, what would that society look like?" he asks. "They might start by building a sphere around their own star, then expand to nearby stars, building a Dyson Sphere around each star that they arrive at. That might look like a hole in a galaxy that grows as they expand. NGC 247 reminds me of that. The hole is probably natural, but we can't be absolutely sure."

The Exotica Catalog lists a few other odd galaxies that could be home to vast civilizations. "We know that the universe is capable of giving rise to an intelligent, technologically capable species," says Siemion. "And if you think even further, one could imagine that extremely advanced technologies (millions, maybe billions, of years more advanced than ours) might even have capabilities that would manifest on cosmological scales."

"We are building new telescopes that will make looking for these things easier," says Shostak. The Vera C. Rubin Observatory, under construction in Chile, will in the next few years be able to observe the entire sky every few nights, over and over, and therefore show anything that changes.

With new instruments and the imagination required for all scientific discovery, astronomers may find the advanced civilizations Frank Drake once dreamed of, or the most mundane of explanations for the puzzles presented by the universe.


Biological Data Visualization Market Analysis, COVID-19 Impact, Outlook, Opportunities, Size, Share Forecast and Supply Demand 2021-2027 | Trusted Business Insights

Trusted Business Insights answers what are the scenarios for growth and recovery and whether there will be any lasting structural impact from the unfolding crisis for the Biological Data Visualization market.

Trusted Business Insights presents an updated study on the Biological Data Visualization Market, 2020-2029. The report contains market predictions related to market size, revenue, production, CAGR, consumption, gross margin, price, and other substantial factors. While emphasizing the key driving and restraining forces for this market, the report also offers a complete study of the future trends and developments of the market. It further elaborates on the micro and macroeconomic aspects, including the socio-political landscape, that are anticipated to shape the demand of the Biological Data Visualization market during the forecast period (2020-2029). It also examines the role of the leading market players involved in the industry, including their corporate overview, financial summary, and SWOT analysis.


Report Overview: Biological Data Visualization Market

The global biological data visualization market size was estimated at USD 630.8 million in 2020 and looks set to grow at a compound annual growth rate (CAGR) of 8.5% from 2021 to 2027. The biological data visualization field is rapidly evolving. Image processing integrated with artificial intelligence-based pattern recognition, programming languages, and new libraries for visual analytics have seen remarkable advances in recent years. Moreover, the advent of virtual reality environments is expected to revolutionize market growth, as they allow the integration of biological data into virtual worlds.
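As a sanity check on those figures, compounding the 2020 base at the stated CAGR implies a 2027 market size of roughly USD 1.1 billion. This is a back-of-the-envelope projection from the two numbers quoted above, not a figure taken from the report:

```python
def project(base_musd, cagr, years):
    """Compound a base value forward at a constant annual growth rate."""
    return base_musd * (1 + cagr) ** years

# Figures quoted in the report: USD 630.8 million in 2020, 8.5% CAGR over 2021-2027
size_2027 = project(630.8, 0.085, 7)
print(f"implied 2027 market size: ~USD {size_2027:.0f} million")
```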

Rapid development in the field of big data, coupled with the large size and complexity of biological data, has resulted in increased utilization of visualization tools globally. Moreover, the increasing demand for decisions to be made in less time requires advanced analytical tools. These tools assist in deriving substantial information from large chunks of unorganized data in a short period of time. These factors are anticipated to drive the adoption rate of visualization tools over the forecast period 2021-2027.

The interpretation of relationships between biological networks and molecules has become a major bottleneck in systems biology. To combat this issue, several conferences are hosted to showcase the importance of visualization in systems biology, which increases the adoption of these tools among researchers. The Visualizing Biological Data workshop, held in Germany in 2019, demonstrated that data visualization can potentially drive advancements in molecular and systems biology.

The broadening horizon of artificial intelligence in big data analysis accelerates the adoption rate of computational visualization tools in biomedical research. Compared to traditional methods, machine learning-based methods are more reliable, robust, and accurate. Besides, with the emergence of improved computing and storage capabilities, the analysis of biological data has shifted from the sequence level to the molecular level, which in turn benefits biomedical research.

In recent years, biological data mining has increasingly been integrated into biological and medical discovery processes. This has resulted in the generation of large genomic and proteomic datasets. The tremendous growth in data mining has contributed to the employment of visualization tools for the analysis of genomic data, thereby accelerating market growth.

Technique Insights: Biological Data Visualization Market

The sequencing segment is anticipated to account for the largest revenue share over the forecast period owing to wide-ranging applications of next-generation sequencing (NGS) in the analysis of genomics and epigenetics, systems biology, and evolutionary datasets. Besides, the emergence of single-cell RNA sequencing has proven to be a revolutionary technology that provides in-depth insights into cell behavior, thus expanding sequencing applications.

The declining cost of sequencing and the increase in throughput have rendered RNA sequencing an attractive technique for transcriptome profiling, which further contributes to the segment's large share. Besides, sequence alignments are a commonly used data type in phylogenetic analysis, and their usage helps in gaining better insights into the molecular mechanisms that differentiate species.

Magnetic resonance imaging (MRI) technique is expected to register the fastest growth rate owing to constant developments in this field. Furthermore, the tools introduced for imaging of diffusion-weighted MRI information are expected to be utilized for microscopy applications. This is expected to enhance the adoption of MRI in other techniques as well and drive the growth of the biological data visualization market.

Application Insights: Biological Data Visualization Market

The cell and organism imaging application segment accounted for the largest share in the market in terms of revenue generation. The continuous development of big data analytics and data mining is expected to provide real-time data to radiology professionals during the imaging procedures, which, in turn, drives the employment of visualizing software for cell and organism or molecular graphics images.

Constant technical advances in cryo-electron microscopy have transformed the examination of cell architecture, viruses, and protein assemblies at molecular resolution. The widespread application of this technique is driven by developments in statistical analysis software, transmission electron microscope optics, and sensors that allow rapid readouts with the ability to directly detect electrons. These advancements are anticipated to propel the cell imaging segment.

Genomic analysis is expected to grow at a significant pace, registering the fastest CAGR over the forecast period. The advent of second- and third-generation sequencing technologies has enabled transcriptome sequencing at a very low cost and within a reasonable timeframe. In addition, projects such as the 1000 Genomes Project and the International HapMap Project are producing an explosion of genomic data, thereby driving segment growth.

Platform Insights: Biological Data Visualization Market

The visualization tools can run on different operating systems, namely Windows, macOS, Linux, and others. Among these, the Windows operating system is estimated to capture a comparatively larger revenue share in 2019. Windows is an extensively used operating system, and around 90% of computers and laptops globally are compatible with Microsoft Windows 10, resulting in a larger share.

Windows is highly compatible with computer accessories, such as monitors, mouse, graphics tablets, printers, keyboards, storage drives, scanners, and microphones. In addition, this operating system is highly compatible with all kinds of software used for data visualization, thus widely adopted by researchers and pharmaceutical companies.

macOS is expected to witness lucrative growth through the forecast period owing to a rise in its global sales over the years. This operating system is easy to control and has a simpler design than Windows, which has several layers of menus beneath the surface.

End-Use Insights: Biological Data Visualization Market

Academic research is estimated to be the largest end-use segment in 2019 and is expected to further expand during the forecast period. The extensive usage of sequencing and MRI methodologies in biomedical research, on-site bioinformatics courses, and workshops contribute to the estimated share. Ongoing partnerships between research entities and companies for the development of visualization tools and supporting the researchers are resulting in a larger share.

For instance, in October 2019, Plotly and McGill University entered into a partnership to fund interns. Plotly is working with the not-for-profit organization Mitacs to provide financial support to interns, and it also supports the use of Plotly's Dash software for data analysis and visualization. This has accelerated researchers' analysis of nervous-system disorders.

The pharmaceutical and biotechnology companies segment is anticipated to register the fastest growth rate due to the expansion of personalized medicine and companion diagnostics. As biological information is being generated at an unprecedented rate, drug developers need rapid analysis tools to gather relevant information for drug discovery and development. Besides, these tools also transform decision-making processes in R&D laboratories in pharmaceutical companies.

Regional Insights: Biological Data Visualization Market

North America accounted for the largest share in terms of revenue generation in 2019. The growth of capital-intensive biotechnology sectors, such as personalized medicine and the development of high-throughput sequencing techniques, contribute to the larger share. High R&D investment in the region positively impacts the advancement of these sectors, accelerating the demand for analysis software to visualize the biological data generated or required for these fields.

The Asia Pacific region registered a highly lucrative growth rate due to significant initiatives by public agencies to reinforce the utilization of big data and data science in the region. The market is driven by strategic initiatives adopted by regional participants to strengthen their presence in local and global markets. For instance, in July 2019, the Japan-based company Olympus Corporation launched the scanR 3.1 High-Content Screening Station (HCS), which employs artificial intelligence for the analysis of live cells, thus benefiting the company's sales.

Key Companies & Market Share Insights: Biological Data Visualization Market

The market is intensely competitive because of the presence of substantial public as well as private companies. Public market players are making significant investments to expand their presence, while private firms enter agreements and collaborations to strengthen their product portfolios and compete in the market.

For instance, in March 2019, Media Cybernetics entered into a partnership with Objective Imaging to implement support for peripheral hardware in its visualization software, Image-Pro. The Hardware Automation Module provides automated hardware control capabilities, addressing users' application and budget demands. Some of the prominent players in the biological data visualization market include:

Key Companies Profiled: Biological Data Visualization Market Report

This report forecasts revenue growth at global, regional, and country levels and provides an analysis of the latest industry trends in each of the sub-segments from 2016 to 2027. For the purpose of this study, Trusted Business Insights has segmented the global biological data visualization market report on the basis of technique, application, platform, end-use, and region:

Technique Outlook (Revenue, USD Million, 2016-2027)

Application Outlook (Revenue, USD Million, 2016-2027)

Platform Outlook (Revenue, USD Million, 2016-2027)

End-Use Outlook (Revenue, USD Million, 2016-2027)

Pharmaceutical & Biotechnology Companies



The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years, and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.
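That "improvement through experience" can be illustrated with a toy learner whose estimate sharpens as it sees more data, with no hand-coded rule updates. A minimal sketch (hypothetical one-dimensional data of my own invention, not any vendor's algorithm): the learner estimates a decision threshold from labeled samples, and its guess converges on the true rule x > 0.5 as the training set grows.

```python
import random

random.seed(0)

def sample(n):
    """n points in [0, 1], labeled by the (unknown to the learner) rule x > 0.5."""
    return [(x, x > 0.5) for x in (random.random() for _ in range(n))]

def fit_threshold(data):
    """Learn a decision threshold as the midpoint of the two class means."""
    lo = [x for x, y in data if not y]
    hi = [x for x, y in data if y]
    if not lo or not hi:                      # degenerate sample: all one class
        return sum(x for x, _ in data) / len(data)
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

test_set = sample(1000)
for n in (10, 100, 1000):                     # more "experience" each round
    t = fit_threshold(sample(n))
    err = sum((x > t) != y for x, y in test_set) / len(test_set)
    print(f"trained on {n:4d} examples: threshold={t:.3f}, test error={err:.3f}")
```

Nothing in the code encodes the rule itself; the threshold is recovered entirely from the data, which is the essence of the definition above.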

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.
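Metric anomaly detection of this general kind can be sketched with a trailing z-score: flag any point that deviates from its recent history by several standard deviations. This is a generic textbook illustration only; Anodot's actual, patented algorithms are far more sophisticated and are not described here.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices of points that deviate from the trailing window's mean
    by more than threshold standard deviations."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sd = mean(past), stdev(past)
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            flags.append(i)
    return flags

# A steady metric with one incident-like spike at index 11
metric = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 180, 101]
print(zscore_anomalies(metric))
```

A real system layers on seasonality handling, metric correlation, and adaptive baselines, which is where platforms like Anodot's differentiate themselves.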

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization, when promotions or other marketing activity for one product can lead to reduced demand for other products.

StormForge

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

StormForge provides machine learning-based, cloud-native application testing and performance optimization software that helps organizations optimize application performance in Kubernetes.

StormForge was founded under the name Carbon Relay and developed its Red Sky Ops tools that DevOps teams use to manage a large variety of application configurations in Kubernetes, automatically tuning them for optimized performance no matter what IT environment they're operating in.

This week the company acquired German company Stormforger and its performance testing-as-a-platform technology. The company has rebranded as StormForge and renamed its integrated product the StormForge Platform, a comprehensive system for DevOps and IT professionals that can proactively and automatically test, analyze, configure, optimize and release containerized applications.

In February the company said that it had raised $63 million in a funding round from Insight Partners.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform is capable of reducing AI and business intelligence development projects from months to days. The companys goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine, which performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and to improve employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million round of financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine-learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes, an approach the company calls MLOps.

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning that enables data scientists to turn raw data into production-ready machine-learning features. The startups technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the Michelangelo machine-learning platform the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

More here:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

Read More..

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees – PetaPixel

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit's BrainCraft HAT add-on for the Raspberry Pi 4 and uses TensorFlow Lite object recognition software to recognize what it is seeing. According to Adafruit's website, it's compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the module.

While interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

"You could connect a DSLR or mirrorless camera from its trigger port into the Pi's GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame."

A camera that is capable of recognizing what it is looking at could be used to only take a photo when a specific object, animal, or even a person comes into the frame. That would mean it could have security system or wildlife monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology would make that a reality.
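The trigger idea described above can be sketched in a few lines. This is a hypothetical outline, not Adafruit's code: `classify` stands in for a TensorFlow Lite recognizer and `trigger_capture` for firing a camera over GPIO or gPhoto.

```python
WATCH_LIST = {"deer", "fox", "person"}  # labels we want to trigger on

def classify(frame):
    """Stub standing in for a TensorFlow Lite object-recognition call."""
    return frame.get("label")

def trigger_capture(label):
    """Stub standing in for firing a DSLR via GPIO pins or gPhoto."""
    return f"captured:{label}"

def monitor(frames):
    """Shoot only when a watched object enters the frame."""
    captures = []
    for frame in frames:
        label = classify(frame)
        if label in WATCH_LIST:
            captures.append(trigger_capture(label))
    return captures

print(monitor([{"label": "tree"}, {"label": "fox"}, {"label": "sky"}]))  # ['captured:fox']
```

In a real build, `classify` would run the TensorFlow Lite model on each camera frame; the selection loop itself stays this simple.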

You can find all the parts you will need to build your own version of this device on Adafruit's website here. They have also published an easy machine learning guide for the Raspberry Pi as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

See the article here:
DIY Camera Uses Machine Learning to Audibly Tell You What it Sees - PetaPixel

Read More..

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to design an additional stage to the training and testing process, in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.
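The produce-many-then-select stage described above can be reduced to a toy outline. Everything here is stubbed: `train_model` stands in for a full training run (the seed playing the role of random initialization) and `stress_score` for evaluation on a task-specific stress set.

```python
import random

def train_model(seed):
    """Stub training run; the seed plays the role of random initialization."""
    return {"seed": seed}

def stress_score(model, task):
    """Stub stress-test score for one model on one real-world task."""
    # Seeding with a string is deterministic across runs.
    return random.Random(f"{model['seed']}-{task}").random()

def pick_best(n_models, task):
    """Produce many candidate models at once, then keep the top scorer on the task."""
    models = [train_model(seed) for seed in range(n_models)]
    return max(models, key=lambda m: stress_score(m, task))

best = pick_best(50, "pixelated-images")
```

The point of the sketch is the shape of the pipeline: the expensive part (training) happens once per candidate, and selection is a cheap argmax over stress-test scores for the task at hand.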

"That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it," says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model, and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. When AI underperforms in the real world it makes people less willing to use it, says co-author Katherine Heller, who works at Google on AI for healthcare: "We've lost a lot of trust when it comes to the killer applications. That's important trust that we want to regain."

Read the original post:
The way we train AI is fundamentally flawed - MIT Technology Review

Read More..

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO of Northern Light; Sid Probstein, CTO of Keeeb; and Tom Barfield, chief solution architect of Keeeb, discussed machine learning and KM.

KMWorld Connect, November 16-19, and its co-located events cover future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss analyzed the company's conference presentations. Looking at the topics, diabetes sank lower as a focus of AstraZeneca's attention.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrate knowledge sharing into work. They looked at collaborative information curation so end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges as well as the pragmatic application of AI through text analytics to improve both curation and findability and improve performance.

"The super silo is on the rise," Probstein said. "It stores files, logs, customer/sales data and can be highly variable." He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. Companies should give end users simple approaches to organize content, augmented with AI, benefitting themselves and others, Barfield said.

More here:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

Read More..

Machine Learning Predicts How Cancer Patients Will Respond to Therapy – HealthITAnalytics.com

November 18, 2020 - A machine learning algorithm accurately determined how well skin cancer patients would respond to tumor-suppressing drugs in four out of five cases, according to research conducted by a team from NYU Grossman School of Medicine and Perlmutter Cancer Center.

The study focused on metastatic melanoma, a disease that kills nearly 6,800 Americans each year. Immune checkpoint inhibitors, which keep tumors from shutting down the immune systems attack on them, have been shown to be more effective than traditional chemotherapies for many patients with melanoma.

However, half of patients don't respond to these immunotherapies, and these drugs are expensive and often cause side effects in patients.

"While immune checkpoint inhibitors have profoundly changed the treatment landscape in melanoma, many tumors do not respond to treatment, and many patients experience treatment-related toxicity," said corresponding study author Iman Osman, medical oncologist in the Departments of Dermatology and Medicine (Oncology) at New York University (NYU) Grossman School of Medicine and director of the Interdisciplinary Melanoma Program at NYU Langone's Perlmutter Cancer Center.

"An unmet need is the ability to accurately predict which tumors will respond to which therapy. This would enable personalized treatment strategies that maximize the potential for clinical benefit and minimize exposure to unnecessary toxicity."

READ MORE: How Social Determinants Data Can Enhance Machine Learning Tools

Researchers set out to develop a machine learning model that could help predict a melanoma patients response to immune checkpoint inhibitors. The team collected 302 images of tumor tissue samples from 121 men and women treated for metastatic melanoma with immune checkpoint inhibitors at NYU Langone hospitals.

They then divided these slides into 1.2 million portions of pixels, the small bits of data that make up images. These were fed into the machine learning algorithm along with other factors, such as the severity of the disease, which kind of immunotherapy regimen was used, and whether a patient responded to the treatment.
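Splitting slide images into fixed-size pixel patches, as described above, can be sketched with plain lists. The 8x8 "slide" below is a toy stand-in for real pathology images; real pipelines work on far larger rasters.

```python
def extract_patches(image, patch):
    """Split a 2-D pixel grid into non-overlapping patch-by-patch tiles."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + patch] for row in image[y:y + patch]]
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]

# toy 8x8 "slide" with pixel values 0..63
slide = [[y * 8 + x for x in range(8)] for y in range(8)]
patches = extract_patches(slide, 4)
print(len(patches))  # 4 tiles of 4x4 pixels each
```

Each tile then becomes one training example, which is how a few hundred slides can yield over a million inputs for the model.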

The results showed that the machine learning model achieved an AUC of 0.8 in both the training and validation cohorts, and was able to predict which patients with a specific type of skin cancer would respond well to immunotherapies in four out of five cases.
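An AUC of 0.8 can be read as a rank statistic: the probability that a randomly chosen responder receives a higher model score than a randomly chosen non-responder. A minimal illustration of that computation, using made-up scores rather than the study's data:

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # count pairwise wins, with half credit for ties
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy model scores for 5 patients; 1 = responded to immunotherapy
print(round(auc([0.9, 0.4, 0.7, 0.3, 0.6], [1, 1, 0, 0, 1]), 3))  # 0.667
```

Here 4 of the 6 responder/non-responder pairs are ranked correctly, giving an AUC of about 0.667; a perfect ranking would give 1.0 and a random one about 0.5.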

"Our findings reveal that artificial intelligence is a quick and easy method of predicting how well a melanoma patient will respond to immunotherapy," said study first author Paul Johannet, MD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center.

Researchers repeated this process with 40 slides from 30 similar patients at Vanderbilt University to determine whether the results would be similar at a different hospital system that used different equipment and sampling techniques.

READ MORE: Simple Machine Learning Method Predicts Cirrhosis Mortality Risk

"A key advantage of our artificial intelligence program over other approaches such as genetic or blood analysis is that it does not require any special equipment," said study co-author Aristotelis Tsirigos, PhD, director of applied bioinformatics laboratories and clinical informatics at the Molecular Pathology Lab at NYU Langone.

The team noted that aside from the computer needed to run the program, all materials and information used in the Perlmutter technique are a standard part of cancer management that most, if not all, clinics use.

"Even the smallest cancer center could potentially send the data off to a lab with this program for swift analysis," said Osman.

The machine learning method used in the study is also more streamlined than current predictive tools, such as analyzing stool samples or genetic information, which promises to reduce treatment costs and speed up patient wait times.

"Several recent attempts to predict immunotherapy responses do so with robust accuracy but use technologies, such as RNA sequencing, that are not readily generalizable to the clinical setting," said corresponding study author Aristotelis Tsirigos, PhD, professor in the Institute for Computational Medicine at NYU Grossman School of Medicine and member of NYU Langone's Perlmutter Cancer Center.

READ MORE: Machine Learning Forecasts Prognosis of COVID-19 Patients

"Our approach shows that responses can be predicted using standard-of-care clinical information such as pre-treatment histology images and other clinical variables."

However, researchers also noted that the algorithm is not yet ready for clinical use until they can boost the accuracy from 80 percent to 90 percent and test the algorithm at more institutions. The research team plans to collect more data to improve the performance of the model.

Even at its current level of accuracy, the model could be used as a screening method to determine which patients across populations would benefit from more in-depth tests before treatment.

"There is potential for using computer algorithms to analyze histology images and predict treatment response, but more work needs to be done using larger training and testing datasets, along with additional validation parameters, in order to determine whether an algorithm can be developed that achieves clinical-grade performance and is broadly generalizable," said Tsirigos.

"There is data to suggest that thousands of images might be needed to train models that achieve clinical-grade performance."

See the original post:
Machine Learning Predicts How Cancer Patients Will Respond to Therapy - HealthITAnalytics.com

Read More..

This New Machine Learning Tool Might Stop Misinformation – Digital Information World

Misinformation has always been a problem, but the combination of widespread social media and a loose definition of what counts as factual truth has led to a veritable explosion in misinformation over the past few years. The problem is so dire that in many cases websites are created specifically to help misinformation spread more easily, a problem that might just have been addressed by a new machine learning tool.

This machine learning tool, developed by researchers at UCL, Berkeley and Cornell, is able to examine domain registration data and use it to ascertain whether a URL is legitimate or was created specifically to legitimize a certain piece of information that people might be trying to spread around. A couple of other factors also come into play here. For example, if the identity of the person that registered the domain is private, this might be a sign that the site is not legitimate. The timing of the domain registration matters too. If the domain was registered around the time a major news event broke out, such as the recent US presidential election, this is also a negative sign.

That said, it is important to note that this new machine learning tool has a pretty impressive success rate of about 92%, the proportion of fake domains it was able to discover. Being able to tell whether a news source is legitimate or direct propaganda is useful because it can reduce the likelihood that people end up taking the misinformation seriously.
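The two registration signals the article mentions, a private registrant and a registration date suspiciously close to a major news event, can be sketched as a tiny feature extractor. The record layout, field names and seven-day window here are illustrative assumptions, not the researchers' actual feature set.

```python
from datetime import date

def domain_features(record, event_date, window_days=7):
    """Two warning signs from registration data; field names are hypothetical."""
    days_gap = abs((record["registered"] - event_date).days)
    return {
        "private_registration": record["registrant"] is None,  # identity withheld
        "registered_near_event": days_gap <= window_days,      # timed to the news
    }

# hypothetical WHOIS-style record for a domain registered right before an election
rec = {"registered": date(2020, 11, 2), "registrant": None}
feats = domain_features(rec, date(2020, 11, 3))
print(feats)  # both warning signs present
```

A classifier would consume features like these, alongside others, to score how likely a domain is to have been created purely to lend a story false legitimacy.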

Original post:
This New Machine Learning Tool Might Stop Misinformation - Digital Information World

Read More..

Fujitsu, AIST and RIKEN Achieve Unparalleled Speed on MLPerf HPC Machine Learning Processing Benchmark – HPCwire

TOKYO, Nov 19, 2020 - Fujitsu, the National Institute of Advanced Industrial Science and Technology (AIST), and RIKEN today announced a performance milestone in supercomputing, achieving the highest performance and claiming the top positions on the MLPerf HPC benchmark. The MLPerf HPC benchmark measures large-scale machine learning processing at a level requiring supercomputers, and the parties achieved these outcomes leveraging approximately half of the AI Bridging Cloud Infrastructure (ABCI) supercomputer system, operated by AIST, and about 1/10 of the resources of the supercomputer Fugaku, which is currently under joint development by RIKEN and Fujitsu.

Utilizing about half the computing resources of its system, ABCI achieved processing speeds 20 times faster than other GPU-type systems. That is the highest performance among supercomputers based on GPUs, computing devices specialized in deep learning. Similarly, about 1/10 of Fugaku was utilized to set a record for CPU-type supercomputers consisting of general-purpose computing devices only, achieving a processing speed 14 times faster than that of other CPU-type systems.

The results were presented as MLPerf HPC v0.7 on November 18th (November 19th Japan Time) at the 2020 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC20) event, which is currently being held online.

Background

MLPerf HPC is a performance competition based on two benchmark programs: CosmoFlow, which predicts cosmological parameters, and DeepCAM, which identifies abnormal weather phenomena. ABCI ranked first among all registered systems in the CosmoFlow benchmark program, using about half of the whole ABCI system, and Fugaku ranked second with a measurement using about 1/10 of the whole system. The ABCI system delivered 20 times the performance of the other GPU-type systems, while Fugaku delivered 14 times the performance of the other CPU-type systems. ABCI also achieved first place among all registered systems in the DeepCAM benchmark program, again with about half of the system. In this way, ABCI and Fugaku overwhelmingly dominated the top positions, demonstrating the superior technological capabilities of Japanese supercomputers in the field of machine learning.

Fujitsu, AIST, RIKEN and Fujitsu Laboratories Limited will release to the public the software stacks, including the library and the AI framework, that accelerate the large-scale machine-learning processing developed for this measurement. This move will make it easier to use large-scale machine learning with supercomputers, while its use in analyzing simulation results is anticipated to contribute to the detection of abnormal weather phenomena and to new discoveries in astrophysics. As a core platform for building Society 5.0, it will also contribute to solving social and scientific issues, as it is expected to expand to applications such as the creation of general-purpose language models that require enormous computational performance.

About MLPerf HPC

MLPerf is a machine learning benchmark community established in May 2018 for the purpose of creating a performance list of systems running machine learning applications. MLPerf developed MLPerf HPC as a new machine learning benchmark to evaluate the performance of machine learning calculations using supercomputers. It is used for supercomputers around the world and is expected to become a new industry standard. MLPerf HPC v0.7 evaluated performance on two real applications, CosmoFlow and DeepCAM, to measure large-scale machine learning performance requiring the use of a supercomputer.

All measurement data are available on the following website: https://mlperf.org/

Comments from the Partners

Fujitsu, Executive Director, Naoki Shinjo: The successful construction and optimization of the software stack for large-scale deep learning processing, executed in close collaboration with AIST, RIKEN, and many other stakeholders, made this achievement a reality, helping us to successfully claim the top position in the MLPerf HPC benchmark in an important milestone for the HPC community. I would like to express my heartfelt gratitude to all concerned for their great cooperation and support. We are confident that these results will pave the way for the use of supercomputers for increasingly large-scale machine learning processing tasks and contribute to many research and development projects in the future, and we are proud that Japan's research and development capabilities will help lead global efforts in this field.

Hirotaka Ogawa, Principal Research Manager, Artificial Intelligence Research Center, AIST: ABCI was launched on August 1, 2018 as an open, advanced, and high-performance computing infrastructure for the development of artificial intelligence technologies in Japan. Since then, it has been used in industry-academia-government collaboration and by a diverse range of businesses, to accelerate R&D and verification of AI technologies that utilize high computing power, and to advance social utilization of AI technologies. The overwhelming results on MLPerf HPC, the benchmark for large-scale machine learning processing, showed the world the high level of technological capability of Japan's industry-academia-government collaboration. AIST's Artificial Intelligence Research Center is promoting the construction of large-scale machine learning models with high versatility and the development of its application technologies, with the aim of realizing easily constructable AI. We expect that these results will be utilized in such technological development.

Satoshi Matsuoka, Director General, RIKEN Center for Computational Science: In this memorable first MLPerf HPC, Fugaku, Japan's top CPU supercomputer, along with AIST's ABCI, Japan's top GPU supercomputer, exhibited extraordinary performance and results, serving as a testament to Japan's ability to compete at an exceptional level on the global stage in the area of AI research and development. I only regret that we couldn't achieve the overwhelming performance we did for HPL-AI while complying with the inaugural regulations for the MLPerf HPC benchmark. In the future, as we continue to further improve the performance of Fugaku, we will make ongoing efforts to take advantage of Fugaku's super-large-scale environment in the area of high-performance deep learning in cooperation with various stakeholders.

About Fujitsu

Fujitsu is a leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 130,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.9 trillion yen (US$35 billion) for the fiscal year ended March 31, 2020. For more information, please see http://www.fujitsu.com.

About National Institute of Advanced Industrial Science & Technology (AIST)

AIST is the largest public research institute in Japan, established in 1882. The research fields of AIST cover all the industrial sciences, e.g., electronics, material science, life science, metrology, etc. Our missions are bridging the gap between basic science and industrialization and solving social problems facing the world. We prepare several open innovation platforms to contribute to these missions, where researchers from companies, university professors, graduate students, as well as AIST researchers, get together to achieve our missions. The open innovation platform established most recently is the Global Zero Emission Research Center, which contributes to achieving a zero-emission society in collaboration with foreign researchers. https://www.aist.go.jp/index_en.html

About RIKEN Center for Computational Science

RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan, including the RIKEN Center for Computational Science (R-CCS), the home of the supercomputer Fugaku. As the leadership center of high-performance computing, the R-CCS explores the science of computing, by computing, and for computing. The outcomes of this exploration, such as open-source software technologies, are its core competence. The R-CCS strives to enhance this core competence and to promote the technologies throughout the world.

Source: Fujitsu

View post:
Fujitsu, AIST and RIKEN Achieve Unparalleled Speed on MLPerf HPC Machine Learning Processing Benchmark - HPCwire

Read More..

SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD – Sports Video Group

This fall SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Fall SportsTech Journal HERE.

Following the height of the 2020 global pandemic, live sports are starting to re-emerge worldwide, albeit predominantly behind closed doors. For the majority of sports fans, video is the only way they can watch and engage with their favorite teams or players. This means the quality of the viewing experience itself has become even more critical.

With UHD being adopted by both households and broadcasters around the world, there is a marked expectation around visual quality. To realize these expectations in the immediate term, it will be necessary for some years to up-convert from HD to UHD when creating 4K UHD sports channels and content.

This is not so different from the early days of HD, where SD sporting related content had to be up-converted to HD. In the intervening years, however, machine learning as a technology has progressed sufficiently to be a serious contender for performing better up-conversions than with more conventional techniques, specifically designed to work for TV content.

Ideally, we want to process HD content into UHD with a simple black box arrangement.

The problem with conventional up-conversion, though, is that it does not offer an improved resolution, so does not fully meet the expectations of the viewer at home watching on a UHD TV. The question, therefore, becomes: can we do better for the sports fan? If so, how?

UHD is a progressive-scan format, with the native TV formats being 3840×2160, known as 2160p59.94 (usually abbreviated to 2160p60) or 2160p50. The corresponding HD formats, with the frame/field rates set by region, are either progressive 1280×720 (720p60 or 720p50) or interlaced 1920×1080 (1080i30 or 1080i25).

Conversion from HD to UHD for progressive images at the same rate is fairly simple. It can be achieved using spatial processing only. Traditionally, this might typically use a bi-cubic interpolation filter (a two-dimensional interpolation commonly used for photographic image scaling). This uses a grid of 4×4 source pixels and interpolates intermediate locations in the center of the grid. The conversion from 1280×720 to 3840×2160 requires a 3× scaling factor in each dimension and is almost the ideal case for an upsampling filter.

These types of filters can only interpolate, resulting in an image that is a better result than nearest-neighbor or bi-linear interpolation, but does not have the appearance of being a higher resolution.
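For illustration, the cubic-convolution kernel behind a typical bicubic filter can be sketched in one dimension; a 2-D filter applies the same kernel separably to rows and columns. This is a minimal sketch using the common Keys kernel with a = -0.5, not broadcast-grade code.

```python
def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5), the basis of bicubic filters."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def upsample_1d(samples, factor):
    """Upsample one scanline; each output taps the 4 nearest source samples."""
    out = []
    n = len(samples)
    for i in range(n * factor):
        x = i / factor  # position in source coordinates
        acc = 0.0
        for k in range(int(x) - 1, int(x) + 3):
            s = samples[min(max(k, 0), n - 1)]  # clamp at the borders
            acc += s * cubic_kernel(x - k)
        out.append(acc)
    return out

line = [10, 20, 40, 30]
print(len(upsample_1d(line, 3)))  # 12 output samples from 4 inputs
```

Note that at the original sample positions the kernel weights collapse to exactly one source pixel, which is precisely why such a filter can only interpolate and never adds detail.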

Machine Learning (ML) is a technique whereby a neural network learns patterns from a set of training data. Images are large, and it becomes unfeasible to create neural networks that process this data as a complete set. So a different structure is used for image processing, known as Convolutional Neural Networks (CNNs). CNNs are structured to extract features from the images by successively processing subsets of the source image, and then process the features rather than the raw pixels.
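The elementary operation inside a CNN layer is a small kernel slid across the image. A minimal sketch of that feature extraction (in the cross-correlation form deep-learning frameworks actually implement), with a toy edge-detecting kernel:

```python
def convolve2d(patch, kernel):
    """Valid-mode 2-D sliding-window product: one CNN feature-extraction step."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(patch) - kh + 1
    w = len(patch[0]) - kw + 1
    return [
        [
            sum(patch[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for x in range(w)
        ]
        for y in range(h)
    ]

edge_kernel = [[-1, 1]]  # responds to horizontal intensity steps
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9]]
features = convolve2d(patch, edge_kernel)
print(features)  # [[0, 9, 0], [0, 9, 0]]: the step between columns lights up
```

A real CNN stacks many such kernels, learned rather than hand-picked, and feeds the resulting feature maps through non-linearities into further layers.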

Up-conversion process with neural network processing

The inbuilt non-linearity, in combination with feature-based processing, means CNNs can invent data not present in the original image. In the case of up-conversion, we are interested in the ability to create plausible new content that was not present in the original image, but that doesn't modify the nature of the image too much. The CNN used to create the UHD data from the HD source is known as the Generator CNN.

When input source data needs to be propagated through the whole chain, possibly with scaling involved, then a specific variant of a CNN known as a Residual Network (ResNet) is used. A ResNet has a number of stages, each of which includes a contribution from a bypass path that carries the input data. For this study, a ResNet with scaling stages towards the end of the chain was used as the Generator CNN.
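A ResNet stage with its bypass path can be sketched as follows (a single-channel numpy stand-in for what would really be a multi-channel learned layer):

```python
import numpy as np

def conv3x3(x, weights):
    # Same-padding 3x3 convolution (single channel) as the stage's workhorse.
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * weights)
    return out

def residual_stage(x, w1, w2):
    # Conv -> ReLU -> Conv, plus a bypass path carrying the input data through.
    y = np.maximum(conv3x3(x, w1), 0)
    return x + conv3x3(y, w2)   # output = input + learned residual

x = np.random.default_rng(0).random((6, 6))
w_zero = np.zeros((3, 3))
# With zero weights the residual is zero and the input passes through intact --
# the bypass is what lets long ResNet chains propagate the source data.
out = residual_stage(x, w_zero, w_zero)
```

In the study's arrangement, scaling stages toward the end of such a chain lift the resolution from HD to UHD.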

For the Generator CNN to do its job, it must be trained on a set of known data: patches of reference images, with a comparison made between the output and the original. For training, the originals are a set of high-resolution UHD images; these are down-sampled to produce HD source images, which are then up-converted and finally compared with the originals.

The difference between the original and synthesized UHD images is calculated by the compare function, with the error signal fed back to the Generator CNN. Progressively, the Generator CNN learns to create images with features more similar to the original UHD images.
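That feedback loop can be reduced to a toy with a single learnable parameter; the synthetic training pair and the nearest-neighbour "generator" below are stand-ins, not the study's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)

def upsample_nn(x, scale=2):
    # Nearest-neighbour upsampling stands in for the generator's spatial stages.
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

hd_source = rng.random((4, 4)) + 0.5
# Synthetic "original UHD" constructed so the ideal generator parameter is
# known exactly (a gain of 1.5), letting us watch the training converge.
uhd_original = 1.5 * upsample_nn(hd_source)

gain, lr = 0.2, 0.01   # one-parameter "Generator CNN" and its learning rate
for _ in range(1000):
    synthesized = gain * upsample_nn(hd_source)
    error = synthesized - uhd_original          # the compare function
    # Subgradient of the sum-of-absolute-differences loss, fed back to the generator.
    grad = np.mean(np.sign(error) * upsample_nn(hd_source))
    gain -= lr * grad
# gain has converged to ~1.5: the generator learned to reproduce the originals.
```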

The training process depends on the data set used, and the neural network tries to fit the characteristics seen during training onto the current image. This is intriguingly illustrated on Google's AI Blog [1], where a neural network presented with a random noise pattern introduces shapes like those used during training. It is therefore important that a diverse, representative content set is used for training. Patches from about 800 different images were used during MediaKind's research.

The compare function affects the way the Generator CNN learns to process the HD source data. The easiest to calculate is a sum of absolute differences between original and synthesized images, but this causes an issue of training-set imbalance: real pictures consist largely of areas with relatively little fine detail, so the data set is biased toward regenerating a result very similar to that of a bicubic interpolation filter.

This doesn't really achieve the objective of creating plausible fine detail.

Generative Adversarial Networks (GANs) are a relatively new concept [2], in which a second neural network, known as the Discriminator CNN, is trained alongside the Generator CNN. The Discriminator CNN learns to detect the difference between features characteristic of original UHD images and of synthesized ones. During training, the Discriminator CNN sees either an original UHD image or a synthesized one; the correctness of its decision is fed back to the Discriminator and, if the image was a synthesized one, also to the Generator CNN.

Each CNN is attempting to beat the other: the Generator by creating images that have characteristics more like originals, while the Discriminator becomes better at detecting synthesized images.

The result is the synthesis of feature details that are characteristic of original UHD images.
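The adversarial game can be sketched as a toy in which images are reduced to a single scalar "detail statistic" and all gradients are derived by hand (everything here is illustrative; real GANs pit deep networks against each other on full images):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

real, g = 1.0, 0.2          # real UHD patches score 1.0; the generator starts low
w, b = 1.0, 0.0             # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(300):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
    grad_w = (d_real - 1.0) * real + d_fake * g
    grad_b = (d_real - 1.0) + d_fake
    w -= lr * grad_w
    b -= lr * grad_b
    # --- Generator step: move g so that D(fake) rises toward 1 ---
    d_fake = sigmoid(w * g + b)
    g -= lr * (d_fake - 1.0) * w

# g has been driven up toward the real statistic: the generator is learning to
# synthesize the level of detail the discriminator associates with originals.
```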

With a GAN approach, there is no real constraint on the Generator CNN's ability to create new detail everywhere, which means it can produce images that diverge from the original in more general ways. A combination of both compare functions offers a better balance, retaining the detail regeneration while limiting divergence. This produces results that are subjectively better than conventional up-conversion.
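A combined compare function simply weights a pixel-difference term against an adversarial term; the 1e-3 weighting below is an illustrative value (common in SRGAN-style work), not necessarily the one used in the study:

```python
import numpy as np

def combined_loss(synth, original, d_fake_score, adv_weight=1e-3):
    # Pixel term anchors the output to the source image (limits divergence);
    # adversarial term rewards detail the discriminator accepts as real.
    pixel_loss = np.mean(np.abs(synth - original))
    adversarial_loss = -np.log(d_fake_score + 1e-12)
    return pixel_loss + adv_weight * adversarial_loss

# A mismatched patch with an unconvinced discriminator yields a loss near 1.0.
loss = combined_loss(np.zeros((2, 2)), np.ones((2, 2)), d_fake_score=0.5)
```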

Conversion from interlaced 1080i to 2160p60 is necessarily more complex than from 720p60. Starting from 1080i, there are three basic approaches to up-conversion: de-interlace to progressive frames and then up-convert each frame; up-convert each field independently using spatial processing; or feed multiple fields directly to the neural network and let it learn the de-interlacing as part of the conversion.

Training data is again required, and here it must come from 2160p video sequences. A set of fields is created by downsampling, with each field coming from a different frame in the original 2160p sequence, so the fields are not temporally co-located.
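The field-creation step can be sketched with array slicing; tiny arrays and plain decimation stand in for real frames and proper downsampling filters:

```python
import numpy as np

# Four consecutive "2160p" frames (tiny stand-ins: 8 lines of 8 pixels).
rng = np.random.default_rng(0)
frames = rng.random((4, 8, 8))

# Downsample each frame to HD frame size (here 4x4 via decimation), then take
# alternating line sets: even lines from frame 0, odd lines from frame 1, ...
hd_frames = frames[:, ::2, ::2]
fields = [hd_frames[t, t % 2::2, :] for t in range(hd_frames.shape[0])]
# Each field holds half the HD lines and comes from a different time instant,
# so consecutive fields are not temporally co-located -- matching real 1080i.
```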

Surprisingly, field-based up-conversion tended to give better results than converting de-interlaced frames, even when sophisticated motion-compensated de-interlacing was used: the frame-based conversion was dominated by artifacts from the de-interlacing process. However, potentially useful data from the opposite fields did not contribute to the result, so the field-based approach also missed information that could have produced a better result.

A solution to this is to feed the data from multiple fields directly into a modified Generator CNN, letting the GAN learn how best to perform the de-interlacing function. This approach was adopted, and the network was re-trained with a new set of video-based data in which adjacent fields were also provided.
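Presenting multiple fields to a Generator CNN amounts to stacking adjacent fields as input channels, so the network sees the temporal neighbourhood of each field; a minimal sketch (all shapes illustrative):

```python
import numpy as np

def multi_field_input(fields, t):
    # Stack the previous, current and next field as input channels so the
    # Generator CNN can learn de-interlacing and upscaling jointly.
    return np.stack([fields[t - 1], fields[t], fields[t + 1]], axis=0)

# Five interlaced fields (tiny 4x8 stand-ins, valued by time index for clarity).
fields = [np.full((4, 8), float(t)) for t in range(5)]
x = multi_field_input(fields, 2)   # channels-first tensor of shape (3, 4, 8)
```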

This led to both high visual spatial resolution and good temporal stability. The results are, of course, best judged on a video sequence; however, a single frame from a test sequence shows the comparison:

Comparison of a sample frame from different up-conversion techniques against original UHD

Up-conversion using a hybrid GAN with multiple fields was effective across a range of content, and is especially relevant to the sports viewing experience. It offers a realistic means of creating content with more of the appearance of UHD from both progressive and interlaced HD sources, which in turn can enable an improved experience for the fan at home watching a UHD sports channel.

[1] A. Mordvintsev, C. Olah and M. Tyka, "Inceptionism: Going Deeper into Neural Networks," 2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

[2] I. Goodfellow et al., "Generative Adversarial Nets," Advances in Neural Information Processing Systems, vol. 27, 2014.

Source: SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD - Sports Video Group
