
When will 5G arrive? – Verizon Communications

For many cities across the U.S., 5G has already arrived.

Verizon has been preparing for the launch of 5G over the past few years and has been testing the technology in a variety of markets to ensure it is built right. Verizon's 5G Ultra Wideband mobile network officially launched in April of 2019 in Chicago and Minneapolis and is expected to reach more than 30 cities nationwide by year end. In addition to 5G mobility, Verizon also offers 5G Home broadband internet service in select cities in the United States.

Verizon's 5G network can help spur the continued growth of business, education and technology around the world. It has the potential to support millions of devices at once, improve accessibility, expand the capabilities of broadband, and advance public safety, health and security.

The following cities and neighborhoods have already welcomed the Verizon 5G mobile network in 2019 and there are more on the way.

Expect to see Verizon launch 5G in more cities across the country throughout the rest of the year. These will be in both large and small cities, focusing on areas like business districts, public spaces and tourist gathering spots to provide super-fast speeds in high-density regions. For example, if you visit Millennium Park in Chicago or the National Mall in Washington, D.C., and you have a 5G device, you can experience the power of Verizon 5G Ultra Wideband in parts of these locales.

Learn more about all there is to come with Verizon 5G as well as 5G Home internet. Don't forget to check out our selection of 5G phones.


New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing – Mozilla & Firefox

New community effort will create a new cross-platform, cross-device computing runtime based on the unique advantages of WebAssembly

MOUNTAIN VIEW, California, November 12, 2019 – The Bytecode Alliance is a newly-formed open source community dedicated to creating new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI). Mozilla, Fastly, Intel, and Red Hat are founding members.

The Bytecode Alliance will, through the joint efforts of its contributing members, deliver a state-of-the-art runtime environment and associated language toolchains, where security, efficiency, and modularity can all coexist across the widest possible range of devices and architectures. Technologies contributed and collaboratively evolved through the Alliance leverage established innovation in compilers, runtimes, and tooling, and focus on fine-grained sandboxing, capabilities-based security, modularity, and standards such as WebAssembly and WASI.

Founding members are making several open source project contributions to the Bytecode Alliance, including the Wasmtime and Lucet WebAssembly runtimes, the Cranelift code generator, and the WebAssembly Micro Runtime (WAMR).

Modern software applications and services are built from global repositories of shared components and frameworks, which greatly accelerates creation of new and better multi-device experiences but understandably increases concerns about trust, data integrity, and system vulnerability. The Bytecode Alliance is committed to establishing a capable, secure platform that allows application developers and service providers to confidently run untrusted code, on any infrastructure, for any operating system or device, leveraging decades of experience doing so inside web browsers.

Partner quotes:

Mozilla:

"WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what's broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations for everything from small, embedded devices to large computing clouds," said Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly.

Fastly:

"Fastly is very happy to help bring the Bytecode Alliance to the community," said Tyler McMullen, CTO at Fastly. "Lucet and Cranelift have been developed together for years, and we're excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on."

Intel:

"Intel is joining the Bytecode Alliance as a founding member to help extend WebAssembly's performance and security benefits beyond the browser to a wide range of applications and servers. Bytecode Alliance technologies can help developers extend software using a wide selection of languages, building upon the full capabilities of leading-edge compute platforms," said Mark Skarpness, VP, Intel Architecture, Graphics, and Software; Director, Data-Centric System Stacks.

Red Hat:

"Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing, from the operating system to the browser to the open hybrid cloud," said Chris Wright, senior vice president and Chief Technology Officer at Red Hat. "Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space, where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project."


About Mozilla

Mozilla has been a pioneer and advocate for the web for more than 20 years. We are a global organization with a mission to promote innovation and opportunity on the Web. Today, hundreds of millions of people worldwide use the popular Firefox browser to discover, experience, and connect to the Web on computers, tablets and mobile phones. Together with our vibrant, global community of developers and contributors, we create and promote open standards that ensure the internet remains a global public resource, open and accessible to all.


Artificial Intelligence Essay – 966 Words | Bartleby

Computers are everywhere today. It would be impossible to go your entire life without using a computer. Cars, ATMs, and TVs that we use every day all contain computers. It is for this reason that computers and their software have to become more intelligent, to make our lives easier and computers more accessible. Intelligent computer systems can and do benefit us all; however, people have constantly warned that making computers too intelligent can be to our disadvantage. Artificial intelligence, or AI, is a field of computer science that attempts to simulate characteristics of human intelligence or senses. These include learning, reasoning, and adapting. This field studies the designs of intelligent…

Expert systems are also known as knowledge-based systems. These systems rely on a basic set of rules for solving specific problems and are capable of learning. The rules are defined for the system by experts and then implemented using if-then rules. These systems basically imitate the expert's thought process in solving the problem. An example of this is a system that diagnoses medical conditions. The doctor would input the symptoms into the computer system, and it would then ask more questions if needed or give a diagnosis. Other examples include banking systems for acceptance of loans, advanced calculators, and weather prediction. Natural language systems allow computers to interact with the user in ordinary language. They accept, interpret, and execute commands in this language. The attempt is to allow a more natural interaction between the computer and user. Language is sometimes thought to be the foundation of intelligence in humans; therefore, it is reasonable for intelligent systems to be able to understand language. Some of these systems are advanced enough to hold conversations. A system that emulates human senses uses human sensory simulation. These can include methods of sight, sound, and touch. A very common implementation of this intelligence is in voice recognition software. It listens to what the user says, interprets the sounds, and displays the information on the screen. These are…
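
As a rough illustration of the if-then approach described above, here is a minimal rule-based sketch in Python. The symptom names, rules, and conclusions are invented for illustration only and do not represent a real diagnostic knowledge base.

    # Minimal rule-based "expert system" sketch: each rule pairs a set of
    # required symptoms (conditions) with a conclusion.
    RULES = [
        ({"fever", "cough", "fatigue"}, "possible influenza"),
        ({"sneezing", "runny nose"}, "possible common cold"),
        ({"headache", "light sensitivity"}, "possible migraine"),
    ]

    def diagnose(symptoms):
        """Fire every rule whose conditions are all present in the reported symptoms."""
        matches = [conclusion for conditions, conclusion in RULES
                   if conditions <= set(symptoms)]
        return matches or ["no rule matched; more questions needed"]

    print(diagnose(["fever", "cough", "fatigue"]))  # ['possible influenza']
    print(diagnose(["sneezing"]))                   # ['no rule matched; more questions needed']

A real expert system would add many more rules, certainty factors, and a dialogue loop for follow-up questions, but the core mechanism is this same condition-to-conclusion matching.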


AI News: Track The Latest Artificial Intelligence Trends And …

Investors beware: there's plenty of buzz around artificial intelligence (AI) as more and more companies say they're using it. In some cases, companies are using older data analytics tools and labeling them as AI for a public relations boost. But identifying companies actually getting material revenue growth from AI can be tricky.

AI uses computer algorithms to replicate the human ability to learn and make predictions. AI software needs computing power to find patterns and make inferences from large quantities of data. The two most common types of AI tools are called "machine learning" and "deep learning networks."
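
To make "learning and making predictions" concrete, the short Python sketch below fits a simple model to a handful of labeled examples and then predicts a label for unseen data. It assumes the scikit-learn library; the numbers are synthetic and purely illustrative.

    from sklearn.linear_model import LogisticRegression

    # Toy training data: [feature 1, feature 2] -> label (0 or 1)
    X = [[1, 2], [2, 1], [1, 3], [8, 50], [10, 80], [9, 60]]
    y = [0, 0, 0, 1, 1, 1]

    model = LogisticRegression().fit(X, y)   # find a pattern in the labeled examples
    print(model.predict([[7, 40]]))          # infer a label for data it has not seen

Deep learning networks apply the same fit-then-predict idea, but with many-layered models that need far more data and computing power.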

Nvidia (NVDA) is one company that can lay claim to AI-driven growth. Internet and tech companies buy its processors for cloud computing. Nvidia's AI chips also are helping guide some self-driving cars in early trials.

Startups are racing to build AI chips for data centers, robotics, smartphones, drones and other devices. Tech giants Apple (AAPL), Google-parent Alphabet (GOOGL), Facebook (FB) and Microsoft (MSFT) have forged ahead in applying AI software to speech recognition, internet search, and classifying images. Amazon.com's AI prowess spans cloud-computing services and voice-activated home digital assistants.

Then, there are tech companies that embed AI tools in their own products to make them better. Those include video streamer Netflix (NFLX), payment processor PayPal (PYPL), Salesforce.com (CRM) and Facebook.

Customers of tech companies spanning banks and finance, health care, energy, retail, agriculture and other sectors are expected to increase spending on AI to get productivity gains or a strategic edge on rivals.

Bookmark this page to stay on top of the latest AI trends and developments.




4 Reasons to Use Artificial Intelligence in Your Next Embedded Design – DesignNews

For many, just mentioning artificial intelligence brings up mental images of sentient robots at war with mankind and man's struggle to avoid the endangered species list. While this may one day be a real scenario for when (perhaps a big if?) mankind ever creates an artificial general intelligence (AGI), the more pressing matter is whether embedded software developers should be embracing or fearing the use of artificial intelligence in their systems. Here are four reasons why you may want to include machine learning in your next project.

Reason #1 – Marketing Buzz

From an engineering perspective, including a technology or methodology in a design simply because it has marketing buzz is something that every engineer should fight. The fact, though, is that if there is buzz around something, odds are it will in the end help sell the product. Technology marketing seems to come in cycles, but there are always underlying themes driving those cycles that, at the end of the day, turn out to be real.

Artificial intelligence has progressed through the years, with deep learning on the way. (Image source: Oracle)

Machine learning has a ton of buzz around it right now. I'm finding this year that, at industry events, machine learning typically makes up at least 25% of the talks. I've had several clients tell me that they need machine learning in their product, and when I ask them their use case and why they need it, the answer is just that they need it. I've heard this same story from dozens of colleagues, but the push for machine learning seems relentless right now. The driver is not necessarily engineering, but simply leveraging industry buzz to sell product.

Reason #2 – The Hardware Can Support It

It's truly amazing how much microcontrollers and application processors have changed in just the last few years. Microcontrollers, which I have always considered to be resource-constrained devices, are now supporting megabytes of flash and RAM, with on-board cache, and reaching system clock rates of 1 GHz and beyond! These little controllers are now even supporting DSP instructions, which means that they can efficiently execute machine learning inferences.

With the amount of computing power available on these processors, it may not require much additional cost on the BOM to be able to support machine learning. If there's no added cost, and the marketing department is pushing for it, then leveraging machine learning might make sense simply because the hardware can support it!

Reason #3 – It May Simplify Development

Machine learning has risen on the buzz charts for a reason. It has become a nearly indispensable tool for the IoT and the cloud. Machine learning can dramatically simplify software development. For example, have you ever tried to code up an application that can recognize gestures or handwriting, or classify objects? These are really simple problems for a human brain to solve, but extremely difficult to write a program for. In certain problem domains such as voice recognition, image classification and predictive maintenance, machine learning can dramatically simplify the development process and speed up development.

With an ever expanding IoT and more data than one could ever hope for, it's becoming far easier to classify large datasets and then train a model to use that information to generate the desired outcome for the system. In the past, developers may have had configuration values or acceptable operation bounds that were constantly checked during runtime. These often involved lots of testing and a fair amount of guessing. Through machine learning this can all be avoided by providing the data, developing a model and then deploying the inference on an embedded system.
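
The sketch below illustrates that data-model-inference flow (and nothing more): it trains a tiny neural network on placeholder sensor windows and converts it to a TensorFlow Lite flatbuffer of the kind an on-device inference engine can run. It assumes TensorFlow 2.x; the data, model size, and file name are arbitrary stand-ins.

    import numpy as np
    import tensorflow as tf

    # Placeholder data: 200 windows of 32 sensor samples, labeled with one of 2 gestures.
    X = np.random.rand(200, 32).astype("float32")
    y = np.random.randint(0, 2, size=(200,))

    # Develop a small model from the data.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, y, epochs=3, verbose=0)

    # Convert the trained model into a compact flatbuffer for deployment on the device.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open("gesture_model.tflite", "wb") as f:
        f.write(converter.convert())

On the embedded side, the exported file is loaded by a lightweight interpreter and fed live sensor data, replacing the hand-tuned thresholds and configuration values described above.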

Reason #4 – To Expand Your Solution Toolbox

One aspect of engineering that I absolutely love is that the tools and technologies we use to solve problems and develop products are always changing. Just look at how you developed an embedded system one, three and five years ago! While some of your approaches have undoubtedly stayed constant, there should have been considerable improvements and additions to your processes that have improved your efficiency and the way that you solve problems.

Leveraging machine learning is yet another tool to add to the toolbox, one that in time will prove indispensable for developing embedded systems. However, that tool will never be sharpened if developers don't start to learn about, evaluate and use it. While it may not make sense to deploy a machine learning solution for a product today or even next year, understanding how it applies to your product and customers, along with its advantages and disadvantages, can help ensure that when the technology is more mature, it will be easier to leverage for product development.

Real Value Will Follow the Marketing Buzz

There are a lot of reasons to start using machine learning in your next design cycle. While I believe marketing buzz is one of the biggest driving forces for tinyML right now, I also believe that real applications are not far behind and that developers need to start experimenting today if they are going to be successful tomorrow. While machine learning for embedded holds great promise, there are several issues that I think should strike a little bit of fear into the cautious developer such as:

These are concerns for a later time though, once we've mastered just getting our new tool to work the way that we expect it to.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan. Feel free to contact him through his website and sign up for his monthly Embedded Bytes Newsletter.



Artificial intelligence has become a driving force in everyday life, says LivePerson CEO – CNBC

2020 is going to be a big year – for artificial intelligence, that is.

At least, that was the message LivePerson CEO Robert LoCascio delivered to CNBC's Jim Cramer on Friday.

"When we think about 2020, I really think it's the start of everyone having [AI]," LoCascio said on "Mad Money." "AI is now becoming something that's not just out there. It's something that we use to drive everyday life."

LivePerson, based in New York City, provides the mobile and online messaging technology that companies use to interact with customers.

Shares of LivePerson closed up just over 5% on Friday, at $38.32. While the stock sits below its 52-week high of $42.85, it is up more than 100% for the year.

It reported earnings last week, with total revenue at $75.2 million for the third quarter, which is up 17% compared with the same quarter in 2018.

More than 18,000 companies use LivePerson, including four of the five largest airlines, LoCascio said. Around 60 million digital conversations happen through LivePerson each month, he said.

"You can buy shoes with AI on our platform. You can do airlines. You can do T-Mobile, change your subscription with T-Mobile," he said. "That's the stuff in everyday life."

The world has entered a point where technology has transformed all aspects of communication, LoCascio said.

"Message your brand like you message your friends and family," he said, predicting a day where few people want to pick up the phone and call a company to ask questions. "We're powering all that ... for some of the biggest brands in the world."

LoCascio said LivePerson, which he founded in 1995, now uses AI to power about 50% of the conversations on its platform.

"We're one of the few companies where it's not a piece of the puzzle. It's the entire puzzle," he said.


Artificial Intelligence Will Enable the Future, Blockchain Will Secure It – Cointelegraph

Speaking at BlockShow Asia 2019, Todalarity CEO Toufi Saliba posed a hypothetical question to the audience: how many people would take a pill that made them smarter, knowing it could be controlled by a social entity?

No one raised their hand, and he was unsurprised.

"That's the response that I get, zero percent of you," he continued. "Now imagine at the same time the pill has autonomous decentralized governance so that no one can control or repurpose that pill but the host yourself."

This time hands were raised in abundance. Decentralized governance represents a necessary step for the tech community to build up trust in digital developments related to securely managing big data.

"Economics and ethics can go together thanks to decentralization," commented SingularityNET CEO Ben Goertzel.

But does decentralized governance represent a step forward from centralization, or is it just an illusion of evolution? Cole Sirucek, co-founder of DocDoc, shared his vision:

"It is when we are at a point of centralizing data that you can begin to think about decentralization. For example, electronic medical records: in five years the data will be centralized. After that, you can decentralize it."

Goertzel didn't fully agree: "I don't think it is intrinsic. The reason centralized systems are simpler to come by is how institutions are built right now. There is nothing natural about centralization of data." He elaborated on the mutual dependence of two important technologies:

"Blockchain is not as complex as AI, but it is a necessary component of the future. Without BTC, you don't have means of decentralized governance. AI enables the future, blockchain secures it."


One way for the Pentagon to prove it’s serious about artificial intelligence – C4ISRNet

Department of Defense officials routinely talk about the need to more fully embrace machine learning and artificial intelligence, but one leader in the Marine Corps said those efforts are falling short.

"We're not serious about AI. If we were serious about AI, we would put all of our stuff into one location," Lt. Gen. Eric Smith, commander of the Marine Corps Combat Development Command and the Deputy Commandant for Combat Development and Integration, said at an AFCEA Northern Virginia chapter lunch Nov. 15.

Smith was broadly discussing the ability to provide technologies and data that's collected in large quantities and pushed to the battlefield and tactical edge. Smith said leaders want the ability to send data to a 50-60 Marine cell in the Philippines that might be surrounded by the Chinese. That means being able to manage the bandwidth and signature so that those forces aren't digitally targeted. That ability doesn't currently exist, he said.

He pointed to IBM's Watson computer, noting that the system is able to conduct machine learning and artificial intelligence because it connects to the internet, which allows it to draw from a much wider data pool to learn from. Military systems aren't traditionally connected to the broader commercial internet and thus are limited from a machine learning standpoint.

"We have stovepipes of excellence everywhere from interagency, CIA, NSA. The Navy's got theirs, the Marine Corps has got theirs, everybody's got theirs. You can't do AI when the machine can't learn from one pool of data," he said.

Smith noted that he was not speaking on behalf of the entire department.

Pentagon leadership has come to similar conclusions. Top officials have noted that one of the critical roles the Joint Enterprise Defense Infrastructure cloud program will do is provide a central location for data.

"The warfighter needed enterprise cloud yesterday. Dominance in A.I. is not a question of software engineering. But instead, it's the result of combining capabilities at multiple levels: code, data, compute and continuous integration and continuous delivery. All of these require the provisioning of hyper-scale commercial cloud," Lt. Gen. Jack Shanahan, director of the Joint AI Center, said in August. "For A.I. across DOD, enterprise cloud is existential. Without enterprise cloud, there is no A.I. at scale. A.I. will remain a series of small-scale stovepipe projects with little to no means to make A.I. available or useful to warfighters. That is, it will be too hard to develop, secure, update and use in the field. JEDI will provide on-demand, elastic compute at scale, data at scale, substantial network and transport advantages, DevOps and a secure operating environment at all classification levels."

Overall, Smith said that industry should start calling out DoD when policies or technical requirements hinder what it can offer.

"If we're asking for something that is unobtanium, or if our policies are keeping you from producing something we can buy, you've got to tell us," he said.


Indiana University Touts Big Red 200 and Artificial Intelligence at SC19 – HPCwire

DENVER, Nov. 14 – Big Red 200 will be the fastest university-owned artificial intelligence supercomputer in the nation when it is installed at Indiana University in January. Named in honor of the university's 2020 bicentennial, Big Red 200 is just one of many high-performance computing tools and resources on display at the 31st annual International Conference for High-Performance Computing, Networking, Storage, and Analysis, November 17-22 in Denver.

IU's Pervasive Technology Institute, Global Research Network Operations Center, and the Luddy School of Informatics, Computing, and Engineering (SICE) will team up to host an IU Bicentennial- and artificial intelligence-themed booth (#643), showcasing current research and educational initiatives. As one of the world's largest HPC events, SC attracts thousands of scientists, researchers, and IT experts from across the world.

This year, Geoffrey Fox, a distinguished professor at SICE, has been named the 2019 recipient of the Association for Computing Machinery/IEEE Computer Society's Ken Kennedy Award. Fox was honored for foundational contributions to parallel computing methodology, algorithms and software, and data analysis, and their interfaces with broad classes of applications. The award will be presented at the SC19 awards plenary session Tuesday, November 19.

Artificial intelligence will take center stage in the IU booth, thanks to a $60 million naming gift from alumnus and tech pioneer Fred Luddy to establish a multidisciplinary initiative in artificial intelligence. Announced in October, the gift will fund the creation of six endowed chairs, six endowed professorships, and six endowed faculty fellowships, as well as graduate and undergraduate scholarships, including scholarships for high-achieving Hoosier students.

In addition, the IU booth will feature the following expert presentations:

For more information on IU's SC19 presence, please visit booth #643.

About the Pervasive Technology Institute

IU's Pervasive Technology Institute is a collaborative organization with seven affiliated research and development centers, representing collaboration among the IU Office of the Vice President for IT and CIO (which leads the effort), University Information Technology Services, the Maurer School of Law, the Kelley School of Business, the Luddy School of Informatics, Computing, and Engineering, and the College of Arts and Sciences at IU. Its mission is to transform new innovations in cyberinfrastructure and computer science into robust tools and support the use of such tools in academic and private sector research and development. IU PTI does this while aiding the Indiana economy and helping to build Indiana's twenty-first century workforce.

About the Global Research Network Operations Center

The Global Research Network Operations Center (GlobalNOC) supports advanced international, national, regional and local high-performance research and education networks. GlobalNOC plays a major role in transforming the face of digital science, research, and education in Indiana, the United States, and the world by providing unparalleled network operations and engineering needed for reliable and cost-effective access to specialized facilities for research and education.

About the IU Luddy School of Informatics, Computing, and Engineering

The Luddy School of Informatics, Computing, and Engineering's rare combination of programs, including informatics, computer science, library science, information science, and intelligent systems engineering, makes SICE one of the largest, broadest, and most accomplished schools of its kind. The extensive programs are united by a focus on information and technology.

Source: Indiana University


NIST researchers use artificial intelligence for quality control of stem cell-derived tissues – National Institutes of Health

News Release

Thursday, November 14, 2019

Technique key to scale up manufacturing of therapies from induced pluripotent stem cells.

Researchers used artificial intelligence (AI) to evaluate stem cell-derived patches of retinal pigment epithelium (RPE) tissue for implanting into the eyes of patients with age-related macular degeneration (AMD), a leading cause of blindness.

The proof-of-principle study helps pave the way for AI-based quality control of therapeutic cells and tissues. The method was developed by researchers at the National Eye Institute (NEI) and the National Institute of Standards and Technology (NIST) and is described in a report appearing online today in the Journal of Clinical Investigation. NEI is part of the National Institutes of Health.

"This AI-based method of validating stem cell-derived tissues is a significant improvement over conventional assays, which are low-yield, expensive, and require a trained user," said Kapil Bharti, Ph.D., a senior investigator in the NEI Ocular and Stem Cell Translational Research Section.

"Our approach will help scale up manufacturing and will speed delivery of tissues to the clinic," added Bharti, who led the research along with Carl Simon Jr., Ph.D., and Peter Bajcsy, Ph.D., of NIST.

Cells of the RPE nourish the light-sensing photoreceptors in the eye and are among the first to die from geographic atrophy, commonly known as dry AMD. Photoreceptors die without the RPE, resulting in vision loss and blindness.

Bharti's team is working on a technique for making RPE replacement patches from AMD patients' own cells. Patient blood cells are coaxed in the lab to become induced pluripotent stem cells (iPSCs), which can become any type of cell in the body. The iPSCs are then seeded onto a biodegradable scaffold where they are induced to differentiate into mature RPE. The scaffold-RPE patch is implanted in the back of the eye, behind the retina, to rescue photoreceptors and preserve vision.

The patch successfully preserved vision in an animal model, and a clinical trial is planned.

The researchers' AI-based validation method employed deep neural networks, an AI technique that performs mathematical computations aimed at detecting patterns in unlabeled and unstructured data. The algorithm operated on images of the RPE obtained using quantitative bright-field absorbance microscopy. The networks were trained to identify visual indications of RPE maturation that correlated with positive RPE function.

Those single-cell visual characteristics were then fed into traditional machine-learning algorithms, which in turn helped the computers learn to detect discrete cell features crucial to the prediction of RPE tissue function.
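
The two-stage idea described above can be outlined in a few lines of Python: a deep network turns each image into a feature vector, and a traditional machine-learning model maps those features to a measured functional readout. This is only an illustrative sketch using common open source libraries (PyTorch, torchvision, scikit-learn); the network, feature size, and data are placeholders, not the pipeline used in the study.

    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder batch of bright-field image tiles: (N, 3, 224, 224).
    images = torch.rand(8, 3, 224, 224)

    # Deep network used purely as a feature extractor (classification head removed;
    # randomly initialized here, whereas a trained network would be used in practice).
    backbone = models.resnet18()
    backbone.fc = torch.nn.Identity()
    backbone.eval()
    with torch.no_grad():
        features = backbone(images).numpy()  # one 512-value feature vector per tile

    # Traditional machine learning maps image features to a measured function,
    # e.g. transepithelial resistance (placeholder values here).
    resistance = np.random.rand(8)
    predictor = RandomForestRegressor(n_estimators=50).fit(features, resistance)
    print(predictor.predict(features[:2]))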

The method was validated using stem cell-derived RPE from a healthy donor. Its effectiveness was then tested by comparing iPSC-RPE derived from healthy donors with iPSC-RPE from donors with oculocutaneous albinism disorder and with clinical-grade stem cell-derived RPE from donors with AMD.

In particular, the AI-based image analysis method accurately detected known markers of RPE maturity and function: transepithelial resistance, a measure of the junctions between neighboring RPE; and secretion of endothelial growth factors. The method also can match a particular iPSC-RPE tissue sample to other samples from the same donor, which helps confirm the identity of tissues during clinical-grade manufacturing.

"Multiple AI methods and advanced hardware allowed us to analyze terabytes and terabytes of imaging data for each individual patient, and do it more accurately and much faster than in the past," Bajcsy said.

"This work demonstrates how a garden variety microscope, if used carefully, can make a precise, reproducible measurement of tissue quality," Simon said.

The work was supported by the NEI Intramural Research Program and the Common Fund Therapeutics Challenge Award. The flow cytometry core, led by the National Heart, Lung and Blood Institute, also contributed to the research.

NEI leads the federal government's research on the visual system and eye diseases. NEI supports basic and clinical science programs to develop sight-saving treatments and address special needs of people with vision loss. For more information, visit https://www.nei.nih.gov.

About the National Institutes of Health (NIH):NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH…Turning Discovery Into Health

Schaub NJ, Hotaling NA, Manescu P, Padi S, Wan Q, Sharma R, George A, Chalfoun J, Simon M, Ouladi M, Simon CG, Bajcsy P, Bharti K. Deep learning predicts function of live retinal pigment epithelium from quantitative microscopy. In-press preview published online November 14, 2019 in J. Clin. Investigation.

###
