
Leveraging AI and Machine Learning to Advance Interoperability in Healthcare – HIT Consultant

(Left: Wilson To, Head of Worldwide Healthcare BD, Amazon Web Services (AWS); Right: Patrick Combes, Worldwide Technical Leader, Healthcare and Life Sciences, Amazon Web Services (AWS))

Navigating the healthcare system is often a complex journey involving multiple physicians from hospitals, clinics, and general practices. At each junction, healthcare providers collect data that serve as pieces in a patient's medical puzzle. When all of that data can be shared at each point, the puzzle is complete and practitioners can better diagnose, care for, and treat that patient. However, a lack of interoperability inhibits the sharing of data across providers, meaning pieces of the puzzle can go unseen and potentially impact patient health.

The Challenge of Achieving Interoperability

True interoperability requires two parts: syntactic and semantic. Syntactic interoperability requires a common structure so that data can be exchanged and interpreted between health information technology (IT) systems, while semantic interoperability requires a common language so that the meaning of data is transferred along with the data itself. This combination supports data fluidity. But for this to work, organizations must apply technologies like artificial intelligence (AI) and machine learning (ML) across that data to shift the industry from a fee-for-service model, in which government agencies reimburse healthcare providers based on the number of services they provide or procedures they order, to a value-based model that puts the focus back on the patient.

The industry has started to make significant strides toward reducing barriers to interoperability. For example, industry guidelines and resources like the Fast Healthcare Interoperability Resources (FHIR) have helped to set a standard, but there is still more work to be done. Among the biggest barriers in healthcare right now is the fact that there are significant variations in the way data is shared, read, and understood across healthcare systems, which can result in information being siloed and overlooked or misinterpreted.

For example, a doctor may know that a diagnosis of dropsy or edema may be indicative of congestive heart failure; a computer alone, however, may not be able to draw that parallel. Without syntactic and semantic interoperability, that diagnosis runs the risk of getting lost in translation when shared digitally with multiple health providers.
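
To make the syntactic/semantic distinction concrete, here is a minimal sketch of how that same diagnosis might travel between systems as a FHIR-style Condition resource: the shared structure carries the data (syntactic), while a standard terminology code carries the meaning (semantic), so "dropsy" and "edema" resolve to the same concept. The field values and codes below are illustrative only and are not taken from the article.

```python
# Illustrative only: a minimal FHIR-style Condition resource expressed as a Python dict.
# The shared structure gives syntactic interoperability; the standard terminology code
# (a SNOMED CT-style code, shown purely for illustration) supplies the semantics, so
# "dropsy" and "edema" resolve to the same clinical concept regardless of the local label.

condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example"},
    "code": {
        "coding": [
            {
                "system": "http://snomed.info/sct",
                "code": "267038008",        # illustrative code for the edema concept
                "display": "Edema",
            }
        ],
        "text": "dropsy",                   # the clinician's local wording is preserved
    },
}

# A receiving system keys off the code, not the free-text label.
RISK_ASSOCIATIONS = {"267038008": "possible congestive heart failure work-up"}

code = condition["code"]["coding"][0]["code"]
print(RISK_ASSOCIATIONS.get(code, "no associated rule"))
```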

Employing AI, ML and Interoperability in Healthcare

Change Healthcare is one organization making strides to enable interoperability and help health organizations achieve healthcare's triple aim. Recently, Change Healthcare announced that it is providing free interoperability services that break down information silos to enhance patients' access to their medical records and support clinical decisions that influence patients' health and wellbeing.

While companies like Change Healthcare are creating services that better allow for interoperability, others like Fred Hutchinson Cancer Research Center and Beth Israel Deaconess Medical Center (BIDMC) are using AI and ML to further break down obstacles to quality care.

For example, Fred Hutch is using ML to help identify patients for clinical trials who may benefit from specific cancer therapies. By using ML to evaluate millions of clinical notes and to extract and index medical conditions, medications, and choices of cancer therapeutic options, Fred Hutch reduced the time to process each document from hours to seconds, meaning they could connect more patients to more potentially life-saving clinical trials.
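
The article does not say which tooling Fred Hutch used for this extraction, but a managed medical NLP service is one common route on AWS. A minimal sketch, assuming Amazon Comprehend Medical via boto3 with credentials already configured; the clinical note below is invented:

```python
# Sketch only: extract medical conditions and medications from a clinical note
# using Amazon Comprehend Medical (assumes AWS credentials and boto3 are set up;
# the article does not state which service Fred Hutch actually used).
import boto3

client = boto3.client("comprehendmedical")

note = "Patient with stage II NSCLC, currently on carboplatin, reports fatigue."

response = client.detect_entities_v2(Text=note)

# Keep only the entity categories relevant to trial matching.
for entity in response["Entities"]:
    if entity["Category"] in ("MEDICAL_CONDITION", "MEDICATION"):
        print(entity["Category"], entity["Text"], round(entity["Score"], 2))
```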

In addition, BIDMC is using AI and ML to ensure medical forms are completed when scheduling surgeries. By identifying incomplete forms or missing information, BIDMC can prevent delays in surgeries, ultimately enhancing the patient experience, improving hospital operations, and reducing costs.

An Opportunity to Transform The Industry

As technology creates more data across healthcare organizations, AI and ML will be essential to help take that data and create the shared structure and meaning necessary to achieve interoperability.

As an example, Cerner, a U.S. supplier of health information technology solutions, is deploying interoperability solutions that pull anonymized patient data together into longitudinal records that can be developed along with physician correlations. Coupled with other unstructured data, Cerner uses the data to power machine learning models and algorithms that help with earlier detection of congestive heart failure.

As healthcare organizations take the necessary steps toward syntactic and semantic interoperability, the industry will be able to use data to place a renewed focus on patient care. In practice, Philips' HealthSuite digital platform stores and analyzes 15 petabytes of patient data from 390 million imaging studies, medical records, and patient inputs, adding as much as one petabyte of new data each month.

With machine learning applied to this data, the company can identify at-risk patients, deliver definitive diagnoses and develop evidence-based treatment plans to drive meaningful patient results. That orchestration and execution of data is the definition of valuable, patient-focused care, and the future of what we see for interoperability driven by AI and ML in the United States. With access to the right information at the right time to inform the right care, health practitioners will have all the pieces of a patient's medical puzzle, and that will bring meaningful improvement not only in care decisions, but in patients' lives.

About Wilson To, Global Healthcare Business Development lead at AWS & Patrick Combes, Global Healthcare IT Lead at AWS

Wilson To is the Head of Worldwide Healthcare Business Development at Amazon Web Services (AWS), where he currently leads business development efforts across the AWS worldwide healthcare practice. To has led teams across startup and corporate environments, receiving international recognition for his work in global health efforts. He joined Amazon Web Services in October 2016 to lead product management and strategic initiatives.

Patrick Combes is the Worldwide Technical Leader for Healthcare & Life Sciences at Amazon Web Services (AWS), where he is responsible for AWS's worldwide technical strategy in Healthcare and Life Sciences (HCLS). Patrick helps develop and implement the strategic plan to engage customers and partners in the industry and leads the community of technically focused HCLS specialists within AWS.

See the original post here:
Leveraging AI and Machine Learning to Advance Interoperability in Healthcare - - HIT Consultant


Looking at the most significant benefits of machine learning for software testing – The Burn-In

Software development is a massive part of the tech industry that is set to stay. Its importance is elemental, supporting technology from the root. It is, unsurprisingly, a massive industry, with lots of investment and millions of jobs that help to propel technology on its way with great force. Software testing is one of the vital cogs in the software development machine; without it, faulty software would run amok, and developing and improving software products would be a much slower and far more inefficient process. Software testing as its own field has gone through several different phases, most recently landing upon the idea of using machine learning. Machine learning is elemental to artificial intelligence, and it is a method of freeing up the potential of computers by feeding them data. Effective machine learning can greatly improve software testing.

Let's take a look at how that is the case.

"As well as realizing the immense power of data over the last decade, we have also reached a point in our technological, even sociological, evolution in which we are producing more data than ever," proposes Carl Holding, software developer at Writinity and ResearchPapersUK. This is significant in relation to software testing. The more complex and widely adopted software becomes, the more data is generated about its use. Under traditional software testing conditions, that amount of data would actually be unhelpful, since it would overwhelm testers. Conversely, machine learning computers hoover up vast data sets as fuel for their analysis and their learning patterns. Not only do the new data conditions suit large machine learning computers, they are also precisely what makes those computers most successful.

Everyone makes mistakes, as the old saying goes. Except that's not true: machine learning computers don't. Machine learning goes hand in hand with automation, something which has become very important for all sorts of industries. "Not only does it save time, it also gets rid of the potential for human mistakes, which can be very damaging in software testing," notes Tiffany Lee, IT expert at DraftBeyond and LastMinuteWriting. It doesn't matter how proficient a human being is at this task; they will always slip up, especially under the increased pressure created by the volume of data that now comes in. A software test sullied by human error can actually be worse than no test at all, since misinformation is worse than no information. With that in mind, it's always better to just leave it to the machines.

Business has always been about getting ahead, regardless of the era or the nature of the products and services. Machine learning is often looked to as a way to predict the future by spotting trends in data and feeding those predictions to the companies that want them most. Software is by no means an exception. In fact, given that it sits within the tech sector, this matters even more to software development than to other industries. Using a machine learning computer for software testing can help to quickly identify the way things are shaping up for the future, which means that you get two functions out of your testing process for the price of one. This can give you an excellent competitive edge.

That machine learning computers save you time should be a fairly obvious point at this stage. Computers handle tasks that take humans hours in a matter of seconds. If you add the increased accuracy advantage over traditional methods, then you can see that using this method of testing will get better products out more quickly, which is a surefire way to start boosting your sales figures with ease.

Overall, it's a no-brainer. And, as machine learning computers become more affordable, you really have no reason to opt for any other method. It's a wonderful age for speed and accuracy in technology, and with the amount that is at stake in software development, you have to be prepared to think ahead.

See original here:
Looking at the most significant benefits of machine learning for software testing - The Burn-In


Seton Hall Announces New Courses in Text Mining and Machine Learning – Seton Hall University News & Events

Professor Manfred Minimair, Data Science, Seton Hall University

As part of its online M.S. in Data Science program, Seton Hall University in South Orange, New Jersey, has announced new courses in Text Mining and Machine Learning.

Seton Hall's master's program in Data Science is the first 100% online program of its kind in New Jersey and one of very few in the nation.

Quickly emerging as a critical field in a variety of industries, data science encompasses activities ranging from collecting raw data and processing and extracting knowledge from that data, to effectively communicating those findings to assist in decision making and implementing solutions. Data scientists have extensive knowledge in the overlapping realms of business needs, domain knowledge, analytics, and software and systems engineering.

"We're in the midst of a pivotal moment in history," said Professor Manfred Minimair, director of Seton Hall's Data Science program. "We've moved from being an agrarian society through to the industrial revolution and now squarely into the age of information," he noted. "The last decade has been witness to a veritable explosion in data informatics. Where once business could only look at dribs and drabs of customer and logistics dataas through a glass darklynow organizations can be easily blinded by the sheer volume of data available at any given moment. Data science gives students the tools necessary to collect and turn those oceans of data into clear and readily actionable information."

These tools will be provided by Seton Hall in new ways this spring, when Text Mining and Machine Learning make their debut.

Text Mining

Text Mining will be taught by Professor Nathan Kahl. Text mining is the process of extracting high-quality information from text, which is typically done by developing patterns and trends through means such as statistical pattern learning. Professor Kahl is an Associate Professor in the Department of Mathematics and Computer Science. He has extensive experience teaching data analytics at Seton Hall University. Some of his recent research lies in the area of network analysis, another important topic also taught in the M.S. program.
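
As a rough illustration of the kind of pattern extraction text mining involves (this is not the course's material), a few lines of scikit-learn can surface the terms that characterize each document in a small corpus:

```python
# Minimal text-mining sketch: rank terms by TF-IDF weight to surface what a
# document is "about". Illustrative only; not taken from the Seton Hall course.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Patient presents with edema and shortness of breath.",
    "Quarterly revenue grew on strong cloud-computing demand.",
    "New trial evaluates immunotherapy for lung cancer patients.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

for i, doc in enumerate(docs):
    weights = tfidf[i].toarray().ravel()
    top = weights.argsort()[-3:][::-1]          # three highest-weighted terms
    print(f"doc {i}:", [terms[j] for j in top])
```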

Professor Kahl notes, "The need for people with these skills in business, industry and government service has never been greater, and our curriculum is specifically designed to prepare our students for these careers." According to EAB (formerly known as the Education Advisory Board), the national growth in demand for data science practitioners over the last two years alone was 252%. According to Glassdoor, the median base salary for these jobs is $108,000.

Machine Learning

In many ways, machine learning represents the next wave in data science. It is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. The course will be taught by Sophine Clachar, a data engineer with more than 10 years of experience. Her past research has focused on aviation safety and large-scale and complex aviation data repositories at the University of North Dakota. She was also a recipient of the Airport Cooperative Research Program Graduate Research Award, which fostered the development of machine learning algorithms that identify anomalies in aircraft data.
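
The "patterns and inference instead of explicit instructions" idea can be shown in a toy sketch: rather than hand-writing rules, a model is fit to labeled examples and then generalizes to new inputs. The data below is synthetic and unrelated to the course:

```python
# Toy sketch of learning from examples instead of explicit rules:
# fit a classifier on labeled points and let it infer the decision boundary.
from sklearn.tree import DecisionTreeClassifier

# (feature_1, feature_2) -> label; purely synthetic data
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["low", "low", "high", "high"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0.15, 0.15], [0.85, 0.90]]))   # -> ['low' 'high']
```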

"Machine learning is profoundly changing our society," Professor Clachar remarks. "Software enhanced with artificial intelligence capabilities will benefit humans in many ways, for example, by helping design more efficient treatments for complex diseases and improve flight training to make air travel more secure."

Active Relationships with Google, Facebook, Celgene, Comcast, Chase, B&N and Amazon

Students in the Data Science program, with its strong focus on computer science, statistics and applied mathematics, learn skills in cloud computing technology and Tableau, which allows them to pursue certification in Amazon Web Services and Tableau. The material is continuously updated to deliver the latest skills in artificial intelligence/machine learning for automating data science tasks. Their education is bolstered by real world projects and internships, made possible through the program's active relationships with such leading companies as Google, Facebook, Celgene, Comcast, Chase, Barnes and Noble and Amazon. The program also fosters relationships with businesses and organizations through its advisory board, which includes members from WarnerMedia, Highstep Technologies, Snowflake Computing, Compass and Celgene. As a result, students are immersed in the knowledge and competencies required to become successful data science and analytics professionals.

"Among the members of our Advisory Board are Seton Hall graduates and leaders in the field," said Minimair. "Their expertise at the cutting edge of industry is reflected within our curriculum and coupled with the data science and academic expertise of our professors. That combination will allow our students to flourish in the world of data science and informatics."

Learn more about the M.S. in Data Science at Seton Hall

Read the rest here:
Seton Hall Announces New Courses in Text Mining and Machine Learning - Seton Hall University News & Events


Educate Yourself on Machine Learning at this Las Vegas Event – Small Business Trends

One of the biggest machine learning events, Machine Learning Week 2020, is taking place in Las Vegas just before summer.

This five-day event will have 5 conferences, 8 tracks, 10 workshops, 160 speakers, more than 150 sessions, and 800 attendees.

If there is anything you want to know about machine learning for your small business, this is the event. Keynote speakers will come from Google, Facebook, Lyft, GM, Comcast, WhatsApp, FedEx, and LinkedIn, to name just some of the companies that will be at the event.

The conferences will cover predictive analytics for business, financial services, healthcare, and industry, along with Deep Learning World.

Training workshops will cover topics such as big data and how it is changing business, a hands-on introduction to machine learning, hands-on deep learning, and much more.

Machine Learning Week will take place from May 31 to June 4, 2020, at Caesars Palace in Las Vegas.


This weekly listing of small business events, contests and awards is provided as a community service by Small Business Trends.

You can see a full list of events, contests and award listings or post your own events by visiting the Small Business Events Calendar.

Image: Depositphotos.com

See the original post:
Educate Yourself on Machine Learning at this Las Vegas Event - Small Business Trends


New Contest: Train All The Things – Hackaday

The old way was to write clever code that could handle every possible outcome. But what if you don't know exactly what your inputs will look like, or just need a faster route to the final results? The answer is Machine Learning, and we want you to give it a try during the Train All the Things contest!

It's hard to find a more buzz-worthy term than Artificial Intelligence. Right now, where the rubber hits the road in AI is Machine Learning, and it's never been so easy to get your feet wet in this realm.

From an 8-bit microcontroller to common single-board computers, you can do cool things like object recognition or color classification quite easily. Grab a beefier processor, a dedicated ASIC, or lean heavily into the power of the cloud and you can do much more, like facial identification and gesture recognition. But the sky's the limit. A big part of this contest is that we want everyone to get inspired by what you manage to pull off.

Wait, wait, come back here. Have we already scared you off? Don't read AI or ML and assume it's not for you. We've included a category for Artificial Intelligence Blinky, your first attempt at doing something cool.

Need something simple to get you excited? How about Machine Learning on an ATtiny85 to sort Skittles candy by color? That uses just one color sensor for a quick and easy way to harvest data that forms a training set. But you could also climb up the ladder just a bit and make yourself a camera-based LEGO sorter, or use an IMU in a magic wand to detect which spell you're casting. Need more scientific inspiration? We're hoping someday someone will build a training set that classifies microscope shots of micrometeorites. But we'd be equally excited by projects that tackle robot locomotion, natural language, and all the other wild ideas you can come up with.
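
For a sense of how little is needed for the Skittles example, a nearest-centroid rule over RGB readings is about as simple as candy-sorting machine learning gets. A hedged sketch in Python (the real project would run equivalent integer math in C on the ATtiny85; all readings below are invented):

```python
# Nearest-centroid color classifier, sketched in Python for clarity.
# The actual ATtiny85 build would do the same distance comparison in C with
# integer math; every RGB reading here is made up for illustration.
import math

# Mean sensor readings per candy color, gathered during a "training" pass.
CENTROIDS = {
    "red":    (180, 40, 45),
    "green":  (60, 160, 70),
    "yellow": (200, 190, 60),
    "purple": (90, 50, 120),
}

def classify(rgb):
    """Return the candy color whose centroid is closest to this reading."""
    return min(CENTROIDS, key=lambda color: math.dist(rgb, CENTROIDS[color]))

print(classify((175, 45, 50)))    # -> "red"
print(classify((95, 55, 115)))    # -> "purple"
```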

Our guess is you don't really need prizes to get excited about this one; most people have been itching for a reason to try out machine learning for quite some time. But we do have $100 Tindie gift certificates for the most interesting entry in each of the four contest categories: ML on the edge, ML on the gateway, AI blinky, and ML in the cloud.

Get started on your entry. The Train All The Things contest is sponsored by Digi-Key and runs until April 7th.

Continue reading here:
New Contest: Train All The Things - Hackaday


AFTAs 2019: Best New Technology Introduced Over the Last 12 Months (AI, Machine Learning and Analytics) – ActiveViam – www.waterstechnology.com

Following the global financial crisis, the banking industry has had to deal with more stringent risk capital requirements that demand agility, flexibility, speed, and ease of communication across traditionally siloed departments. Banks also needed a firm grasp of their enterprise-wide data, both to meet regulatory requirements and to ensure a return on capital. It is for this reason that Allen Whipple, co-founder and managing director at ActiveViam, says it makes sense for any regulatory solution to pivot from prescriptive to predictive analytics.

ActiveViam topped this category at this year's AFTAs due to its FRTB Accelerator, part of a suite of Accelerator products that it launched in the past year. The products contain all the source code and formulae to meet a particular set of regulations and/or business requirements. In this case, it was those needed for the standardized approach (SA) and the internal model approach (IMA) risk framework, which stems from the Basel Committee on Banking Supervision's Fundamental Review of the Trading Book (FRTB).

The FRTB Accelerator includes capabilities such as the capital decomposition tool, which provides clients with the ability to deploy capital across an organization more precisely. "This allows a client to take risk management a step further and perform predictive analysis, which can be applied to broader internal market risk scenarios," Whipple explains. He adds that banks can perform limit-monitoring and back-testing, which allows them to stay within the scope of their IMA status.

Looking ahead, ActiveViam will add a product for Python notebooks to facilitate data science work, reducing the time it takes to move from data to insight. Quants will no longer need to switch between notebooks, data visualization tools, and end-user business intelligence applications. Using the ActiveViam Python Library, they will be able to create dashboards and share them within the same environment. "Coders can do everything in Jupyter, or a Python notebook of their choice, from beginning to end," Whipple says.


Read the original here:
AFTAs 2019: Best New Technology Introduced Over the Last 12 Months (AI, Machine Learning and Analytics) - ActiveViam - http://www.waterstechnology.com


AI-System Flags the Under-Vaccinated in Israel – PrecisionVaccinations

An Israeli-based provider of machine learning-based solutions announced that its flu complications algorithm has been selected as part of the Israeli healthcare organization's integrated strategy to enhance its vaccination campaign.

This innovative machine-learning (AI) program from Medial EarlySign is designed to facilitate more effective and targeted outreach to people in need of disease protection.

EarlySign's software applies advanced algorithms to ordinary patient data, collected over the course of routine care.

In a press release on January 14, 2020, EarlySign said its algorithm can flag individuals at high risk for developing flu-related complications and is being used as part of a clinical study undertaken by Maccabi Healthcare Services and EarlySign.

Varda Shalev, M.D., MPH, director of the KSM Kahn-Sagol-Maccabi Research and Innovation Institute, founded by Maccabi Healthcare Services, said in the press release, "Due to the late arrival of influenza vaccines in Israel this year, the time we have to vaccinate patients this flu season, especially those at high risk for developing flu-related complications, is much shorter than usual."

The influenza identification algorithm uses EMR-generated data to identify and stratify unvaccinated individuals at high risk of developing flu-related complications, which often require hospitalization.
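
EarlySign has not published the model details referenced here, but risk stratification over EMR-derived features is typically framed as a supervised classification problem. A purely illustrative sketch with invented features and data, not EarlySign's algorithm:

```python
# Illustrative only: stratify patients by predicted risk of flu complications.
# Feature names and values are invented; this is not EarlySign's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical EMR-derived features: [age, chronic_conditions, prior_admissions]
X_train = np.array([[72, 3, 2], [34, 0, 0], [81, 4, 3], [45, 1, 0], [67, 2, 1]])
y_train = np.array([1, 0, 1, 0, 1])          # 1 = had a flu-related complication

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score unvaccinated patients and rank them for outreach, highest risk first.
unvaccinated = np.array([[78, 3, 1], [29, 0, 0]])
risk = model.predict_proba(unvaccinated)[:, 1]

for patient, score in sorted(zip(["patient_A", "patient_B"], risk),
                             key=lambda pair: -pair[1]):
    print(patient, round(score, 2))
```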

The timing matters, as many areas in the Northern Hemisphere have reported increasing rates of influenza infection, according to Global Flu Update #359, published by the World Health Organization (WHO).

On January 20, 2020, the WHO reported seasonal influenza A(H3N2) viruses have accounted for the majority of detections around the Northern Hemisphere this flu season.

Maccabi's clinical study using EarlySign's flu complications algorithm supports the Israeli HMO's commitment to investigating and implementing machine learning-based solutions to improve the health of populations.

The program follows EarlySign's collaboration with Geisinger Health System in December 2019 to apply advanced artificial intelligence and machine learning algorithms to Medicare claims data to predict and improve patient outcomes.

"Approximately 4.3 million hospital readmissions occur each year in the U.S., costing more than $60 billion, with preventable adverse patient events creating additional clinical and financial burdens for both patients and healthcare systems," said David Vawdrey, Geisinger's chief data informatics officer, in a related press release.

The AI vendor and the Danville, Pennsylvania-based healthcare provider intend to develop models that predict unplanned hospital and skilled nursing facility admissions within 30 days of discharge, and adverse events such as respiratory failure, postoperative pulmonary embolism or deep vein thrombosis, as well as postoperative sepsis, before they occur.

Maccabi Healthcare Services is Israel's second-largest HMO, covering approximately 2.3 million patients and operating five regional centers, with hundreds of branches and clinics throughout Israel.

Medial EarlySign enables healthcare providers to identify risks for critical threats, leading to potentially life-changing diagnoses for millions of patients every single day.

Machine-learning (AI) program news published by Precision Vaccinations.

Read more:
AI-System Flags the Under-Vaccinated in Israel - PrecisionVaccinations


Google claims to have invented a quantum computer, but IBM begs to differ – The Conversation CA

On Oct. 23, 2019, Google published a paper in the journal Nature entitled "Quantum supremacy using a programmable superconducting processor." The tech giant announced its achievement of a much-vaunted goal: quantum supremacy.

This perhaps ill-chosen term (coined by physicist John Preskill) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.

Google's benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration).

Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit, or qbit, can store information in a coherent superposition state, which can be considered to contain fractional amounts of both 0 and 1.
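
In the standard notation, those "fractional amounts" are complex amplitudes; an n-qbit register needs 2^n such amplitudes to describe classically (2^53, or roughly 9 x 10^15, for the 53 qbits here), which is what makes classical simulation so costly:

```latex
% State of a single qbit: a superposition of the two basis states.
% |alpha|^2 is the probability of reading 0, |beta|^2 of reading 1.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\qquad \alpha,\beta \in \mathbb{C}
```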

Sycamore uses technology developed by the superconductivity research group of physicist John Martinis at the University of California, Santa Barbara. The entire Sycamore system must be kept cold at cryogenic temperatures using special helium dilution refrigeration technology. Because of the immense challenge involved in keeping such a large system near the absolute zero of temperature, it is a technological tour de force.

The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than what a classical computer chip, like the kind in our laptops, could achieve. Just how vastly became a point of contention, and the story was not without intrigue.

An inadvertent leak of the Google group's paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when Nature prohibits discussion by the authors of as-yet-unpublished papers. The lapse was momentary, but long enough that The Financial Times, The Verge and other outlets picked up the story.

A well-known quantum computing blog by computer scientist Scott Aaronson contained some oblique references to the leak. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.

The story had a further controversial twist when the Google group's claims were immediately countered by IBM's quantum computing group. IBM shared a preprint posted on the arXiv (an online repository for academic papers that have yet to go through peer review) and a blog post dated Oct. 21, 2019 (note the date!).

While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qbit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days.

However, the IBM classical computation would have to be carried out on the world's fastest supercomputer, the IBM-developed Summit OLCF-4 at Oak Ridge National Labs in Tennessee, with clever use of secondary storage to achieve this benchmark.
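
As a back-of-the-envelope comparison of those claims: the article gives Google's 10,000-year estimate and Sycamore's 200 seconds exactly, but only says IBM's method would take "a matter of days," so the 2.5-day figure below is an assumption for illustration.

```python
# Rough ratios of the claimed simulation times against Sycamore's 200 seconds.
# "A matter of days" is assumed to be ~2.5 days here; the article does not give
# IBM's exact estimate.
SECONDS_PER_YEAR = 365 * 24 * 3600
sycamore = 200                                          # seconds (Google's claim)

google_classical_estimate = 10_000 * SECONDS_PER_YEAR   # Google: 10,000 years
ibm_classical_estimate = 2.5 * 24 * 3600                # assumed ~2.5 days

print(f"Google's estimate: ~{google_classical_estimate / sycamore:.1e}x slower")
print(f"IBM's method:      ~{ibm_classical_estimate / sycamore:.0f}x slower")
# Either way, the 53-qbit chip finishes the sampling task orders of magnitude faster.
```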

While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum.

For the average citizen, the mere fact that a 53-qbit device could beat the world's fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.

The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exists, including ion traps, superconducting device arrays similar to those in Google's Sycamore system, and isolated electrons trapped in NV-centres in diamond.

These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.

What has lagged quite a bit behind are custom-designed algorithms (computer programs) designed to run on quantum computers and able to take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist (Shor's algorithm for factorization, for example, which has applications in cryptography, and Grover's algorithm, which might prove useful in database search applications), the total set of quantum algorithms remains rather small.
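
For context on the speed-ups these two algorithms offer, the standard (informal) complexity statements are roughly as follows; Grover's gain is quadratic, while Shor's is superpolynomial relative to the best known classical factoring method:

```latex
% Informal statement of the speed-ups mentioned above.
% Unstructured search over N items:
\text{classical search: } O(N)
\quad\longrightarrow\quad
\text{Grover: } O(\sqrt{N})

% Factoring an n-bit integer (best known classical method vs. Shor):
\text{number field sieve: } \exp\!\left(O\!\left(n^{1/3}(\log n)^{2/3}\right)\right)
\quad\longrightarrow\quad
\text{Shor: } O(n^3)
```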

Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions, ranging from confidential communications to financial transactions, require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption.

Quantum computing could be very disruptive in this space, as Shor's algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers.

The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.

More appealing for the non-espionage and non-hacker communities (in other words, the rest of us) are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers.

Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, such as quantum chemistry problems that could be relevant for drug design, and various challenges in condensed matter physics, including a number related to high-temperature superconductivity.

So where are we in the wonderful and wild world of quantum computation?

In recent years, we have had many convincing demonstrations that qbits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices.


Read the original:
Google claims to have invented a quantum computer, but IBM begs to differ - The Conversation CA


Xanadu Receives $4.4M Investment from SDTC to Advance its Photonic Quantum Computing Technology – Quantaneo, the Quantum Computing Source

Xanadu, a Canadian quantum hardware and technology company has received a $4.4M investment from Sustainable Development Technology Canada (SDTC). The investment will expedite the development of Xanadu's photonic quantum computers and make them available over the cloud. This project will also further the company's overall progress towards the construction of energy-efficient universal quantum computers.

"Canadian cleantech entrepreneurs are tackling problems across Canada and in every sector. I have never been more positive about the future. The quantum hardware technology that Xanadu is building will develop quantum computers with the ability to solve extremely challenging computational problems, completing chemical calculations in minutes which would otherwise require a million CPUs in a data center," said Leah Lawrence, President and CEO, Sustainable Development Technology Canada.

Despite efforts to improve the power efficiency of traditional computing methods, the rapid growth of data centres and cloud computing presents a major source of new electricity consumption. In comparison to classical computing, quantum computing systems have the benefit of performing certain tasks and algorithms at an unprecedented rate. This will ultimately reduce the requirements for electrical power and the accompanying air and water emissions associated with electricity production.

Xanadu is developing a unique type of quantum computer, based on photonic technology, which is inherently more power-efficient than electronics. Xanadu's photonic approach uses laser light to carry information through optical chips, rather than the electrons or ions used by their competitors. By using photonic technology, Xanadu's quantum computers will one day have the ability to perform calculations at room temperature, and eliminate the bulky and power-hungry cooling systems required by most other types of quantum computers.

The project will be undertaken by Xanadu's team of in-house scientists, with collaboration from the University of Toronto and Swiftride. The project will be carried out over three years and will encompass the development of Xanadu's architecture, hardware, software and client interfaces with the overall goal of expediting the development of the company's technology, and demonstrating the practical benefits of quantum computing for users and customers by the end of 2022.

"We are thrilled by the recognition and support that we are receiving from SDTC for the development of our technology. We firmly believe that our unique, photonic-based approach to quantum computing will deliver both valuable insights and tangible environmental benefits for our customers and partners," said Christian Weedbrook, CEO of Xanadu.

Excerpt from:
Xanadu Receives $4.4M Investment from SDTC to Advance its Photonic Quantum Computing Technology - Quantaneo, the Quantum Computing Source


Meet the Living Robot; PigeonBot With Real Feathers; DeepMind Introduces AlphaFold; PyTorch 1.4 Released – Synced

Subscribe to Synced Global AI Weekly

Meet Xenobot, an Eerie New Kind of Programmable Organism
Researchers hope the living robots, made up of masses of cells working in coordination, can help unlock the mysteries of cellular communication. (WIRED)

PigeonBot Uses Real Feathers to Explore How Birds Fly
In a paper published in Science Robotics, researchers at Stanford University have presented some new work on understanding exactly how birds maintain control by morphing the shape of their wings. (Stanford University) / (IEEE SPECTRUM)

AlphaFold: Using AI for Scientific Discovery
AlphaFold, described in peer-reviewed papers now published in Nature and PROTEINS, is the culmination of several years of work, and builds on decades of prior research using large genomic datasets to predict protein structure. (DeepMind)

PyTorch 1.4 Released, Domain Libraries Updated
The 1.4 release of PyTorch adds new capabilities, including the ability to do fine-grain build-level customization for PyTorch Mobile, and new experimental features including support for model-parallel training and Java language bindings. (PyTorch)

ImagineNet: Restyling Apps Using Neural Style Transfer
Researchers propose a neural solution by adding a new loss term to the original formulation, which minimizes the squared error in the uncentered cross-covariance of features from different levels in a CNN between the style and output images. (Stanford University)

Using Neural Networks to Solve Advanced Mathematics Equations
Facebook AI has built the first AI system that can solve advanced mathematics equations using symbolic reasoning. Researchers leverage proven techniques in neural machine translation (NMT), training models to essentially translate problems into solutions. (Facebook AI)

Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples
Researchers present a preliminary design for an interactive visualization tool, CEB, to reveal biases in a commonly used AI method, Neural Networks (NN). CEB combines counterfactual examples and abstraction of an NN decision process to empower non-experts to detect bias. (Drexel University & IT University of Copenhagen)

AWS Introduces Open Source AutoML Toolkit AutoGluon
AutoGluon is designed to be an easy-to-use and easy-to-extend AutoML toolkit, suitable for both machine learning beginners and experts. It enables prototyping deep learning models with a few lines of code; automatic hyperparameter tuning, model selection and data processing; and automatic utilization of SOTA deep learning models. (Synced)
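
The "few lines" claim translates roughly into the sketch below. This is hedged: AutoGluon's tabular API has changed across releases, so the module and method names shown reflect the current tabular interface and may differ from the version announced here.

```python
# Hedged sketch of AutoGluon's tabular AutoML workflow; treat this as the shape
# of the API rather than version-exact code, and point it at your own CSV files.
from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("train.csv")          # any labeled tabular dataset
test = TabularDataset("test.csv")

# fit() handles preprocessing, model selection, and hyperparameter tuning.
predictor = TabularPredictor(label="class").fit(train)

predictions = predictor.predict(test)
print(predictor.leaderboard(test))           # compare the models AutoGluon trained
```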

EmotionCues: AI Knows Whether Students Are Paying Attention
A research team from the Hong Kong University of Science and Technology and Harbin Engineering University has adopted facial recognition technology to analyze students' emotions in the classroom through a visual analytics system called EmotionCues. (Synced)

Share My Research
Share My Research is Synced's new column that welcomes scholars to share their own research breakthroughs with global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.

February 7-12: AAAI 2020 in New York, United States

February 24-27: Mobile World Congress in Barcelona, Spain

March 23-26: GPU Technology Conference (GTC) in San Jose, United States

April 26-30: ICLR 2020 in Addis Ababa, Ethiopia

Twitter is Hiring Engineering Manager, ML

Alan Turing Institute Safe and Ethical AI Research Fellow/Fellow

OpenAI Scholars Spring 2020

DeepMind Internship Program

NVIDIA Graduate Fellowships

DeepMind Scholarship: Access to Science

LANDING AI is Recruiting

Stanford HAI is Recruiting

OpenAI Seeking Software Engineers and Deep Learning Researchers

DeepMind is Recruiting

Stay tight with AI! Subscribe to Synced Global AI Weekly


See the rest here:
Meet the Living Robot; PigeonBot With Real Feathers; DeepMind Introduces AlphaFold; PyTorch 1.4 Released - Synced
