This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.
Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and to other information you know about it (habitat, size, diet, lifespan, etc.). But if, like me, you've never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.
(source: Wikipedia)
Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting through the flood of information we receive through our senses and associating known items with their respective categories.
That picture, by the way, is of an Indian civet, an endangered species that has nothing to do with cats, dogs, or rodents. It belongs in its own separate category (the viverrids). There you go: you now have a new bucket to place civets in, including this variant that was sighted recently in India.
While we still have much to learn about how the mind works, we are in the midst (or perhaps still at the beginning) of an era of creating our own version of the human brain. After decades of research and development, researchers have managed to create deep neural networks that sometimes match or surpass human performance in specific tasks.
But one of the recurring themes in discussions about artificial intelligence is whether artificial neural networks used in deep learning work similarly to the biological neural networks of our brains. Many scientists agree that artificial neural networks are a very rough imitation of the brain's structure, and some believe that ANNs are statistical inference engines that do not mirror the many functions of the brain. The brain, they believe, contains many wonders that go beyond the mere connection of biological neurons.
A paper recently published in the peer-reviewed journal Neuron challenges the conventional view of the functions of the human brain. Titled "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks," the paper argues that, contrary to the beliefs of many scientists, the human brain is a brute-force, big-data processor that fits its parameters to the many examples it experiences. That's the kind of description usually given to deep neural networks.
Authored by researchers at Princeton University, the thought-provoking paper provides a different perspective on neural networks, analogies between ANNs and their biological counterparts, and future directions for creating more capable artificial intelligence systems.
Neuroscientists generally believe that the complex functionalities of the brain can be broken down into simple, interpretable models.
For instance, I can explain the complex mental process behind my analysis of the civet picture (before I knew its name, of course) as follows: it's definitely not a bird, because it doesn't have feathers or wings. And it certainly isn't a fish. It's probably a mammal, given the furry coat. It could be a cat, given the pointy ears, but the neck is a bit too long and the body shape a bit weird. The snout is a bit rodent-like, but the legs are longer than those of most rodents... and finally I would come to the conclusion that it's probably an esoteric species of cat. (In my defense, it is a very distant relative of felines, if you insist.)
Artificial neural networks, however, are often dismissed as uninterpretable black boxes. They do not provide rich explanations of their decision process. This is especially true of complex deep neural networks composed of hundreds (or thousands) of layers and millions (or billions) of parameters.
During their training phase, deep neural networks review millions of images and their associated labels, and then mindlessly tune their millions of parameters to the patterns they extract from those images. These tuned parameters then allow them to determine which class a new image belongs to. They don't understand the higher-level concepts I just mentioned (neck, ears, nose, legs, etc.) and only look for consistencies among the pixels of an image.
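To make that blind parameter tuning concrete, here is a minimal sketch of a supervised training loop in PyTorch. The tiny model, the random stand-in data, and the hyperparameters are illustrative assumptions rather than the setup of any system discussed in the paper; real image classifiers follow the same loop at a vastly larger scale.

```python
# Minimal sketch of supervised "direct fitting": a small classifier adjusts its
# parameters to reduce a loss over labeled examples. Random tensors stand in for
# real images and labels; only the overall pattern matters here.
import torch
import torch.nn as nn

# Illustrative stand-in data: 256 flattened "images" of 3x32x32 pixels, 10 classes.
images = torch.randn(256, 3 * 32 * 32)
labels = torch.randint(0, 10, (256,))

# A small fully connected network; real vision models have millions of parameters.
model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)          # forward pass: pixel values in, class scores out
    loss = loss_fn(logits, labels)  # how far the scores are from the labels
    loss.backward()                 # gradients with respect to every parameter
    optimizer.step()                # nudge the parameters toward a better fit
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

Nothing in the loop refers to necks, ears, or legs; the optimizer simply moves parameters in whatever direction reduces the loss on pixel patterns.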
The authors of Direct Fit to Nature acknowledge that neural networks, both biological and artificial, can differ considerably in their circuit architecture, learning rules, and objective functions.
"All networks, however, use an iterative optimization process to pursue an objective, given their input or environment, a process we refer to as direct fit," the researchers write. The term "direct fit" is inspired by the blind fitting process observed in evolution, an elegant but mindless optimization process in which different organisms adapt to their environment through a series of random genetic transformations carried out over very long periods.
"This framework undercuts the assumptions of traditional experimental approaches and makes unexpected contact with long-standing debates in developmental and ecological psychology," the authors write.
Another problem the artificial intelligence community faces is the tradeoff between interpretability and generalization. Scientists and researchers are constantly searching for new techniques and structures that can generalize AI capabilities across broader domains. And experience has shown that, when it comes to artificial neural networks, scale improves generalization. Advances in processing hardware and the availability of large compute resources have enabled researchers to create and train very large neural networks in reasonable timeframes. These networks have proven to be remarkably better at performing complex tasks such as computer vision and natural language processing.
The problem with artificial neural networks, however, is that the larger they get, the more opaque they become. With their logic spread across millions of parameters, they are much harder to interpret than a simple regression model that assigns a single coefficient to each feature. Simplifying the structure of artificial neural networks (e.g., reducing the number of layers or variables) makes it easier to interpret how they map input features to outcomes. But simpler models are also less capable of dealing with the complex and messy data found in nature.
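A short sketch, with arbitrary assumed feature counts and layer sizes, makes the contrast concrete: a linear model exposes one readable coefficient per feature, while even a small multi-layer network spreads its behavior across thousands of parameters with no single number to point to.

```python
# A linear model assigns one interpretable coefficient to each input feature;
# a multi-layer network distributes its "logic" across far more parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 samples, 5 features
true_w = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Ordinary least squares: one coefficient per feature, directly readable.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("per-feature coefficients:", np.round(w, 2))

# Parameter count of a small illustrative MLP (5 -> 64 -> 64 -> 1), weights + biases.
layers = [5, 64, 64, 1]
n_params = sum(a * b + b for a, b in zip(layers[:-1], layers[1:]))
print("MLP parameters:", n_params)  # thousands of entangled parameters, no single coefficient to inspect
```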
"We argue that neural computation is grounded in brute-force direct fitting, which relies on over-parameterized optimization algorithms to increase predictive power (generalization) without explicitly modeling the underlying generative structure of the world," the authors of Direct Fit to Nature write.
Say you want to create an AI system that detects chairs in images and videos. Ideally, you would provide the algorithm with a few images of chairs, and it would be able to detect all kinds of chairs, the ordinary as well as the wacky and funky ones.
Are these chairs?
This is one of the long-sought goals of artificial intelligence: creating models that can extrapolate well. That means that, given a few examples of a problem domain, the model should be able to extract the underlying rules and apply them to a vast range of novel examples it hasn't seen before.
When dealing with simple (mostly artificial) problem domains, it might be possible to reach this level of extrapolation by tuning a deep neural network on a small set of training data. For instance, such generalization might be achievable in domains with limited features, such as sales forecasting and inventory management. (But as we've seen in these pages, even these simple AI models can fall apart when their environment changes in a fundamental way.)
But when it comes to messy and unstructured data such as images and text, small-data approaches tend to fail. In images, every pixel effectively becomes a variable, so analyzing a set of 100×100-pixel images becomes a problem with 10,000 dimensions, each of which can take on thousands or millions of values.
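The arithmetic is easy to check; the snippet below assumes grayscale images, with a color image tripling the count.

```python
# Each pixel of a flattened image becomes one input dimension.
import numpy as np

image = np.zeros((100, 100))         # a 100x100 grayscale image
x = image.reshape(-1)                # flatten to a feature vector
print(x.shape)                       # (10000,) -> 10,000 dimensions
print(np.zeros((100, 100, 3)).size)  # 30,000 dimensions for an RGB image
```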
"In cases in which there are complex nonlinearities and interactions among variables at different parts of the parameter space, extrapolation from such limited data is bound to fail," the Princeton researchers write.
The human brain, many cognitive scientists believe, can rely on implicit generative rules without being exposed to rich data from the environment. Artificial neural networks, the popular belief goes, have no such capability. This is the belief that the authors of Direct Fit to Nature challenge.
"Dense sampling of the problem space can flip the problem of prediction on its head, turning an extrapolation-based problem into an interpolation-based problem," the researchers note.
In essence, with enough samples you can cover a large enough area of the problem domain to interpolate between samples with simple computations, without needing to extract abstract rules to predict the outcomes of situations that fall outside the range of the training examples.
"When the data structure is complex and multidimensional, a mindless direct-fit model, capable of interpolation-based prediction within a real-world parameter space, is preferable to a traditional ideal-fit explicit model that fails to explain much variance in the data," the authors of Direct Fit to Nature write.
Extrapolation (left) tries to extract rules from big data and apply them to the entire problem space. Interpolation (right) relies on rich sampling of the problem space to calculate the spaces between samples.
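A toy experiment reproduces this distinction. In the sketch below (the target function, model class, and sample ranges are illustrative choices of mine, not the paper's setup), a flexible model fit on densely sampled data predicts well between its training points and fails badly outside them.

```python
# Toy demonstration: dense sampling makes interpolation easy, extrapolation hard.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(0, 2 * np.pi, 200))   # dense samples on [0, 2*pi]
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)         # a flexible polynomial fit

x_interp = np.linspace(0.5, 5.5, 50)                 # inside the sampled range
x_extrap = np.linspace(2 * np.pi, 3 * np.pi, 50)     # outside it

err_interp = np.abs(np.polyval(coeffs, x_interp) - np.sin(x_interp)).mean()
err_extrap = np.abs(np.polyval(coeffs, x_extrap) - np.sin(x_extrap)).mean()
print(f"mean error inside the training range:  {err_interp:.4f}")
print(f"mean error outside the training range: {err_extrap:.4f}")
```

With dense enough sampling, prediction inside the covered region needs no knowledge of the underlying sine rule; outside that region, the fitted model has nothing to lean on.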
In tandem with advances in computing hardware, the availability of very large data sets has enabled the creation of direct-fit artificial neural networks in the past decade. The internet is rich with all sorts of data from various domains. Scientists create vast deep learning data sets from Wikipedia, social media networks, image repositories, and more. The advent of the internet of things (IoT) has also enabled rich sampling from physical environments (roads, buildings, weather, bodies, etc.).
In many types of applications (namely, supervised learning), the gathered data still requires a lot of manual labor to associate each sample with its outcome. Nonetheless, the availability of big data has made it possible to apply the direct-fit approach to complex domains that can't be represented with a few samples and general rules.
One argument against this approach is the long-tail problem, often described in terms of edge cases. In image classification, for instance, popular training data sets such as ImageNet provide millions of pictures of different types of objects. But since most of those pictures were taken under ideal lighting conditions and from conventional angles, deep neural networks trained on them fail to recognize the same objects in rare positions.
ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)
"The long tail does not pertain to new examples per se, but to low-frequency or odd examples (e.g. a strange view of a chair, or a chair shaped like an unrelated object) or riding in a new context (like driving in a blizzard or with a flat tire)," paper co-authors Uri Hasson, a professor at the Department of Psychology and the Princeton Neuroscience Institute, and Sam Nastase, a postdoctoral researcher at the Princeton Neuroscience Institute, told TechTalks in written comments. "Note that biological organisms, including people, like ANNs, are bad at extrapolating to contexts they never experienced; e.g. many people fail spectacularly when driving in snow for the first time."
Many developers try to make their deep learning models more robust by blindly adding more samples to the training data set, hoping to cover all possible situations. This usually doesn't solve the problem, because the sampling techniques don't widen the distribution of the data set, and edge cases remain uncovered by the easily collected samples. The solution, Hasson and Nastase argue, is to expand the interpolation zone by providing a more ecological, embodied sampling regime for artificial neural networks that currently perform poorly in the tail of the distribution.
"For example, many of the oddities in classical human visual psychophysics are trivially resolved by allowing the observer to simply move and actively sample the environment (something essentially all biological organisms do)," they say. "That is, the long-tail phenomenon is in part a sampling deficiency. However, the solution isn't necessarily just more samples (which will in large part come from the body of the distribution), but will instead require more sophisticated sampling observed in biological organisms (e.g. novelty seeking)."
This observation is in line with recent research showing that a more diverse sampling methodology can in fact improve the performance of computer vision systems.
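The sampling argument can be illustrated with a toy sketch; the distributions and the edge-case threshold below are invented for illustration and are not the authors' method. Simply collecting more data from the same narrow regime barely improves coverage of rare cases, while a deliberately broader sampling regime reaches them with far fewer samples.

```python
# Blindly adding samples from the "body" of a distribution leaves the tail uncovered;
# a deliberately broader sampling regime reaches rare cases with far fewer samples.
import numpy as np

rng = np.random.default_rng(2)
rare = lambda x: np.abs(x) > 3                   # treat the distribution's tail as "edge cases"

body_small = rng.normal(size=10_000)             # a typical collection regime
body_large = rng.normal(size=1_000_000)          # just "more of the same"
broad = rng.uniform(-5, 5, size=10_000)          # a wider, more deliberate sampling regime

for name, s in [("10k typical", body_small), ("1M typical", body_large), ("10k broad", broad)]:
    print(f"{name}: {rare(s).mean():.4%} of samples are edge cases")
```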
In fact, the need to sample from the long tail also applies to the human brain. Consider, for instance, one of the oft-mentioned criticisms of self-driving cars: that their abilities are limited to the environments they've been trained in.
"Even the most experienced drivers can find themselves in a new context where they are not sure how to act. The point is to not train a foolproof car, but a self-driving car that can drive, like humans, in 99 percent of the contexts. Given the diversity of driving contexts, this is not easy, but perhaps doable," Hasson and Nastase say. "We often overestimate the generalization capacity of biological neural networks, including humans. But most biological neural networks are fairly brittle; consider for example that raising ocean temperatures 2 degrees will wreak havoc on entire ecosystems."
Many scientists criticize AI systems that rely on very large neural networks, arguing that the human brain is far more resource-efficient. The brain is a three-pound mass of matter that runs on a little over 10 watts of power. Deep neural networks, by contrast, often require very large servers that can consume megawatts of power.
But hardware aside, comparing the components of the brain to artificial neural networks paints a different picture. The largest deep neural networks are composed of a few billion parameters. The human brain, in contrast, contains approximately 1,000 trillion synapses, the rough biological equivalent of ANN parameters. Moreover, the brain is a highly parallel system, which makes it very hard to compare its functionality to that of ANNs.
"Although the brain is certainly subject to wiring and metabolic constraints, we should not commit to an argument for scarcity of computational resources as long as we poorly understand the computational machinery in question," the Princeton researchers write in their paper.
Another argument is that, in contrast to ANNs, the biological neural network of the human brain has very limited input bandwidth and lacks the capacity to ingest and process very large amounts of data. This, the argument goes, forces the brain to learn new tasks by extracting the underlying rules rather than by brute-force fitting.
"To be fair, calculating the input entering the brain is complicated. But we often underestimate the huge amount of data that we process. For example, we may be exposed to thousands of visual exemplars of many daily categories a year, and each category may be sampled at thousands of views in each encounter, resulting in a rich training set for the visual system. Similarly, with regard to language, studies estimate that a child is exposed to several million words per year," the authors of the paper write.
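A back-of-the-envelope calculation shows how quickly that exposure adds up. The numbers below are illustrative assumptions of mine, not estimates taken from the paper.

```python
# Rough, illustrative arithmetic for the visual system's yearly "training data":
# everyday object categories, encountered repeatedly, each seen from many views.
categories_per_day = 100        # distinct everyday categories seen in a day (assumption)
encounters_per_category = 5     # encounters with each category per day (assumption)
views_per_encounter = 50        # distinct views (angles, lighting) per encounter (assumption)

samples_per_year = categories_per_day * encounters_per_category * views_per_encounter * 365
print(f"~{samples_per_year:,} views per year")  # ~9,125,000 under these assumptions
```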
One thing that can't be denied, however, is that humans do in fact extract rules from their environment and develop abstract thoughts and concepts that they use to process and analyze new information. This complex symbol manipulation enables humans to compare and draw analogies between different tasks and perform efficient transfer learning. Understanding and applying causality remain among the unique features of the human brain.
"It is certainly the case that humans can learn abstract rules and extrapolate to new contexts in a way that exceeds modern ANNs. Calculus is perhaps the best example of learning to apply rules across different contexts. Discovering natural laws in physics is another example, where you learn a very general rule from a set of limited observations," Hasson and Nastase say.
These are the kinds of capabilities that emerge not from the activations and interactions of a single neural network but from knowledge accumulated across many minds and generations.
This is one area where direct-fit models fall short, Hasson and Nastase acknowledge. In cognitive science, the distinction is known as System 1 and System 2 thinking. System 1 refers to the kinds of tasks that can be learned by rote, such as recognizing faces, walking, running, and driving. You can perform most of these tasks subconsciously, while also doing something else (walking while talking to someone, driving while listening to the radio). System 2, however, requires concentration and conscious thought (can you solve a differential equation while jogging?).
"In the paper, we distinguish fast and automatic System 1 capacities from the slow and deliberate cognitive functions," Hasson and Nastase say. "While direct fit allows the brain to be competent while being blind to the solution it learned (similar to all evolved functional solutions in biology), and while it explains the ability of System 1 to learn to perceive and act across many contexts, it still doesn't fully explain a subset of human functions attributed to System 2, which seems to gain some explicit understanding of the underlying structure of the world."
So what do we need to develop AI algorithms that have System 2 capabilities? This is one area where there's much debate in the research community. Some scientists, including deep learning pioneer Yoshua Bengio, believe that pure neural network-based systems will eventually lead to System 2 level AI. New research in the field shows that advanced neural network structures manifest the kind of symbol manipulation capabilities that were previously thought to be off-limits for deep learning.
In Direct Fit to Nature, the authors support the pure neural network-based approach. In their paper, they write: "Although the human mind inspires us to touch the stars, it is grounded in the mindless billions of direct-fit parameters of System 1. Therefore, direct-fit interpolation is not the end goal but rather the starting point for understanding the architecture of higher-order cognition. There is no other substrate from which System 2 could arise."
An alternative view is the creation of hybrid systems that incorporate classic symbolic AI with neural networks. The area has drawn much attention in the past year, and there are several projects that show that rule-based AI and neural networks can complement each other to create systems that are stronger than the sum of their parts.
"Although non-neural symbolic computing, in the vein of von Neumann's model of a control unit and arithmetic logic units, is useful in its own right and may be relevant at some level of description, the human System 2 is a product of biological evolution and emerges from neural networks," Hasson and Nastase wrote in their comments to TechTalks.
In their paper, Hasson and Nastase expand on some of the components that might give rise to higher capabilities in neural networks. One interesting suggestion is giving neural networks a physical body so that they can experience and explore the world like other living beings.
"Integrating a network into a body that allows it to interact with objects in the world is necessary for facilitating learning in new environments," Hasson and Nastase said. "Asking a language model to learn the meaning of words from the adjacent words in text corpora exposes the network to a highly restrictive and narrow context. If the network has a body and can interact with objects and people in a way that relates to the words, it is likely to get a better sense of the meaning of words in context. Counterintuitively, imposing these sorts of limitations (e.g. a body) on a neural network can force the neural network to learn more useful representations."
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.