October 11, 2023
Computer vision refers to the technological goal of giving computers something akin to human vision, an information-rich and intuitive sensor, enabling applications such as assembly line inspection, security systems, driver assistance and robotics.
Unfortunately, computers lack the ability to intuit vision and imagery like humans. Instead, we must give computers algorithms to solve domain-specific tasks.
We often take for granted how our biological vision interprets our surroundings, from glancing in the refrigerator to check food expiration dates to watching intently for a traffic light to turn green.
Computer vision dates to the 1960s and was initially used for tasks like reading text from a page (optical character recognition) and recognizing simple shapes such as circles or rectangles. Computer vision has since become one of the core domains of artificial intelligence (AI), which encompasses any computer system attempting to perceive, synthesize or infer some deeper meaning from data. There are three types of computer vision: conventional or rules-based, classical machine learning, and deep learning.
In this article, I'll consider AI from the perspective of making computers use vision to perceive the world more like humans. I'll also describe the trade-offs of each type of computer vision, especially in embedded systems that collect, process and act upon data locally, rather than relying on cloud-based resources.
Conventional computer vision refers to programmed algorithms that solve tasks such as motion estimation, panoramic image stitching or line detection.
Conventional computer vision uses standard signal processing and logic to solve tasks. Algorithms such as Canny edge detection or optical flow can find contours or vectors of motion, respectively, which is useful for isolating objects in an image or tracking motion between subsequent images. These types of algorithms rely on filters, transforms, heuristics and thresholds to extract meaningful information from an image or video. They are often a precursor to an application-specific algorithm, such as decoding the information within a 1-D barcode, where a series of rules decodes the barcode once the individual bars are detected.
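To make this concrete, here is a minimal sketch of a conventional pipeline using OpenCV's Canny edge detector to find candidate object contours. The file name and threshold values are illustrative assumptions, not parameters from this article.

```python
import cv2

# Load an image in grayscale; edge detection operates on intensity values.
# "part.png" is a placeholder file name.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Blur slightly to suppress sensor noise before computing gradients.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Canny's two thresholds decide which gradient magnitudes count as edges;
# values like these typically need hand-tuning per lighting condition.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Contours extracted from the edge map can isolate candidate objects
# for a downstream, application-specific rule set.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"Found {len(contours)} candidate contours")
```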
Conventional computer vision benefits from its straightforwardness and explainability, meaning that developers can analyze the algorithm at each step and explain why it behaved as it did. This can be useful in software auditing or safety-critical applications. However, conventional computer vision often requires more expertise to implement properly.
The algorithms often have a small set of parameters that require tuning to achieve optimal performance in different environments. Implementation can be difficult, especially for optimized, high-throughput applications. Some rules, algorithmic decisions or parameter values may have unexpected effects on images that do not fit original expectations, such that it becomes possible to trick the algorithm. Such vulnerabilities and edge cases can be difficult to fix without exposing new edge cases or increasing the algorithm's complexity.
Machine learning emerged as a class of algorithms that use data to set parameters within an algorithm, rather than direct programming or calibration. These algorithms, such as support vector machines, multilayer perceptrons (a precursor to today's deep neural networks) and k-nearest neighbors, saw use in applications that were too challenging to solve with conventional computer vision. For example, recognizing a dog is difficult to program with a conventional computer vision algorithm, especially when complex scenery and other objects are also present. Training a machine learning algorithm to learn parameters from hundreds or thousands of sample images is more tractable. Edge cases are solved by using a dataset that contains examples of those edge cases.
Training is computationally intensive, but running the algorithm on new data requires far fewer computing resources, making it possible to run in real time. These trained models generally have less explainability but are more resilient to small, unplanned variations in data, such as the orientation of an object or background noise. Variations that are not handled well can be fixed by retraining with more data. Larger models with more parameters often boast higher accuracy, but have longer training times as well as more computations at run time, which has historically prevented very large models from being used in real-time applications on embedded processors.
Classical machine learning-based approaches to computer vision still require an expert to craft the feature set on which the machine learning model is trained. Many of these features are common to conventional computer vision applications. Not all features are useful, thus requiring analysis to prune uninformative features. Implementing these algorithms effectively requires expertise in image processing as well as machine learning.
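As a rough illustration of this flow, the sketch below pairs a hand-crafted histogram-of-oriented-gradients (HOG) feature descriptor with a support vector machine. The library choices (scikit-image, scikit-learn) and the load_labeled_images() helper are illustrative assumptions rather than tools named in this article.

```python
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(image):
    # The developer chooses and tunes the feature descriptor by hand;
    # uninformative features would be pruned after analysis.
    return hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical helper returning grayscale images and integer class labels.
images, labels = load_labeled_images()

X = np.array([extract_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

# Training sets the model's parameters from data rather than manual calibration.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```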
Deep learning refers to very large neural network models operating on largely unprocessed or raw data. Deep learning has made a large impact on computer vision by pulling feature extraction operations into the model itself, such that the algorithm learns the most informative features as needed. The following figure shows the data flow in each computer vision approach.
Deep learning has the most generality among the types of computer vision; neural networks are universal function approximators, meaning they have the capability of learning any relation between input and output (to the extent that the relation exists). Deep learning excels at finding both subtle and obvious patterns in data, and is the most tolerant to input variations. Applications such as object recognition, human pose estimation and pixel-level scene segmentation are common use cases.
Deep learning requires the least direct tuning and image processing expertise. The algorithms rely on large, high-quality data sets to help the general-purpose algorithm learn patterns by gradually finding parameters that optimize a loss or error metric during training. Novice developers can make effective use of deep learning because the focus shifts from the algorithm's implementation toward data-set curation. Furthermore, many deep learning models are publicly available and can be retrained for specific use cases. Using these publicly available models is straightforward; developing fully custom architectures does, however, require more expertise.
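As one hedged example of that workflow, the sketch below retrains a publicly available model (MobileNetV2 in Keras) on a new image folder. The class count, image directory and training settings are placeholders, and TI's own model development tools are not shown.

```python
import tensorflow as tf

NUM_CLASSES = 3  # placeholder for the application's number of classes

# Start from a network pretrained on ImageNet and drop its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse the learned feature extractor as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Most of the effort goes into curating the labeled images under "data/train",
# which is a placeholder path.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```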
Compared to conventional computer vision and classical machine learning, deep learning has consistently higher accuracy and is rapidly improving due to immense popularity in research (and, increasingly, commercial) communities. However, deep learning typically has poor explainability since the algorithms are very large and complex; images that are completely unlike the training data set can cause unexpected, unpredictable behavior. Because of their size, deep learning models are so computationally intensive that special hardware is necessary to accelerate them for real-time operation. Training large models on large data sets can be costly, and curating a large data set is often time-consuming and tedious.
However, improvements in processing power, speeds, accelerators such as neural processing units and graphics processing units, and improved software support for matrix and vector operations have made the increase in computation requirements less consequential, even on embedded systems. Embedded microprocessors like the AM6xA portfolio leverage hardware accelerators to run deep learning algorithms at high frame rates.
So which type of computer vision is best?
That ultimately depends on the application, as shown in Figure 2.
In short, computer vision with classical machine learning rests between the other two methods for most attributes; the set of applications that benefit compared to the other two approaches is small. Conventional computer vision can be sufficiently accurate and highly efficient in straightforward, high-throughput or safety-critical applications. Deep learning is the most general, the easiest to develop for, and has the highest accuracy in complex applications and environments, such as identifying a tiny missing component during PCB assembly verification for high-density designs.
Some applications benefit from using multiple types of computer vision algorithms in tandem such that they cover each other's weak points. This approach is common in safety-critical applications dealing with highly variable environments, such as driver assistance systems. For example, you could employ optical flow using conventional computer vision methods alongside a deep learning model for tracking nearby vehicles, and use an algorithm to fuse the results to ascertain whether the two approaches agree with each other. If they do not, the system could warn the driver or start a graceful safety maneuver. Alternatively, it is possible to use multiple types of computer vision sequentially. A barcode reader can use deep learning to locate regions of interest, crop those regions, and then use a conventional computer vision algorithm to decode them.
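A minimal sketch of that sequential barcode pipeline follows, assuming a trained detector object with a hypothetical detect_barcode_regions() method; pyzbar stands in here for the conventional, rules-based decoder.

```python
import cv2
from pyzbar import pyzbar

def read_barcodes(frame, detector):
    results = []
    # Step 1: a deep learning model localizes regions of interest
    # (detect_barcode_regions() is a hypothetical API).
    for (x, y, w, h) in detector.detect_barcode_regions(frame):
        # Crop the region and convert to grayscale for decoding.
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Step 2: a conventional, rules-based algorithm decodes the bars.
        for symbol in pyzbar.decode(crop):
            results.append(symbol.data.decode("utf-8"))
    return results
```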
The barrier to entry for computer vision is progressively lowering. Open-source libraries like OpenCV provide efficient implementations of common functions like edge detection and color conversion. Deep learning runtimes like TensorFlow Lite and ONNX Runtime enable deep learning models to run efficiently on embedded processors. These runtimes also provide interfaces that custom hardware accelerators can implement to simplify the developer's experience when they are ready to move an algorithm from the training environment on a PC or in the cloud to inference on the embedded processor. Many deep learning architectures are also openly published such that they can be reused for a variety of tasks.
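As a small, hedged example of what this looks like in practice, the sketch below runs an exported model with ONNX Runtime on the default CPU provider. The model file name and input shape are placeholders; on an accelerated device, a vendor-specific execution provider would typically be used instead.

```python
import numpy as np
import onnxruntime as ort

# Load an exported model; "model.onnx" is a placeholder file name.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
# Placeholder input: one 224x224 RGB image in NCHW float32 layout.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print("Output shape:", outputs[0].shape)
```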
Processors in the Texas Instruments (TI) AM6xA portfolio, such as the AM62A7, contain deep learning acceleration hardware as well as software support for a variety of conventional and deep learning computer vision tasks. Digital signal processor cores like the C66x and hardware accelerators for optical flow and stereo depth estimation also enable high-performance conventional computer vision tasks.
With processors capable of both conventional and deep learning computer vision, it becomes possible to build tools that rival sci-fi dreams. Automated shopping carts will streamline shopping; surgical and medical robots will guide doctors to early signs of disease; mobile robots will mow the lawn and deliver packages. If you can envision it, you can build the application. See TI's edge AI vision page to explore how embedded computer vision is changing the world.
Reese Grimsley is a Systems Applications Engineer with the Sitara MPU product line within TIs Processors organization. At TI, Reese works on image processing, machine learning, and analytics for a variety of camera-based end-equipment in industrial markets. One of his focal areas is demystifying Edge AI to help both new and experienced customers understand how they can quickly and easily bring complex deep learning algorithms to their products and improve accuracy, performance, and robustness.