BlazeFace: How to Run Real-time Object Detection in the Browser

A step-by-step guide to training a BlazeFace model, from the Python training pipeline to the JavaScript demo, through model conversion.

Thanks to libraries such as YOLO by Ultralytics, it is fairly easy today to make robust object detection models with as little as a few lines of code. Unfortunately, those solutions are not yet fast enough to work in a web browser on a real-time video stream at 30 frames per second (which is usually considered the real-time threshold for video applications) on any device. More often than not, such a model will run at less than 10 fps on an average mobile device.

The most famous real-time object detection solution for web browsers is Google's MediaPipe. It is a really convenient and versatile solution, as it works easily on many devices and platforms. But what if you want to build your own solution?

In this post, we propose to build our own lightweight, fast and robust object detection model that runs at more than 30 fps on almost any device, based on the BlazeFace model. All the code used for this is available on my GitHub, in the blazeface folder.

The BlazeFace model, proposed by Google and originally used in MediaPipe for face detection, is really small and fast, while being robust enough for easy object detection tasks such as face detection. Unfortunately, to my knowledge, no training pipeline for this model is available online on GitHub; all I could find is this inference-only model architecture. Throughout this post, we will train our own BlazeFace model with a fully working pipeline and use it in the browser with working JavaScript code.

More specifically, we will go through the following steps:

- building a face detection dataset and its preprocessing pipeline
- implementing and training the BlazeFace model with PyTorch
- converting the trained model to TFLite, going through ONNX
- running the converted model in the browser with TensorFlow.js and a bit of JavaScript

Let's get started with the model training.

As usual when training a model, there are a few typical steps in a training pipeline:

- loading and preprocessing the data
- building the model architecture
- training the model and evaluating the results

Let's go through those steps together.

We are going to use a subset of the Open Images Dataset V7, proposed by Google. This dataset is made of about 9 million images with many annotations (including bounding boxes, segmentation masks, and many others). The dataset itself is quite large and contains many types of images.

For our specific use case, I decided to select images in the validation set fulfilling two specific conditions:

The script to download and build the dataset under those strict conditions is provided in the GitHub repository, so that anyone can reproduce it. The dataset downloaded with this script contains labels in the YOLO format (meaning normalized box center, width and height). In the end, it is made of about 3k images and 8k faces, which I have separated into train and validation sets with an 80%-20% split.
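For reference, a YOLO-format label file contains one line per face, with the class index followed by the normalized box center and size (the values below are made up, purely for illustration):

```
# <class> <x_center> <y_center> <width> <height>, all normalized to [0, 1]
0 0.512 0.430 0.210 0.305
0 0.188 0.395 0.145 0.220
```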

From this dataset, typical preprocessing is required before being able to train a model. The full data preprocessing code I used is available in the repository.
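As a simplified sketch (the 128-pixel input size, the grey padding value and the [-1, 1] normalization are assumptions consistent with the original BlazeFace), the image part of that preprocessing looks like this:

```python
import cv2
import numpy as np
import torch

def preprocess(image_bgr: np.ndarray, input_size: int = 128) -> torch.Tensor:
    """Pad an image to a square with grey borders, resize it and normalize it."""
    h, w, _ = image_bgr.shape
    side = max(h, w)
    # Grey padding so the image becomes square without distorting faces
    padded = np.full((side, side, 3), 127, dtype=np.uint8)
    top, left = (side - h) // 2, (side - w) // 2
    padded[top:top + h, left:left + w] = image_bgr
    # Resize to the model input resolution
    resized = cv2.resize(padded, (input_size, input_size))
    # Normalize to [-1, 1] and convert to a CHW float tensor
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float()
    return tensor / 127.5 - 1.0
    # (the bounding boxes must be shifted and rescaled with the same geometry, not shown here)
```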

As we can see, the preprocessing is made of the following steps:

- loading the image and its bounding box labels
- padding the image with grey borders to make it square
- resizing it to the model input resolution (128 pixels for the default architecture)
- normalizing the pixel values and converting everything to tensors, the boxes being adjusted to the new geometry

Optionally, this code allows for data augmentation using Albumentations. For the training, I used a set of geometric augmentations, including rotations and translations (which explain some of the grey borders in the samples below).
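The exact list of transforms is in the repository; based on the rotations and translations visible in the samples below, an Albumentations pipeline in that spirit could look like this (the specific transforms and their parameters are assumptions):

```python
import albumentations as A

# Geometric and photometric augmentations, with bounding boxes kept in YOLO format;
# border_mode/value produce grey borders similar to the padding step.
train_transforms = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.ShiftScaleRotate(
            shift_limit=0.1, scale_limit=0.1, rotate_limit=15,
            border_mode=0, value=(127, 127, 127), p=0.5,
        ),
        A.RandomBrightnessContrast(p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
```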

Those augmentations will allow us to have a more robust, regularized model. After all those transformations and augmentations, the input data may look like the following sample:

As we can see, the preprocessed images have grey borders because of augmentation (with rotation or translation) or padding (because the original image did not have a square aspect ratio). They all contain faces, although the context might be really different depending on the image.

Important Note:

Face detection is a highly sensitive task with significant ethical and safety considerations. Bias in the dataset, such as underrepresentation or overrepresentation of certain facial characteristics, can lead to false negatives or false positives, potentially causing harm or offense. See below a dedicated section about ethical considerations.

Now that our data can be loaded and preprocessed, let's go to the next step: building the model.

In this section, we will reproduce the architecture of the original BlazeFace model, based on the original article and adapted from the BlazeFace repository that contains inference code only.

The whole BlazeFace architecture is rather simple and is mostly made of what the paper's authors call a BlazeBlock, with various parameters.

The BlazeBlock can be defined with PyTorch as follows:
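In essence, a BlazeBlock is a depthwise separable convolution wrapped with batch normalization and a residual connection. A simplified PyTorch version could look like the following sketch (the full implementation, with the exact padding and activation choices, is in the repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlazeBlock(nn.Module):
    """Depthwise separable convolution with a residual connection."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.stride = stride
        self.channel_pad = out_channels - in_channels
        self.convs = nn.Sequential(
            # 5x5 depthwise convolution (one filter per input channel)
            nn.Conv2d(in_channels, in_channels, kernel_size=5, stride=stride,
                      padding=2, groups=in_channels),
            nn.BatchNorm2d(in_channels),
            # 1x1 pointwise convolution to mix the channels
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        if self.stride == 2:
            # Downsample the skip branch so the spatial shapes match
            residual = F.max_pool2d(residual, kernel_size=2, stride=2)
        if self.channel_pad > 0:
            # Zero-pad channels on the skip branch when the block widens the features
            residual = F.pad(residual, (0, 0, 0, 0, 0, self.channel_pad))
        return self.act(self.convs(x) + residual)
```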

As we can see from this code, a BlazeBlock is simply made of the following layers:

- a 5x5 depthwise convolution
- a 1x1 pointwise convolution
- a batch normalization after each convolution
- a residual connection followed by a ReLU activation (with max pooling and channel padding on the skip branch when the block downsamples or widens the features)

N.B.: You can read the PyTorch documentation for more about these layers: Conv2D layer and BatchNorm2D layer.

This block is repeated many times with different input parameters, to go from a 128-pixel image up to a typical object detection prediction using tensor reshaping in the final stages. Feel free to have a look at the full code in the GitHub repository for more about the implementation of this architecture.
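Schematically, the detection head at the end of the network turns the final feature maps into per-anchor scores and box coordinates with 1x1 convolutions, followed by a permute and a reshape. The sketch below is only an illustration of that idea (the exact anchor layout and channel counts are defined in the repository; the original 128-pixel BlazeFace uses 896 anchors in total):

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Turn a feature map into per-anchor scores and box offsets."""
    def __init__(self, in_channels: int, anchors_per_cell: int):
        super().__init__()
        self.score_conv = nn.Conv2d(in_channels, anchors_per_cell * 1, kernel_size=1)
        self.box_conv = nn.Conv2d(in_channels, anchors_per_cell * 4, kernel_size=1)

    def forward(self, features: torch.Tensor):
        b = features.shape[0]
        # [B, A, H, W] -> [B, H, W, A] -> [B, num_anchors, 1 or 4]
        scores = self.score_conv(features).permute(0, 2, 3, 1).reshape(b, -1, 1)
        boxes = self.box_conv(features).permute(0, 2, 3, 1).reshape(b, -1, 4)
        return scores, boxes
```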

Before moving to the next section about training the model, note that there are actually two architectures:

- a 128-pixel input architecture, designed in the original paper for the front camera of a phone
- a 256-pixel input architecture, designed for the back camera, where faces appear smaller in the frame

As you can imagine, the 256-pixel architecture is slightly larger, but still lightweight and sometimes more robust. This architecture is also implemented in the provided code, so that you can use it if you want.

N.B.: The original BlazeFace model not only predicts a bounding box, but also six approximate face landmarks. Since I did not have such labels, I simplified the model architecture to predict only the bounding boxes.

Now that we can build a model, let's move on to the next step: training the model.

For anyone familiar with PyTorch, training models such as this one is usually quite simple and straightforward, as shown in this code:
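Here is a condensed sketch of such a loop (the loss function, optimizer and hyperparameters are placeholders; the full training script, with validation and checkpointing, is in the repository):

```python
import torch

def train(model, train_loader, loss_fn, num_epochs: int = 100,
          lr: float = 1e-3, device: str = "cuda"):
    """Minimal training loop: forward pass, loss, backward pass, optimizer step."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()                  # reset gradients from the previous batch
            predictions = model(images)            # forward pass
            loss = loss_fn(predictions, targets)   # detection loss (classification + box regression)
            loss.backward()                        # backward pass
            optimizer.step()                       # update the weights
            running_loss += loss.item()
        print(f"epoch {epoch + 1}/{num_epochs} - loss: {running_loss / len(train_loader):.4f}")
```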

As we can see, the idea is to loop over your data for a given number of epochs, one batch at a time, and do the following:

- run a forward pass of the model on the batch
- compute the loss between the predictions and the labels
- run a backward pass to compute the gradients
- update the model weights with the optimizer (not forgetting to reset the gradients at each iteration)

I am not getting into all the details for clarity in this post, but feel free to navigate through the code to get a better sense of the training part if needed.

After training for 100 epochs, I had the following results on the validation set:

As we can see from those results, even if the object detection is not perfect, it works pretty well in most cases (the IoU threshold was probably not optimal, sometimes leading to overlapping boxes). Keep in mind that this is a very light model; it cannot exhibit the same performance as a YOLOv8, for example.

Before going to the next step about converting the model, let's have a short discussion about ethical and safety considerations.

Let's go over a few points about ethics and safety, since face detection can be a very sensitive topic:

To address these concerns, anyone willing to build a product on such a topic should focus on:

N.B.: A useful approach to address these concerns is to examine what Google did for their own face detection and face landmark models.

Again, the dataset used here is intended solely for educational purposes. Anyone willing to use it should exercise caution and be mindful of its limitations when interpreting results. Let's now move on to the next step: the model conversion.

Remember that our goal is to make our object detection model work in a web browser. Unfortunately, once we have a trained PyTorch model, we cannot directly use it in a web browser. We first need to convert it.

Currently, to my knowledge, the most reliable way to run a deep learning model in a web browser is by using a TFLite model with TensorFlow.js. In other words, we need to convert our PyTorch model into a TFLite model.

N.B.: Some alternative ways are emerging, such as ExecuTorch, but they do not seem to be mature enough yet for web use.

As far as I know, there is no robust, reliable way to do so directly. But there are indirect routes, such as going through ONNX. ONNX (which stands for Open Neural Network Exchange) is a standard for storing and running (using ONNX Runtime) machine learning models. Conveniently, there are libraries available for converting from PyTorch to ONNX, as well as from ONNX to TensorFlow models.

To summarize, the conversion workflow is made of the three following steps:

- convert the PyTorch model to an ONNX model
- convert the ONNX model to a TensorFlow model
- convert the TensorFlow model to a TFLite model

This is exactly what the following code does:
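The full script, with its version-specific tweaks, is in the repository; stripped down to the essential calls (and assuming the onnx, onnx-tf and tensorflow packages, with placeholder file names), it boils down to this:

```python
import onnx
import tensorflow as tf
import torch
from onnx_tf.backend import prepare

def convert_to_tflite(model: torch.nn.Module, input_size: int = 128) -> None:
    """Convert a trained PyTorch model to TFLite, going through ONNX and TensorFlow."""
    model.eval()
    dummy_input = torch.randn(1, 3, input_size, input_size)

    # 1. PyTorch -> ONNX
    torch.onnx.export(model, dummy_input, "blazeface.onnx",
                      input_names=["input"], output_names=["output"], opset_version=12)

    # 2. ONNX -> TensorFlow SavedModel
    onnx_model = onnx.load("blazeface.onnx")
    prepare(onnx_model).export_graph("blazeface_saved_model")

    # 3. TensorFlow SavedModel -> TFLite
    converter = tf.lite.TFLiteConverter.from_saved_model("blazeface_saved_model")
    tflite_model = converter.convert()
    with open("blazeface.tflite", "wb") as f:
        f.write(tflite_model)
```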

This code is a bit more cryptic than the previous snippets, as it relies on some specific optimizations and parameters to work properly. One can also go one step further and quantize the TFLite model to make it even smaller; if you are interested in doing so, have a look at the official documentation.
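For instance, the default dynamic-range quantization only requires setting one attribute on the converter before calling convert():

```python
# Inside the conversion sketch above, before converter.convert():
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # store the weights in 8 bits
```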

N.B.: The conversion code is highly sensitive to the versions of the libraries involved. To ensure a smooth conversion, I strongly recommend using the versions specified in the requirements.txt file on GitHub.

In my case, after the conversion, I end up with a TFLite model of only about 400 kB, which is lightweight and quite acceptable for web usage. The next step is to actually test it in a web browser and make sure it works as expected.

On a side note, be aware that another solution for converting PyTorch models to the TFLite format is currently being developed by Google: AI Edge Torch. Unfortunately, it is quite new and I couldn't make it work for my use case. However, any feedback about this library is very welcome.

Now that we finally have a TFLite model, we are able to run it in a web browser using TensorFlow.js. If you are not familiar with JavaScript (it is not usually a language used by data scientists and machine learning engineers), do not worry: all the code is provided and rather easy to understand.

I won't comment on all the code here, just the most relevant parts. If you look at the code on GitHub, you will see the following in the javascript folder:

If we take a step back, all we need to do in the JavaScript code is to loop over the frames of the camera feed (either a webcam on a computer or the front-facing camera on a mobile phone) and do the following:

- preprocess the current frame (resize, pad and normalize it, just like on the Python side)
- run the TFLite model on the preprocessed frame
- postprocess the model output to get the bounding boxes
- draw the boxes on top of the video feed

We won't comment on the image preprocessing, since it would be redundant with the Python preprocessing, but feel free to have a look at the code. When it comes to making an inference with a TFLite model in JavaScript, it's fairly easy:
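With the @tensorflow/tfjs-tflite package, this boils down to a few calls (the model path and the [-1, 1] normalization below are assumptions mirroring the Python side):

```javascript
import * as tf from '@tensorflow/tfjs';
import * as tflite from '@tensorflow/tfjs-tflite';

// Load the TFLite model once at startup ("blazeface.tflite" is a placeholder path).
const model = await tflite.loadTFLiteModel('blazeface.tflite');

// Run one inference on the current video frame.
function predict(videoElement) {
  return tf.tidy(() => {
    const frame = tf.browser.fromPixels(videoElement);           // HWC uint8 tensor
    const resized = tf.image.resizeBilinear(frame, [128, 128]);  // model input resolution
    const input = resized.div(127.5).sub(1.0).expandDims(0);     // normalize to [-1, 1], add batch dim
    return model.predict(input);                                 // raw SSD-style output tensor(s)
  });
}
```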

The tricky part is actually the postprocessing. As you may know, the output of an SSD-style object detection model is not directly usable: it is not a list of bounding box locations, but raw scores and offsets relative to predefined anchor boxes. The full postprocessing code I used is in the repository; a simplified sketch is shown below (it assumes a [896, 4] tensor of precomputed anchors, raw outputs of shape [1, 896, 4] and [1, 896, 1], and TensorFlow.js's built-in non-maximum suppression):
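```javascript
// tf is the @tensorflow/tfjs namespace; the 0.75 / 0.3 thresholds are arbitrary defaults.
const INPUT_SIZE = 128;

async function postprocess(rawBoxes, rawScores, anchors,
                           scoreThreshold = 0.75, iouThreshold = 0.3) {
  const [scores, corners] = tf.tidy(() => {
    const probs = tf.sigmoid(rawScores).squeeze();         // [896] confidence values
    const offsets = rawBoxes.squeeze().div(INPUT_SIZE);    // [896, 4], back to normalized units
    const [acx, acy, aw, ah] = tf.split(anchors, 4, 1);
    const [dx, dy, dw, dh] = tf.split(offsets, 4, 1);
    // Decode the SSD offsets into box centers/sizes, then into [y1, x1, y2, x2] corners
    const cx = dx.mul(aw).add(acx);
    const cy = dy.mul(ah).add(acy);
    const w = dw.mul(aw);
    const h = dh.mul(ah);
    const boxes = tf.concat(
      [cy.sub(h.div(2)), cx.sub(w.div(2)), cy.add(h.div(2)), cx.add(w.div(2))], 1);
    return [probs, boxes];
  });
  // Keep only confident, non-overlapping detections
  const keep = await tf.image.nonMaxSuppressionAsync(corners, scores, 5,
                                                     iouThreshold, scoreThreshold);
  const result = tf.tidy(() => tf.gather(corners, keep).arraySync());  // normalized corners
  tf.dispose([scores, corners, keep]);
  return result;
}
```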

In the code above, the model output is postprocessed with the following steps:

- apply a sigmoid to the raw scores to get confidence values between 0 and 1
- decode the raw box offsets with the precomputed anchor boxes to get box centers and sizes, then corner coordinates
- keep only the boxes above a confidence threshold and apply non-maximum suppression to remove overlapping detections
- rescale the remaining boxes to the displayed image size before drawing them

The same postprocessing is done on the Python side to display the resulting bounding boxes, so comparing the two implementations may help you get a better understanding of that part.

Finally, below is a screenshot of the resulting web browser demo:

As you can see, it properly detects the face in the image. I decided to use a static image from Unsplash, but the code on GitHub allows you to run it on your webcam, so feel free to test it yourself.

Before concluding, note that if you run this code on your own computer or smartphone, you may not reach 30 fps depending on your device (on my personal laptop, with a rather old 2017 Intel Core i5-8250U, it runs at 36 fps). If that's the case, a few tricks may help you get there. The easiest one is to run the model inference only once every N frames (with N to be fine-tuned depending on your application, of course). Indeed, in most cases, there are few changes from one frame to the next, and the boxes can remain almost unchanged.
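As a sketch of that trick (detectFaces and drawBoxes are hypothetical helpers wrapping the inference and drawing code, and video is the video element), the render loop only needs a frame counter:

```javascript
const N = 3;              // run the detector every N frames (to be tuned per application)
let frameCount = 0;
let lastBoxes = [];

async function renderLoop() {
  if (frameCount % N === 0) {
    // Only pay the inference cost on every N-th frame
    lastBoxes = await detectFaces(video);
  }
  drawBoxes(lastBoxes);   // drawing stays cheap and runs every frame
  frameCount++;
  requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);
```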

I hope you enjoyed reading this post, and thank you if you got this far. Even though doing object detection is fairly easy nowadays, doing it with limited resources can be quite challenging. Learning about BlazeFace and converting models for the web browser gives some insight into how MediaPipe was built, and opens the way to other interesting applications, such as blurring the background in video calls (like Google Meet or Microsoft Teams) in real time in the browser.
