Introduction to Deep Learning Libraries: PyTorch and Lightning AI – KDnuggets

Deep learning is a branch of machine learning based on neural networks. In other machine learning models, processing the data to find meaningful features is often done manually or relies on domain expertise; deep learning, however, can mimic the human brain to discover the essential features on its own, improving model performance.

There are many applications for deep learning models, including facial recognition, fraud detection, speech-to-text, text generation, and more. Deep learning has become a standard approach in many advanced machine learning applications, and we have nothing to lose by learning about it.

To develop deep learning models, there are various frameworks we can rely on rather than working from scratch. In this article, we will discuss two libraries we can use to develop deep learning models: PyTorch and Lightning AI. Let's get into it.

PyTorch is an open-source framework for training deep learning neural networks. It was developed by Meta (then Facebook) in 2016 and has grown in popularity ever since. The rise in popularity is thanks to PyTorch combining the GPU backend of the Torch library with the Python language. This combination makes the package easy for users to follow while remaining powerful for developing deep learning models.

There are a few standout features the library enables, including a friendly front-end, distributed training, and a fast and flexible experimentation process. Because PyTorch has so many users, community development and investment have also been massive. That is why learning PyTorch is beneficial in the long run.

PyTorch's building block is the tensor, a multi-dimensional array used to encode all the inputs, outputs, and model parameters. You can think of a tensor as a NumPy array with the capability to run on a GPU.

Let's try out the PyTorch library. It's recommended to follow the tutorial in the cloud, such as on Google Colab, if you don't have access to a GPU system (although it would still work with a CPU). If you want to start locally, install the library via the official installation page, selecting the appropriate system and specifications.

For example, the code below is for pip installation if you have a CUDA-capable system.
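A sketch of the command, assuming CUDA 11.8; check the PyTorch install page for the exact wheel index matching your CUDA version:

```shell
# Install PyTorch and torchvision with CUDA 11.8 support
# (generate the command for your own setup at pytorch.org/get-started)
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```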

After the installation finishes, let's try out some PyTorch capabilities for developing a deep learning model. In this tutorial, we will build a simple image classification model with PyTorch, based on their web tutorial. We will walk through the code and explain what happens within it.

First, we will download the dataset with PyTorch. For this example, we will use the MNIST dataset, the handwritten-digit classification dataset.

We download both the MNIST train and test datasets to our root folder. Let's see what our dataset looks like.

Every image is a single-digit number from zero to nine, meaning we have ten labels. Next, let's develop an image classifier based on this dataset.

We need to transform the image dataset into tensors to develop a deep learning model with PyTorch. As our images are PIL objects, we can use the PyTorch ToTensor function to perform the transformation. Additionally, we can apply the transformation automatically through the datasets function.

By passing the transformation function to the transform parameter, we can control what the data will look like. Next, we wrap the data into a DataLoader object so the PyTorch model can access our image data.

In the code above, we create DataLoader objects for the train and test data. Each batch iteration returns 64 features and labels from these objects. Additionally, the shape of our images is 28 × 28 (height × width).

Next, we will develop the neural network model object.
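A sketch of the model, with the layer sizes taken from the PyTorch web tutorial this article follows:

```python
import torch
from torch import nn

# Use the GPU when one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        # Flatten the 28 x 28 image into a vector of 784 pixel values
        self.flatten = nn.Flatten()
        # A stack of fully connected layers with ReLU activations
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),  # ten outputs, one per digit class
        )

    def forward(self, x):
        # The actual input-processing path: flatten, then the layer stack
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
```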

In the object above, we create a neural network model with a few layers. To develop the model object, we use the subclassing method with nn.Module and create the neural network layers within __init__.

We first convert the 2D image data into a vector of pixel values with the Flatten layer. Then, we use the Sequential function to wrap our layers into a sequence. Inside the Sequential wrapper, we have our model layers:

By sequence, what happens above is:

1. The 28 × 28 image is flattened into a vector of 784 pixel values.
2. A linear layer transforms the 784 inputs into 512 outputs, followed by a ReLU activation.
3. Another linear layer maps 512 inputs to 512 outputs, again followed by a ReLU activation.
4. A final linear layer maps the 512 values to 10 outputs, one for each digit class.

Lastly, the forward function defines the actual input processing for the model. Next, the model needs a loss function and an optimization function.
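A sketch of that setup, using cross-entropy loss and SGD with a learning rate of 1e-3 (these exact choices are assumptions taken from the PyTorch tutorial):

```python
import torch
from torch import nn

# A compact stand-in for the model defined earlier
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

loss_fn = nn.CrossEntropyLoss()  # standard loss for multi-class classification
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```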

In the next code, we prepare the training and testing functions before running the modeling process.

Now we are ready to run the model training. We will decide how many epochs (iterations) we want to perform. For this example, let's run it five times.
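The five-epoch loop might look like the sketch below. To keep it self-contained, it uses a tiny model and random stand-in data; in practice, you would run the `train` and `test` functions over the MNIST DataLoaders built earlier:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the objects built earlier (swap in the real model and loaders)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
data = TensorDataset(torch.rand(256, 1, 28, 28), torch.randint(0, 10, (256,)))
train_dataloader = DataLoader(data, batch_size=64)

epochs = 5  # number of full passes over the training data
for t in range(epochs):
    epoch_loss = 0.0
    for X, y in train_dataloader:
        pred = model(X)
        loss = loss_fn(pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    print(f"Epoch {t + 1}: avg loss {epoch_loss / len(train_dataloader):.4f}")
print("Done!")
```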

The model has now finished training and can be used for any image-prediction activity. The results could vary, so expect yours to differ from the output shown above.

These are just a few of the things PyTorch can do, but you can see that building a model with PyTorch is easy. If you are interested in pre-trained models, PyTorch has a hub you can access.

Lightning AI is a company that provides various products to minimize the time needed to train PyTorch deep learning models and to simplify the process. One of their open-source products is PyTorch Lightning, a library that offers a framework to train and deploy PyTorch models.

Lightning offers a few features, including code flexibility, no boilerplate, a minimal API, and improved team collaboration. It also offers multi-GPU utilization and swift, low-precision training. This makes Lightning a good alternative for developing our PyTorch model.

Let's try out model development with Lightning. To start, we need to install the package.
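The package is published on PyPI as `lightning` (older releases were published as `pytorch-lightning`):

```shell
# Install PyTorch Lightning from PyPI
pip install lightning
```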

With Lightning installed, we also install another Lightning AI product called TorchMetrics to simplify metric selection.
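```shell
# Install TorchMetrics for ready-made evaluation metrics
pip install torchmetrics
```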

With all the libraries installed, we will try to develop the same model from our previous example using a Lightning wrapper. Below is the whole code for developing the model.

Let's break down what happens in the code above. The difference from the PyTorch model we developed previously is that the NNModel class now subclasses LightningModule. Additionally, we assign the accuracy metric to assess with TorchMetrics. Then, we add the training and testing steps within the class and set up the optimization function.

With the model set up, we run the training using the transformed DataLoader object to train our model.

With the Lightning library, we can easily tweak the structure as needed. For further reading, see their documentation.

PyTorch is a library for developing deep learning models, and it provides an easy framework for accessing many advanced APIs. Lightning AI also supports the library with a framework that simplifies model development and enhances flexibility. This article introduced both libraries' features along with simple code implementations.

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media.
