Fast.ai is a deep learning library created by Jeremy Howard, formerly the No. 1 ranked competitor on Kaggle. He developed the fast.ai course and the fastai library to make deep learning accessible to everyone. In this article, we illustrate how we built a deep learning image classifier using the fastai library and deployed it as a Gradio app.
Step 1: Gather the data
Gathering the data is made simple by a fastai helper that uses the DuckDuckGo search engine API: we provide a list of classes, it fetches image URLs for each class, and the download_images function downloads those images.
Step 2: Create a Data Block and Image Augmentation
In this step, we used the DataBlock class to define how the images are loaded, and the aug_transforms function to apply image augmentation.
Step 3: Train the model
In this step, we took the pre-trained ResNet-50 model and fine-tuned it, that is, tweaked its weights to fit our dataset. We observed 51% accuracy for this model.
Step 4: Export the model
The model was exported with the learn.export() command, which saves it as an export.pkl file.
Step 5: Customize Gradio Interface
We created a file named app.py and added a title, description, and examples to the Gradio interface. We used the enable_queue parameter to handle traffic on the app.
We added a requirements.txt file containing the following dependencies, along with example images of face skin conditions.
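The dependency list is likely minimal; assuming the app needs only fastai and Gradio (HuggingFace Spaces installs Gradio itself, but pinning it does no harm), requirements.txt would look like:

```
fastai
gradio
```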
Running the code on localhost produced the screen below.
Step 6: Create a Space on HuggingFace
HuggingFace Spaces is a free-to-use platform for hosting machine learning demos and apps. To create a HuggingFace Space, you need a HuggingFace account; you can sign up for free here. After signing up, create a Space by clicking “New Space” in the navigation menu (click your profile image).
Step 7: Push the code to the HuggingFace git repository
After creating the Space, we cloned its git repository with the following command.
git clone https://huggingface.co/spaces/pratikskarnik/face_problems_analyzer
We put app.py, export.pkl, requirements.txt, and the sample images in the local repository and pushed the code. There is a good chance the export.pkl file exceeds the repository's regular file-size limit for pushes; in that case, install Git LFS and track the .pkl files with the commands below.
git lfs install
git lfs track "*.pkl"
git add .gitattributes
git commit -m "update .gitattributes so git lfs will track .pkl files"
Now we staged, committed, and pushed the code.
git add .
git commit -m "Initial commit"
git push
Finally, with the code pushed, HuggingFace Spaces deployed the app automatically.
Checking the app in incognito mode:
That is how our Machine Learning app was created.
Link to the app and code: https://huggingface.co/spaces/pratikskarnik/face_problems_analyzer