Person & Dog Segmentation (MaskRCNN)
In this article, we will create an AI App Module for person and dog segmentation using Deep Learning Studio (DLS). DLS will be used to train and test the network on the provided dataset. The software can be downloaded from deepcognition.ai after creating a free account.
For this example, Windows v3.0.1 will be used.
In this example, we will use the person-dog dataset in MS COCO format. COCO (Common Objects in Context) is one of the most popular image datasets, with applications such as object detection, segmentation, and captioning. DLS provides the coco-400-person-dog dataset by default among its public datasets. The included files are:
(a) Image folder
(b) Train/Val annotations
It is a well-labeled dataset of moderate size.
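As a quick illustration of the MS COCO annotation layout, the train/val annotations are JSON files with `images`, `annotations`, and `categories` sections. The values below are made up for the sketch and are not taken from the actual coco-400-person-dog file:

```python
import json

# Minimal COCO-style annotation structure with made-up values; the real
# coco-400-person-dog annotations follow the same top-level layout.
coco = {
    "images": [{"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}],
    "categories": [{"id": 1, "name": "person"}, {"id": 2, "name": "dog"}],
    "annotations": [
        {
            "id": 10,
            "image_id": 1,
            "category_id": 2,              # this instance is a dog
            "bbox": [100, 120, 200, 150],  # [x, y, width, height]
            "segmentation": [[100, 120, 300, 120, 300, 270, 100, 270]],  # polygon
            "iscrowd": 0,
        }
    ],
}

# Count instances per category, a useful sanity check on any COCO file.
names = {c["id"]: c["name"] for c in coco["categories"]}
counts = {}
for ann in coco["annotations"]:
    name = names[ann["category_id"]]
    counts[name] = counts.get(name, 0) + 1

print(json.dumps(counts))  # {"dog": 1}
```

The same counting loop works unchanged on a real annotation file loaded with `json.load`.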
We now proceed to create our AI App Module. Click on the Projects tab. To create a new project, click the Create Project button. A pop-up asking for the project name, project type, and description appears. The project type for our model is AI App Module. You can set the name and description as you prefer. Click on the newly created project and you will see a screen with tabs as shown below:
New Project Creation
In the Task Specification tab (shown above), select the AI App Module as Mask RCNN Instance Segmentation by Deep Cognition, which is provided by DLS.
Then set the segmentation type to Instance and the Neural Network to MaskRCNN.
When choosing the Model Name, you will see three options with different ResNet backbones:
After selecting the Model Name, you will see tabs for editing the AI Module settings as per your requirements. Here is a screenshot of how it should look:
Go to the Data tab next. Here you can select the coco-400-person-dog public dataset from the small drop-down next to Dataset. You can split the data as needed with the Train-Validation-Test partition, load the dataset into memory either one batch at a time or all at once, and shuffle the data by clicking the corresponding option.
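The Train-Validation-Test partition that DLS performs can be sketched in plain Python. The 70/20/10 ratios below are just an example for illustration, not DLS defaults:

```python
import random

def split_dataset(items, train=0.7, val=0.2, seed=42):
    # Shuffle a copy so the original order is untouched, then cut the
    # list at the train and validation boundaries; the rest is the test set.
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 400 image ids, mirroring the size of the coco-400-person-dog dataset.
train_set, val_set, test_set = split_dataset(range(400))
print(len(train_set), len(val_set), len(test_set))  # 280 80 40
```

Fixing the seed makes the partition reproducible across runs, which matters when comparing training runs on the same split.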
We will now train the model from the Training tab. Select the device on which you want to compute the model parameters and click Start Training. An ETA will be shown in the top left corner along with other statistics on the tab. The following is a screenshot of DLS during model training:
Once the model has finished training, go to the Results tab to check the results. Under the Graphs option, select the RunX from the left and you will see the graphs for that training run. As the screenshot below shows, the training loss was close to 0.55, the training accuracy close to 0.9993, and the validation accuracy close to 0.99, indicating the model performs extremely well on this dataset.
On the Configuration option, you can see all the layers and hyperparameters that were used for the given training run.
On the Inference/Deploy tab, we will now test the model on the test set. Select the Form Inference tab, choose the Training Run, and click Start Inference. Upload or drag your file onto the canvas area. While DLS shows the result on screen, we can also download it to check additional metrics if needed. The following is the screenshot:
You will get the image with its predicted output. Click on the predicted image if you want to see the annotations clearly.
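To see what an instance mask represents, here is a sketch of how a predicted binary mask can be blended over an image. It uses NumPy with a synthetic image and mask rather than real DLS output:

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Blend a solid color into `image` wherever `mask` is True."""
    out = image.astype(np.float32).copy()
    color = np.array(color, dtype=np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)

# Synthetic 4x4 gray image and a mask covering its top-left 2x2 corner.
image = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

blended = overlay_mask(image, mask)
print(blended[0, 0], blended[3, 3])  # masked pixel is reddened, unmasked unchanged
```

The same blending works with a real image array and an instance mask of matching height and width; each detected instance is typically drawn with its own color.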
1) The trained model and supporting files can be downloaded by going to the Download tab and selecting the Download Trained model with inference code button. This downloads the trained model, the inference code, and an environment YAML file for creating a conda environment for the project.
2) Create a conda environment by running the following command:
conda env create -f <path to yaml file>
3) Activate the conda environment:
conda activate <name of environment created>
4) Run the following command to perform inference:
python test.py --input <path to the input image> --run_dir <path to the folder where model is stored> --result_dir <path for the output dir>
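If you want to script the last step, the inference command can also be assembled and launched from Python via `subprocess`. The paths below are placeholders, just as in the command above:

```python
import subprocess

# Placeholder paths; substitute the real input image, the folder where the
# downloaded model is stored, and your output directory.
cmd = [
    "python", "test.py",
    "--input", "input.jpg",
    "--run_dir", "run_dir",
    "--result_dir", "results",
]
# subprocess.run(cmd, check=True)  # run inside the activated conda environment
print(" ".join(cmd))
```

This is convenient for batch inference, e.g. looping over a folder of images and rebuilding `cmd` with a different `--input` each time.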