In this example, we will create an AI App module that detects cargo ships in satellite images with the help of DLS. Object detection in satellite imagery aims to automatically locate every object of interest in an image, and with Deep Learning Studio you can easily build such an object detector.
In DLS, we will first create a segmentation dataset (in MS-COCO format) with the help of CVAT, and then use this data to train and test a Mask RCNN model for accurate and fast automatic ship detection.
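For reference, an MS-COCO instance-segmentation export (as produced by CVAT) is a single JSON file with `images`, `annotations`, and `categories` sections. A minimal sketch of the structure, with made-up file names and coordinates purely for illustration:

```python
import json

# Minimal MS-COCO-style instance segmentation file (illustrative values only).
coco = {
    "images": [
        {"id": 1, "file_name": "scene_001.jpg", "width": 768, "height": 768}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # Polygon as a flat [x1, y1, x2, y2, ...] list (COCO convention).
            "segmentation": [[120.0, 200.0, 180.0, 200.0, 180.0, 240.0, 120.0, 240.0]],
            "bbox": [120.0, 200.0, 60.0, 40.0],  # [x, y, width, height]
            "area": 2400.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "ship"}],
}

# Serialize the annotation file as DLS/CVAT would store it on disk.
text = json.dumps(coco, indent=2)
print(len(json.loads(text)["annotations"]))
```

Each annotated ship becomes one entry in `annotations`, linked to its image by `image_id` and to its class by `category_id`.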
Note: The software can be downloaded from the official website, deepcognition.ai.
In this example, we will be using the Airbus satellite ship dataset, which contains 100 Airbus images along with their annotations. You can download the dataset from here.
You can create a larger dataset by annotating images in CVAT; follow the instructions in How to use CVAT in DLS.
Once the annotations are ready, navigate to the DLS Datasets page and verify that the dataset appears in DLS.
Let's create a new project.
Click on the Projects tab. To create a new project, click on the Create Project button. A pop-up asking for the project name, project type, and description shows up. The project type for our model is AI App Module; you can set the name and description as you prefer. Click on the newly created project and you will see a screen with tabs as shown below:
In the Task Specification tab (shown above), set the AI App Module to Mask RCNN Instance Segmentation by Deep Cognition, which is provided by DLS.
Then select Instance as the segmentation type and MaskRCNN as the neural network.
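As background on the Instance setting: semantic segmentation labels every pixel with a class, while instance segmentation additionally separates individual objects, so two touching ships get two distinct masks. A toy sketch of that distinction (plain Python, not the DLS API), splitting a binary semantic mask into instances with 4-connected component labelling:

```python
from collections import deque

def label_instances(mask):
    """Split a binary semantic mask into instances via 4-connected components.
    Returns a grid where 0 is background and 1..N index individual objects."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1                      # found a new instance
                labels[y][x] = count
                q = deque([(y, x)])
                while q:                        # flood-fill its pixels
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels

# One semantic "ship" mask containing two separate blobs -> two instances.
mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
inst = label_instances(mask)
n_instances = max(max(row) for row in inst)
print(n_instances)  # 2
```

Mask RCNN produces per-object masks directly rather than post-processing a semantic map like this, but the output format (one mask per detected ship) is the same idea.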
Next, choose a Model Name; you will see the following three ResNet-based options:
After selecting the Model Name, you will see tabs where you can edit the AI module settings as per your requirements. Here is a screenshot of how it should look:
Next, go to the Data tab. Here you can select the CVAT dumped - private dataset from the small drop-down next to Dataset. You can split your data as needed with the Train-Validation-Test partition, choose whether to load the dataset into memory one batch at a time or all at once, and shuffle your data by enabling the Shuffle option.
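The Train-Validation-Test partition shuffles the samples and cuts them into three disjoint sets. A minimal sketch of what that split does, assuming an illustrative 80/10/10 partition (the fractions you choose in the Data tab may differ):

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle sample ids and partition them into train/val/test,
    mirroring the Train-Validation-Test partition in the Data tab."""
    items = list(items)
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# With 100 images (as in this dataset), an 80/10/10 split gives 80/10/10 samples.
train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```

Keeping the test partition untouched during training is what makes the metrics in the Inference/Deploy step meaningful.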
We will now train the model from the Training tab. Select the device you want to use to compute the model parameters and click on Start Training. An ETA will be shown in the top-left corner along with other statistics on the tab. Following is a screenshot of DLS during model training:
Once the model has completed its training, we go to the Results tab to check the results. In the Graphs option, select the RunX from the left and you will see the graphs for that training run. As you can see in the screenshot below, the training loss was close to 1.00 and the training accuracy was close to 0.9878, indicating that the model performs very well on this dataset.
On the Configuration option, you can see all the layers and hyperparameters that were used for the given training run.
On the Inference/Deploy tab, we will now test the model using the test set. Select the Form Inference tab and select the training run. Upload or drag your file onto the canvas area and click on Start Inference. DLS displays the result, which you can also download to check additional metrics if needed. Following is the screenshot:
You will get an image with its predicted output; click on the predicted image if you want to see the annotations more clearly.