Synthesizing data using 3D object models
Synthesizing panoramic image data for object recognition without real images
In this project, we focus on synthesizing 360-degree panoramic image data from 3D models, without using any real images.
We first collected 3D models of indoor objects (tables, curtains, chairs, air conditioners, TVs, etc.) from online repositories.
We then rendered these 3D models in licensed 3D visualization software. We chose this software because it can render objects through a virtual 360-degree camera; using it, we generated multi-view videos and images of the collected 3D objects.
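The rendering software is not named here, but the geometry behind a virtual 360-degree camera is standard: each pixel of an equirectangular panorama corresponds to a viewing direction on the unit sphere. A minimal sketch of that mapping (a generic illustration, not the tool's actual implementation):

```python
import numpy as np

def equirect_directions(h, w):
    """Unit viewing direction for each pixel of an h x w equirectangular image."""
    # Longitude spans [-pi, pi), latitude spans [pi/2, -pi/2] top to bottom.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Convert spherical coordinates to 3D unit vectors (y is up).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)
```

A renderer samples the scene along each of these directions to produce the panorama; conversely, the same map lets one resample perspective renders into equirectangular form.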
Next, we processed the videos and images with a tool we developed that segments objects from the background.
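The summary does not describe how the tool segments objects, but since the frames are rendered, a common trick is to render against a uniform background color and threshold against it. A minimal sketch under that assumption (function name and tolerance are illustrative):

```python
import numpy as np

def segment_on_uniform_bg(img, bg_color, tol=30):
    """Return a boolean mask of pixels that differ from a uniform background.

    img: H x W x 3 uint8 image rendered on a solid-color background.
    bg_color: (r, g, b) of that background; tol: per-channel tolerance.
    """
    diff = np.abs(img.astype(np.int16) - np.asarray(bg_color, dtype=np.int16))
    # A pixel belongs to the object if any channel deviates beyond the tolerance.
    return diff.max(axis=-1) > tol
```

Real renderers can also export an alpha channel directly, which makes this step exact rather than threshold-based.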
This tool then synthesizes 360-degree panoramic image data by placing the segmented object images on different backgrounds.
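The compositing step described above amounts to pasting a masked object crop onto a background panorama at a chosen location. A simplified sketch (it ignores the spherical distortion a full pipeline would need to handle near the poles):

```python
import numpy as np

def paste_object(background, obj, mask, top, left):
    """Composite a masked object crop onto a background image.

    background: H x W x 3 panorama; obj: h x w x 3 object crop;
    mask: h x w boolean mask of object pixels; (top, left): paste position.
    """
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    # Copy only the object pixels, leaving background pixels untouched.
    region[mask] = obj[mask]
    return out
```

Repeating this with varied objects, positions, and backgrounds yields labeled panoramas for free, since the paste location and mask give the bounding box and segmentation annotation directly.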
Finally, we trained several well-known object detectors (YOLOv3, YOLOv5, EfficientDet, Mask R-CNN, etc.) on the synthesized data and evaluated them on a public 360-degree dataset, 360-Indoor.
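Training a detector such as YOLOv5 on the synthesized data requires describing the dataset to the trainer. A minimal sketch in YOLOv5's dataset-YAML format; the paths, class count, and class names below are hypothetical placeholders, not the project's actual configuration:

```yaml
# Hypothetical dataset layout: train on synthesized panoramas,
# validate on real 360-Indoor panoramas.
train: synth360/images/train
val: 360-Indoor/images/val
nc: 5                         # number of classes (example value)
names: [table, curtain, chair, air-conditioner, tv]
```

Evaluating on real panoramas while training only on synthetic ones is what tests whether the synthesized data transfers to real 360-degree imagery.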
The evaluation results show that this is an efficient, low-cost, and promising approach for synthesizing 360-degree image datasets.