To train a neural network successfully, you must first prepare your data. We begin by collecting images of the objects to be recognized; these images are what teach the network to identify what's in a photo.
In the example below, the goal is to create a system that recognizes the exact model of a remote control from a smartphone photo.
However, the many model numbers and the visual similarity between remotes significantly complicate identification. Without such a system, distinguishing one remote from another requires specialized catalogs and posters.
Training the network for object recognition requires images of all the remotes with different backgrounds, lighting, and positions. Each picture must then be annotated so the network knows where the object it needs to identify is located. Doing this manually takes a lot of time.
Using Blender, we created a virtual studio that generates labeled images (training material) for the neural network. Let's look at how data can be gathered with the studio and by hand.
We started by taking one photo of each remote control and collecting many different backgrounds. A script connected to Blender automatically places the image of each remote onto each backdrop. Inside the studio there is a virtual camera (the black triangle in the image below) that approaches the object and changes position. As the animation plays, the script renders each frame. The result is a base of images (data) for further training. Obtaining images from a 3D program this way is called "rendering."
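The rendering loop described above boils down to enumerating every combination of remote, background, and camera position, and writing one image per combination. The sketch below shows that bookkeeping in plain Python; the inventory names and the frame count are hypothetical, and the actual studio drives Blender's renderer rather than just building a job list.

```python
from itertools import product

# Hypothetical inventory; the real studio loads these assets in Blender.
REMOTES = ["remote_a", "remote_b", "remote_c"]
BACKGROUNDS = ["desk", "sofa", "carpet"]
CAMERA_FRAMES = range(1, 11)  # ten camera positions per animation

def render_jobs():
    """Yield one render job per (remote, background, camera frame)."""
    for remote, bg, frame in product(REMOTES, BACKGROUNDS, CAMERA_FRAMES):
        yield {
            "remote": remote,
            "background": bg,
            "frame": frame,
            "filename": f"{remote}_{bg}_{frame:03d}.png",
        }

jobs = list(render_jobs())
print(len(jobs))  # 3 remotes x 3 backgrounds x 10 frames = 90 renders
```

With three remotes, three backgrounds, and ten camera positions, a single pass already yields 90 labeled images, which is why rendering scales so much better than photographing by hand.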
Here you can see what Blender looks like, with the image of the remote control in the centre, and the finished render: the remote is placed against a background and turned to the left.
A render is an image obtained by converting a three-dimensional scene into a two-dimensional picture.
During rendering, our script automatically lays out the annotations (this is called labelling, or bounding boxing). As a result, we get a base of renders (every remote on every background in different positions) and an XML file for each render containing the exact coordinates of the object in the image.
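The article only says that each render gets an XML file with the object's coordinates; it doesn't name the schema. A common choice for this kind of labelling is the Pascal VOC layout, so the sketch below builds one such annotation with the standard library. The field names and image size are assumptions, not the studio's actual output.

```python
import xml.etree.ElementTree as ET

def make_annotation(filename, label, xmin, ymin, xmax, ymax,
                    width=1920, height=1080):
    """Build a Pascal-VOC-style XML annotation for one render.

    The bounding box (xmin, ymin, xmax, ymax) marks where the remote
    sits in the rendered image.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = label
    box = ET.SubElement(obj, "bndbox")
    for tag, value in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
        ET.SubElement(box, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = make_annotation("remote_a_desk_001.png", "remote_a",
                           640, 300, 1280, 780)
```

Because the script knows exactly where it placed the remote in the 3D scene, these coordinates come for free, which is the whole point of labelling during rendering instead of by hand.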
In the picture, you can see what a set of renders of one remote control looks like with different backgrounds and positions.
Our virtual studio immediately divides all the generated material into two folders: test and train. The train folder is used to teach the neural network, and the test folder (which receives every fifth render) becomes the base for the network's self-testing.
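The "every fifth render" rule gives a fixed 80/20 split. A minimal sketch of that split, with hypothetical file names:

```python
def split_renders(filenames):
    """Send every fifth render to the test set, the rest to train."""
    train, test = [], []
    for index, name in enumerate(filenames, start=1):
        (test if index % 5 == 0 else train).append(name)
    return train, test

# Hypothetical output of a render run.
renders = [f"render_{i:03d}.png" for i in range(1, 11)]
train, test = split_renders(renders)
# 10 renders -> 8 in train, 2 in test (render_005 and render_010)
```

Keeping the split deterministic means every remote and every background ends up represented in both folders, since renders are generated in a regular order.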
During training, the system periodically saves so-called checkpoints and checks its own progress; this is a standard part of how neural networks are trained. For this self-check, you need material similar to the training data, but not identical to it.
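The checkpoint idea can be shown without any ML framework: after each pass, evaluate on the held-out test material and "save" whenever the result improves. The loop below is a toy sketch of that logic; real frameworks write model weights to disk at each checkpoint, and the loss values here are invented for illustration.

```python
def train_with_checkpoints(test_losses):
    """Toy checkpointing loop: record each epoch where test loss improves.

    Real training would save the model's weights at these points; here
    we only track which epochs would have produced a checkpoint.
    """
    best = float("inf")
    checkpoints = []
    for epoch, loss in enumerate(test_losses, start=1):
        if loss < best:
            best = loss
            checkpoints.append(epoch)  # would write weights to disk here
    return checkpoints

# Loss improves, worsens at epoch 4, then recovers at epoch 5.
print(train_with_checkpoints([0.9, 0.7, 0.5, 0.6, 0.4]))  # [1, 2, 3, 5]
```

Because checkpoints are judged on the test folder rather than the train folder, they reveal whether the network generalizes instead of merely memorizing its training renders.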
This training method works for any objects a neural network needs to learn to recognize, and it's called training an artificial neural network on synthetic data.
Renders automate and significantly speed up the preparation of training data for the convolutional neural network. Without the Blender-based virtual studio, we would have to prepare all the material "by hand": take many photos of each remote control in different conditions, annotate each one separately, and manually distribute them into training and test sets. That would take a lot of resources, money, and time.
The training process depicted below is an example of how a neural network is created:
Neural network training is influenced by the quantity and diversity of the material, and by the quality of the images themselves. The quality of the images of the target object should match what the network will see in practice. In our project, the remotes will mostly be recognized from smartphone photos, so for training we use low-quality images alongside high-quality ones.
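One cheap way to get low-quality training material from the high-quality renders is to degrade them programmatically. The sketch below crudely lowers "image" resolution by dropping rows and columns; it stands in for real degradations such as blur, noise, or JPEG compression, and the pixel grid here is invented for illustration.

```python
def degrade(image, factor=2):
    """Crudely reduce resolution by keeping every `factor`-th row and column.

    A stand-in for realistic degradation (blur, noise, compression);
    the point is that the training set should include images as rough
    as the smartphone photos the network will face in production.
    """
    return [row[::factor] for row in image[::factor]]

# A toy 16x16 "image" of pixel values.
hi_res = [[r * 16 + c for c in range(16)] for r in range(16)]
lo_res = degrade(hi_res, factor=4)  # becomes a 4x4 image
```

Training on both the original renders and their degraded copies keeps the network from expecting studio-quality input it will never receive.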
After preparing the materials, we proceed to training the convolutional neural network, a topic we'll cover in our next article.
If you’re interested in object recognition, contact us. We’ll apply our web development expertise to help improve your business.