Global Journal of Computer Science and Technology, D: Neural & Artificial Intelligence, Volume 22 Issue 1
© 2022 Global Journals. Neural Network Design using a Virtual Reality Platform

medium to connect the IVR model. The model components are thus: Immersive Virtual Reality (IVR), Artificial Neural Networks (ANN), the Internet, and a database. The authors used IVR to obtain a system design with realistic visualization; it is the technology that allows people to immerse themselves in an artificial environment. The ANN is the architecture used to simulate the image and express the user's emotion in the virtual environment (VE). Users' voices, videos, and images are the input used to train the ANN; the same data also serve a security purpose, verifying users' identities. The Internet connects people spread over different locations into a single point of contact on the web, and the ANN is trained over the Internet with a sample of users. The Internet also provides scalability to a large number of users and ensures reliable transport of large amounts of multimedia data. The database must hold a large number of high-quality video samples and facial-expression records in order to store the participants' video and audio data. The model requires two implementation phases. In the first, the user enters their data through the interface and then uploads their image and voice; these data are stored in the audiovisual database so that the interface can associate the user with the model. Later, when the user enters the system, the interface checks their identity against the stored data. The interface itself must be designed using 3D graphics software [38]. In the second phase, the IVR provides the chosen environment to the users participating in the network, and the ANN provides simulated images of the users. As noted above, there is a growing need for non-experts to understand how deep learning works.
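The two-phase enrol-and-verify flow of the first implementation phase can be sketched as follows. This is a minimal illustration, not the authors' implementation: all names are hypothetical, and the plain feature vectors stand in for embeddings that a real system would derive from the stored audio and video with the trained ANN.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class AudiovisualStore:
    """In-memory stand-in for the audiovisual database (hypothetical)."""

    def __init__(self):
        self._profiles = {}

    def enroll(self, user_id, embedding):
        # Phase 1: the interface stores the user's image/voice features.
        self._profiles[user_id] = embedding

    def verify(self, user_id, embedding, threshold=0.9):
        # Phase 2: on entry, compare live features with the stored ones.
        stored = self._profiles.get(user_id)
        if stored is None:
            return False
        return cosine_similarity(stored, embedding) >= threshold

store = AudiovisualStore()
store.enroll("alice", [0.1, 0.9, 0.3])
print(store.verify("alice", [0.1, 0.9, 0.3]))  # True: features match
print(store.verify("alice", [0.9, 0.1, 0.0]))  # False: features differ
```

The threshold of 0.9 is an arbitrary placeholder; a deployed system would calibrate it against false-accept and false-reject rates.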
Because of their complexity, neural networks are often seen as black boxes, and new techniques are being explored to make their behaviour as accessible as possible. In this context, Meissler et al. [39] developed a technique for visualizing convolutional networks in virtual reality. Their solution is aimed mainly at novice designers and intends to give them a basic understanding of how the networks function. The solutions currently available are usable essentially by developers of deep learning systems, by those who already have detailed knowledge of deep learning processes, or by those with a general interest in interactive visualization who are not drawn to the representational capabilities of virtual reality. Their solution builds on the advantages that virtual reality offers for visualization over traditional 2D or 3D tools on a standard desktop screen, exploring the graphic effects produced by immersion. The visualization is intended to illustrate the structure and functionality of the network: within the Unity platform, the user can select different inputs, modify the network structure by inserting or removing a layer, and obtain the resulting CNN classification. The network model used is LeNet-5 [40], whose structure is as follows:
− 2 convolutional layers.
− 3 fully connected layers.
− 2 average pooling layers.
− Tanh as the activation function for the hidden layers.
− Softmax as the activation function for the output layer.
− Cross-entropy as the cost function.
− Gradient descent as the optimizer.
− About 60,000 trainable parameters.
The model was defined in Keras [41] using Python. The convolutional and pooling layers were displayed as boxes whose dimensions reflect the layer's size; this representation lets the user see the structure of the model and how the dimensions of the data change.
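The figure of roughly 60,000 trainable parameters can be checked with a back-of-the-envelope count, assuming the classic LeNet-5 dimensions from LeCun et al. (6 and 16 filters of 5×5, 5×5 feature maps entering the dense part, and fully connected sizes 120, 84, 10); those sizes are not stated in the text above and are taken here as assumptions.

```python
def conv_params(filters, kernel_h, kernel_w, in_channels):
    # Each filter has kernel_h * kernel_w * in_channels weights plus one bias.
    return filters * (kernel_h * kernel_w * in_channels + 1)

def dense_params(in_units, out_units):
    # Full weight matrix plus one bias per output unit.
    return in_units * out_units + out_units

total = (
    conv_params(6, 5, 5, 1)          # conv1: 156
    + conv_params(16, 5, 5, 6)       # conv2: 2,416
    + dense_params(16 * 5 * 5, 120)  # fc1: 48,120
    + dense_params(120, 84)          # fc2: 10,164
    + dense_params(84, 10)           # output: 850
)  # the two average-pooling layers contribute no trainable parameters

print(total)  # 61706, i.e. the "about 60,000" quoted above
```

Almost all of the parameters sit in the first fully connected layer, which is why the convolutional part of the network is so cheap relative to the dense part.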
The feature maps of each convolutional and pooling layer are rendered as 2D images with matplotlib using the gray colormap. Experiments carried out with participants unfamiliar with the topic confirmed the model's validity; some declared that the application was completely self-explanatory. The experiments aimed to demonstrate that virtual reality, compared with traditional means of learning, does not distract people but on the contrary lets them focus more on the subject, and for longer, because of the involvement produced by the virtual environment.

VI. Methodology

Our solution is based on a VR platform where users can develop deep convolutional neural network models for image classification. This approach makes it possible to inspect the components of a network and becomes a good training tool for professionals who want to design deep neural networks. In our work, we want to show the opportunity of using the Unity platform (2019) to create a virtual reality environment; we believe it offers advantages not found in the various 2D and 3D desktop-screen visualization systems. Most existing platforms do not provide inexperienced users with simple interfaces through which the network architecture can be varied and the model parameters adjusted. Our VR platform, for example, allows the user to define the network architecture by specifying the sequence of layers; the user can interactively verify its operation and, based on the results obtained, decide whether to modify it. We have used this environment to verify the functioning and possibly improve the prediction level of a