
visualization approaches. Applications range from the entertainment industry to the rehabilitation field. In this scenario, one of the first applications was that of Belič [32], who developed a methodology in which the two technologies are combined to model complex polycrystalline materials. The neural network is used to generate the grain of the material, which makes manipulating the grain easier and keeps its representation very compact. This is the preliminary phase in building the polycrystalline-material model for virtual reality. The models perform various system optimization or control activities by simulating reality. To build a realistic model of the observed material, the shape of the grains represented by the neural network must match the observed material as closely as possible. The grains are first generated; then, based on the properties of the observed material, a grain-shape optimization process is employed to bring them closer to the observed sample. With the VR approach, information is obtained that is not accessible with traditional characterization and analysis techniques. In VR, the model must be accompanied by information on the mechanical and electrical properties and on the shape of the grains. The use of virtual reality made it possible to obtain additional information such as life expectancy, the diffusion process, or the cracking of the material. VR is used to predict anomalies produced in the material, allowing the user to view the discrepancies detected with respect to the expected model. Finally, the virtual environment corrected the model to provide better results. The ultimate goal is to obtain a virtual grain roughness as close as possible to the desired one.
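Belič's generate-then-optimize loop can be illustrated with a minimal, hypothetical sketch. Everything named here is an assumption for illustration only: the grain is reduced to a low-order Fourier shape descriptor, `generate_grain` stands in for the neural-network generator of [32], and the fit to the observed sample is refined by plain random search rather than whatever optimizer the original work used.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_grain(params: np.ndarray) -> np.ndarray:
    """Stand-in for the NN grain generator: map a compact parameter
    vector [r0, a1, b1, a2, b2, ...] to radii sampled around the grain."""
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    harmonics = (len(params) - 1) // 2
    return params[0] + sum(
        params[2 * k - 1] * np.cos(k * angles) + params[2 * k] * np.sin(k * angles)
        for k in range(1, harmonics + 1)
    )

def loss(params: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared distance between generated and observed descriptors."""
    return float(np.mean((generate_grain(params) - observed) ** 2))

# Placeholder target standing in for a descriptor measured on the sample.
observed = generate_grain(np.array([1.0, 0.15, -0.05, 0.08, 0.02]))

# Random-search refinement: keep a perturbation only if it fits better.
params = np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + rng.normal(0.0, 0.1, 5)
for _ in range(2000):
    candidate = params + rng.normal(0.0, 0.05, params.shape)
    if loss(candidate, observed) < loss(params, observed):
        params = candidate
```

In the actual methodology the grain is encoded by the network itself, so the corresponding optimization would presumably adjust the network's parameters rather than an explicit Fourier vector.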
An interesting experience was developed by Nino et al. [33]: a mobile VR system that couples a neural network with an IMU gesture controller to simulate the feeling of embodiment [34] and presence. The application aims to demonstrate that combining a sound and motion controller with haptic feedback can improve immersion on mobile interfaces. The interface recognizes the user's gesture (vocal sound and movement) and displays it during execution. The visualization is managed through the gesture controller to improve presence and embodiment. To provide the user with a smooth display of the gesture produced, the authors created a set of reference gestures representing the best possible execution of each gesture. To build this set of "ideal" gestures, a motion capture system was used to record the movement and voice produced by a shintaido expert performing each gesture [35]. Each ideal gesture is played back for the user while they perform the corresponding gesture. Through dynamic control of the playback speed, the user has a degree of control over the avatar's movement. When the user has executed a movement, the captured gesture is classified and the appropriate response is triggered. The authors used a multilayer neural network (ANN) for speech and motion recognition, and the network architecture and weights were imported into the Unity3D platform. Without claiming to replace a dedicated tracking system, this represents a low-cost method of controlling a subject's gestural execution.
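The paper does not specify how the architecture and weights were brought into Unity3D. One common route, sketched below purely as an assumption, is to train the network in an external framework and export it to ONNX, a format that Unity's Barracuda package can import; `GestureNet`, its layer sizes, and the 9-dimensional feature vector are all illustrative.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Illustrative multilayer perceptron standing in for the paper's ANN."""
    def __init__(self, n_features: int = 9, n_classes: int = 8):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),  # one logit per gesture class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

model = GestureNet()
# ... train on recorded sound/motion features, then export for Unity ...
model.eval()
dummy = torch.randn(1, 9)  # one frame of assumed 9-D motion/audio features
torch.onnx.export(model, dummy, "gesture_net.onnx",
                  input_names=["features"], output_names=["logits"])
```

Once imported, the model can be evaluated per frame inside Unity against the live controller features to classify the gesture being performed.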
In the field of hand gesture recognition, Fu and Yu [36] developed an application in which a neural network classifies gestures and a Unity scene, built from the data collected by the IMU, evaluates recognition accuracy in real time. Hand gestures are an alternative to other natural methods of communication: recognizing the movement and pose of the hands allows an observer to understand the intentions of the person performing them. Hand gesture recognition is used in many applications, such as sign language recognition, sign recognition for controlling robots, and augmented reality. The authors used the IMU as the data input source and structured the application into the following phases:

− Collecting training and testing data;
− Extracting features from IMU inputs;
− Building neural network-based classifier models;
− Building test visualization in Unity.

The VRduino is used as the data input source. It contains an IMU that provides 9-degree-of-freedom sensor readings from a gyroscope, an accelerometer, and a magnetometer, and it is connected to the PC via USB through a Teensy. The sensor data is recorded on the PC as a training data set and is also used directly in real-time tests. The user can hold the VRduino in hand and perform any hand gesture pattern. A 3D LSTM convolution was used as the neural network to better capture gestures and obtain high recognition accuracy (a hedged sketch of this classifier phase follows below). To determine which features work best with the model, the authors experimented with several combinations of quaternion and raw sensor inputs; the best-performing combination was quaternion, gyroscope, and accelerometer data. IMU test data from the VRduino is transferred directly into Unity via the serial port to visualize the classification. The tests found that the classifier can predict the user's gesture with reasonable accuracy in real time.
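As an illustration of the classifier phase, the sketch below trains a recurrent network on windows of fused quaternion (4), gyroscope (3), and accelerometer (3) readings, i.e. 10 features per timestep. The exact "3D LSTM convolution" architecture is not detailed in the text, so a plain stacked LSTM stands in for it; the window length, layer sizes, number of gesture classes, and the placeholder arrays are all assumptions.

```python
import numpy as np
import tensorflow as tf

WINDOW, FEATURES, CLASSES = 60, 10, 8  # assumed: ~1 s windows, 10-D input

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),   # quaternion + gyro + accel
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholders for the recorded training set: each sample is one gesture
# window of fused IMU features with an integer gesture label.
X = np.random.randn(256, WINDOW, FEATURES).astype("float32")
y = np.random.randint(0, CLASSES, size=256)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```

At test time, each incoming window would be classified the same way and the predicted label handed to the Unity scene, which in the paper receives the IMU stream over the serial port.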
Sait and Raza [37] presented a prototype based on an innovative hybrid technology combining VR and ANN models to build a virtual environment in which users can meet and chat together and, at the same time, feel as if they are in a real environment. From a conceptual point of view, the model starts from real-world situations and then, through the IVR and ANN model, creates a virtual world in which the participants are the managers of the model, using the Internet as a