Global Journal of Computer Science and Technology, D: Neural & Artificial Intelligence, Volume 22 Issue 1

The neural network training process is iterative. At the end of training, the network's performance is analyzed: the results of this analysis indicate whether the available data is sufficient. Training data can be found already labeled in public archives or purchased. Alternatively, it is necessary to generate a synthetic data set and label it. Each type of application to be developed requires its own data set, complete with all the characteristics needed for a correct evaluation.

4) Network training and validation: training is the phase in which a neural network must acquire the knowledge to interpret specific input data. In this phase, it is necessary to provide examples as input/output pairs through which the network learns to predict the expected output. The network is a mathematical model whose output is regulated by the weights of the signals received by each neuron. During the training phase, these weights are progressively calibrated so that the network's output for a given input approaches the desired one more and more closely. This phase is implemented by comparing the initial classification with what is suggested by learning. An algorithm called backpropagation is used to train neural networks. It compares the result obtained from the network with the desired output and uses the difference between the two to adjust the weights of the connections between the network layers, starting from the output layer. It then proceeds backward, modifying the weights of the hidden layers and finally those of the input layer. To do this, it uses a cost (loss) function appropriate to the problem to be solved (a minimal training-loop sketch is given after this list). Part of the prepared data is used for the training phase, while the data for the test and validation phases are kept separate. Since pre-trained networks are available, they can be used to classify new objects through the transfer learning technique (see the transfer-learning sketch below).

5) Inference: the process of evaluating new images using a neural network to make decisions is called inference. This step can also collect additional test data to be used as training data for future iterations.

There is a wide variety of deep neural networks (DNNs). Convolutional neural networks (CNNs), or deep convolutional neural networks (DCNNs), are the types most commonly used to identify patterns in images and video. The name "convolutional neural network" indicates that the network employs a mathematical operation called convolution in place of general matrix multiplication in at least one of its layers [4]. CNNs evolved from traditional artificial neural networks by applying a connectivity pattern between neurons that resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap so that together they cover the entire visible area. CNNs are regularized versions of multilayer perceptrons. Multilayer perceptrons are usually fully connected networks: each neuron in one layer is connected to all neurons in the next layer. Deep convolutional neural networks mainly focus on object detection, image classification, and recommendation systems, and are sometimes used for natural language processing.
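To make the training loop of point 4 concrete, the following minimal sketch compares the network output with the desired output through a cost function and lets backpropagation adjust the weights from the output layer backward. PyTorch is used only for illustration; the framework, model, data, and hyperparameters are assumptions of this example and are not prescribed by the text.

    import torch
    import torch.nn as nn

    # Illustrative two-layer network; the sizes are arbitrary assumptions.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
    loss_fn = nn.CrossEntropyLoss()              # cost function chosen for a classification problem
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Dummy input/output pairs standing in for the prepared training data.
    inputs = torch.randn(16, 64)
    targets = torch.randint(0, 10, (16,))

    for epoch in range(100):                     # training is iterative
        optimizer.zero_grad()
        outputs = model(inputs)                  # forward pass
        loss = loss_fn(outputs, targets)         # compare the obtained output with the desired one
        loss.backward()                          # backpropagation: gradients flow from the output layer backward
        optimizer.step()                         # the weights are progressively calibrated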
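The transfer learning technique mentioned in point 4 can be sketched, assuming a recent torchvision release with a ResNet-18 pre-trained on ImageNet, by freezing the pre-trained weights and replacing only the final classification layer; the number of new classes is an arbitrary example.

    import torch.nn as nn
    from torchvision import models

    # Load a network pre-trained on ImageNet (assumed available via torchvision >= 0.13).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained layers so that only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer to classify the new objects (here, 5 classes).
    model.fc = nn.Linear(model.fc.in_features, 5)
    # Only model.fc.parameters() are then passed to the optimizer during training.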
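The statement that a CNN employs convolution in place of general matrix multiplication in at least one of its layers [4] can be illustrated by contrasting a fully connected layer with a convolutional layer on the same input; the image and layer sizes below are purely illustrative.

    import torch
    import torch.nn as nn

    image = torch.randn(1, 1, 28, 28)            # one grey-scale 28x28 image

    # Fully connected layer: a general matrix multiplication over all 784 pixels.
    fc = nn.Linear(28 * 28, 10)
    out_fc = fc(image.flatten(1))                # shape: (1, 10)

    # Convolutional layer: a small learnable 3x3 kernel shared across the whole image,
    # mimicking the local receptive fields of the visual cortex.
    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
    out_conv = conv(image)                       # shape: (1, 8, 28, 28)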
Before CNNs were available to identify objects in images, it was necessary to resort to manual feature extraction methods, which were error-prone and took a considerable amount of time. CNNs are characterized by their ability to optimize filters (or kernels) through machine learning, which differentiates them from traditional algorithms in which filters must be designed by hand. A feature common to all deep neural network architectures, including CNNs, is their high complexity due to the large number of internal parameters. For this reason, numerous techniques have been developed to facilitate the design and the understanding of the internal workings of a network [5]. The lack of interpretability and transparency of neural networks, and the absence of explanations for the decisions they take, can compromise confidence in their applicability. These problems are compounded by the large datasets required to train most deep learning models. The possibility of understanding a model's functioning therefore allows one to overcome them. Through deep learning visualization we can understand how deep learning models make decisions and what representations they have learned, and in this way we regain confidence in the model [6].

II. Data Visualization

Visualization of data represents another crucial phase in the design of neural networks. Designers want to visualize deep learning in order to understand how deep learning models make decisions and what representations they have learned, so as to make the model as reliable as possible [7]. This notion of a general understanding of the model is called interpretability or explicability. If we consider the training process of a DNN, its visualization facilitates this delicate design phase. When training neural networks, we need to optimize the parameters to minimize the loss function through gradient descent. To ensure that the loss decreases over time, we need to monitor the entire training cycle and the training and test losses over the "epochs." Many visualization techniques are limited to showing whether a model is improving during its training iterations, rather than how it is training and why it makes certain decisions. For example, Martin Becker et al. developed a tool based on
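As a minimal illustration of this kind of monitoring, the sketch below records the training loss at every epoch and plots the curve to check that it decreases over time. The toy model and synthetic data are purely illustrative assumptions; in practice the validation loss would be tracked alongside the training loss.

    import matplotlib.pyplot as plt
    import torch
    import torch.nn as nn

    # Tiny illustrative model and synthetic data; stand-ins for a real training set.
    model = nn.Linear(10, 1)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    x, y = torch.randn(100, 10), torch.randn(100, 1)

    losses = []
    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())               # record the loss at every epoch

    # Plotting the curve shows whether the loss decreases over the epochs.
    plt.plot(losses)
    plt.xlabel("epoch")
    plt.ylabel("training loss")
    plt.show()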
