recognition, due to issues like gradient fading and limited training data.

d) Artificial Neural Networks

Artificial neural networks (ANNs), or connectionist systems, are computing systems inspired by biological neural networks. They learn tasks from examples rather than from explicit programming. ANNs are built from artificial neurons that exchange signals over connections (synapses), and the weights of these connections can be adjusted during learning. Neurons are typically organized in layers, and signals may pass through several layers before reaching the output layer. Although ANNs were originally intended to emulate the functions of the human brain, the focus shifted towards specific tasks, leading to techniques such as backpropagation. ANNs have been applied successfully in computer vision, speech recognition, machine translation, social network filtering, games, and medical diagnosis. While ANNs have far fewer neurons than the human brain (thousands to millions of units with millions of connections), they can outperform humans in certain tasks such as playing Go or recognizing faces. Their ability to process vast amounts of data and learn from examples makes them powerful tools in many applications.

e) Deep Neural Networks

A deep neural network (DNN) 18 is an artificial neural network with more than two layers between the input and output layers. Neurons, synapses, weights, biases, and activation functions are building blocks shared by all types of neural networks. DNNs can be trained to perform tasks in a way loosely analogous to the human brain: they recognize patterns, such as identifying dog breeds from images, by analyzing the probabilities associated with different outcomes. The term "deep" refers to the multiple layers of mathematical operations within the network. DNNs excel at modeling complex non-linear relationships, with each layer combining features from lower layers to represent objects as compositions of simpler primitives; they are particularly efficient at approximating sparse multivariate polynomials. Many DNN architectures exist, each with its own strengths in specific fields, but comparing them fairly requires standardized datasets.

DNNs are often feedforward networks, in which information flows from the input layer to the output layer. Weights between virtual neurons are initially assigned random numerical values and are then adjusted by training algorithms to improve pattern recognition (a minimal code sketch is given at the end of this subsection). Recurrent neural networks (RNNs), which allow data to flow in both directions, are employed in applications such as language modeling, while convolutional neural networks (CNNs) are used in computer vision tasks. CNNs have also found application in acoustic modeling for automatic speech recognition (ASR), among others. Some of the applications are:

1. Automatic speech recognition
2. Image recognition
3. Visual art processing
4. Natural language processing
5. Drug discovery and toxicology
6. Customer relationship management
7. Recommendation systems
8. Bioinformatics
9. Medical image analysis
10. Mobile advertising
11. Image restoration
12. Financial fraud detection
13. Relation to human cognitive and brain development
14. Commercial activity

18 Umberto Michelucci, "Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks", Apress, 2018.
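To make the feedforward idea above concrete, the following is a minimal, self-contained sketch (not taken from this article) of a small network in Python/NumPy: weights start as random numbers and are repeatedly adjusted by backpropagation until the network reproduces a simple non-linear pattern (XOR). The layer sizes, learning rate, and iteration count are illustrative choices only.

```python
# Minimal sketch: a small feedforward network with randomly initialised
# weights, trained by backpropagation on the XOR pattern. All hyperparameters
# here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic non-linear pattern a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases start as random (or zero) values, as described above.
W1 = rng.normal(size=(2, 8))   # input layer  -> hidden layer
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden layer -> output layer
b2 = np.zeros((1, 1))

lr = 0.5
for step in range(5000):
    # Forward pass: information flows from the input layer to the output layer.
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network outputs

    # Backward pass: gradients of the squared error with respect to each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the connection weights to improve the fit.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # typically approaches [0, 1, 1, 0]
```

Stacking additional hidden layers between input and output, while keeping the same forward/backward pattern, gives the "deep" networks discussed above.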
f) Natural Language Processing

Natural language processing (NLP) 19 is a branch of linguistics, computer science, and artificial intelligence that studies how computers and human language interact, with a focus on programming computers to process and analyze massive volumes of natural language data. The goal is to create a machine that can "understand" the contents of documents, including the subtle nuances of language used in different contexts. Once the information and insights have been accurately extracted from documents, the technology can classify and organize them. Speech recognition, natural language understanding, and natural language generation are among the complex tasks in natural language processing.

g) Common NLP Tasks

The most frequently investigated tasks in natural language processing are listed below; a small illustrative sketch follows the list. Some of these tasks have direct real-world applications, while others more often serve as subtasks that help solve larger problems. Although the tasks involved in natural language processing are interconnected, they can nevertheless be categorized for convenience.

1. Text and speech processing
2. Morphological analysis: lemmatization, stemming
3. Syntactic analysis: parsing
4. Lexical semantics: vocabulary extraction, word-sense disambiguation (WSD), entity linking
5. Relational semantics

19 Sowmya Vajjala, Bodhisattwa Majumder, Anuj Gupta & Harshit Surana, "Practical Natural Language Processing: A Comprehensive Guide to Building Real-World NLP Systems", O'Reilly Media, Inc., 1st Edition, 2020.
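As an illustration of the first few task categories above (text processing, morphological analysis, and a first step toward syntactic analysis), the following sketch uses the open-source NLTK library. NLTK is an assumed example toolkit rather than one named in this article, and the resource names passed to download() may vary slightly between NLTK versions.

```python
# Minimal sketch of a few common NLP tasks (tokenization, stemming,
# lemmatization, part-of-speech tagging) using the open-source NLTK library.
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Fetch the models/corpora these functions rely on (names may vary by version).
for resource in ("punkt", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(resource, quiet=True)

text = "The researchers were studying how machines understand languages."

# Text processing: split the raw string into word tokens.
tokens = word_tokenize(text)

# Morphological analysis: stemming (crude suffix stripping) versus
# lemmatization (dictionary-based reduction to a base form).
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
for tok in tokens:
    print(tok, stemmer.stem(tok), lemmatizer.lemmatize(tok.lower(), pos="v"))

# A first step toward syntactic analysis: part-of-speech tagging.
print(nltk.pos_tag(tokens))
```

Comparing the stemmer and lemmatizer columns for a word such as "studying" shows why the two morphological-analysis tasks are listed separately: stemming clips suffixes mechanically, while lemmatization maps the word to its dictionary base form.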
