
period when computer scientists struggled with a severe lack of government funding for AI research. Public interest in artificial intelligence fell during the AI winters.

A boom of AI (1980-1987)
• In 1980, AI returned from its winter hiatus with the "Expert System." Expert systems, programs that can make decisions like a human expert, were developed during this period.
• The American Association of Artificial Intelligence held its inaugural national conference at Stanford University in 1980.

The second AI winter (1987-1993)
• The second AI winter spanned the years 1987 to 1993. Once again, investors and the government ceased funding AI research because of excessive costs and ineffective results, even though expert systems such as XCON had been extremely cost-effective.

The emergence of intelligent agents (1993-2011)
• Year 1997: IBM's Deep Blue became the first computer to defeat a world chess champion, beating Garry Kasparov in 1997.
• Year 2002: AI entered the home in the form of the Roomba, a robotic vacuum cleaner.
• Year 2006: AI made its way into the business world; companies such as Facebook, Twitter, and Netflix began utilizing AI.

Deep learning, big data, and artificial general intelligence (2011-present)
• Year 2011: IBM's Watson, a computer program that had to answer challenging questions and riddles, won the quiz show Jeopardy. Watson demonstrated that it could comprehend natural language and quickly find answers to difficult problems.
• Year 2012: Google introduced the "Google Now" feature for Android, which can predict information for the user.
• Year 2014: The chatbot "Eugene Goostman" won the famed "Turing test" competition in 2014.
• Year 2018: IBM's "Project Debater" performed well in a debate against two master debaters on complex topics. In a demonstration, Google's artificial intelligence program "Duplex" booked a hairdressing appointment over the phone, while the person on the other end of the line was unaware that she was speaking with a machine.
• Year 2019: Deep learning made further advances in 2019 with techniques such as unsupervised learning and reinforcement learning. Big data, which entails the analysis of enormous datasets, remained an essential part of AI, and researchers continued to work towards artificial general intelligence (AGI), that is, AI capable of carrying out any intellectual task that a human can.
• Year 2020: Deep learning, which entails training artificial neural networks on large amounts of data, continued to advance with the creation of new models and methods. One significant development in 2020 was the rise of self-supervised learning, a technique that enables neural networks to learn from data without explicit labels or supervision. Big data remained a crucial part of AI, with the growing availability of data fueling the field's innovation. Another theme in 2020 was the development of federated learning, a technique that enables several parties to cooperatively train machine learning models without sharing their data (a minimal sketch is given after this list). Artificial general intelligence (AGI) remained researchers' long-term objective, though progress was being made slowly.
• Year 2021: Deep learning continued to advance with the creation of new models and methods. One noteworthy development in 2021 was the growing application of transformer models, such as GPT-3, to a wide range of natural language processing tasks. Big data remained a crucial part of AI in 2021, with the field's innovation driven by the expanding availability of data. One theme in 2021 was the application of machine learning in industries such as banking and healthcare, where analyzing massive datasets can improve decision-making and patient outcomes. Researchers' long-term objective remained the development of artificial general intelligence (AGI).
• Year 2022: Deep learning is anticipated to continue to evolve with the development of new models and methods, such as improvements in self-supervised learning and the use of attention mechanisms. Big data is anticipated to remain a vital part of AI in 2022, with the field's ongoing innovation fueled by the expanding availability of data. Researchers' long-term objective is still anticipated to be the development of artificial general intelligence (AGI). Although achieving AGI remains a formidable challenge, ongoing research in fields such as cognitive architectures, reinforcement learning, and explainable AI is expected to advance the discipline. There will also likely continue to be an emphasis on ensuring that AI technology develops responsibly and benefits society, with attention to concerns such as bias and fairness.
• Year 2023: Based on current trends and prior developments, deep learning, big data, and artificial general intelligence are projected to remain significant areas of AI research and development in 2023.
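The federated learning idea mentioned under Year 2020 can be illustrated with a short sketch. The Python code below is a minimal, illustrative implementation of federated averaging under simplifying assumptions: a toy linear-regression model, simulated clients, and plain weight averaging. The function names (local_update, federated_round) and the synthetic data are hypothetical, introduced only for this example, and are not drawn from the article or any particular framework.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few steps of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

def federated_round(global_weights, clients):
    """One communication round: clients train locally, the server averages weights."""
    local_models = [local_update(global_weights, X, y) for X, y in clients]
    # Only model weights travel to the server; raw data never leaves a client.
    return np.mean(local_models, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    # Three simulated clients, each holding its own private dataset.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(100, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=100)
        clients.append((X, y))

    weights = np.zeros(2)
    for _ in range(20):
        weights = federated_round(weights, clients)
    print("learned weights:", weights)  # should be close to [2.0, -1.0]
```

Real deployments add elements such as secure aggregation, client sampling, and weighting by dataset size; the sketch keeps only the core idea that data stays local while model updates are averaged centrally.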
