
objects, while connectives and quantifiers are used for complex and quantified sentences. Building first-order logic knowledge bases requires careful domain analysis, vocabulary selection, and encoding of axioms to support the intended inferences. The inference problem in first-order proofs involves an instantiation phase, which can be expedited by using unification to find suitable variable substitutions. Modus Ponens is a fundamental rule of inference stating that if P and P → Q are established, we can infer Q; Generalized Modus Ponens extends this rule with unification and is an effective inference method for first-order logic. Forward chaining is used in production systems and deductive databases; it executes in polynomial time and is complete for Datalog programs. Backward chaining is employed in logic programming languages such as Prolog, which exploit compiler technology for faster inference, though it can encounter infinite loops, which can be resolved by memoization. (A unification-based forward chainer is sketched in code after the Planning subsection below.)

d) Knowledge Representation

A general-purpose ontology is necessary for large-scale knowledge representation [25], to organize and connect the many separate domains of information. A general-purpose ontology should, in principle, be able to handle any domain and cover a wide range of knowledge. In AI, an ontology serves as a common language among researchers: it offers machine-interpretable definitions of fundamental concepts and the connections between them. With the aid of ontology-based AI, a system can draw human-like inferences by using those concepts and the connections between them. (A small taxonomy sketch appears after the Planning subsection.)

e) Planning

Planning systems employ first-order or propositional representations of states and actions to address problems effectively. The STRIPS [26] language describes actions in terms of preconditions and effects, while initial and goal states are represented as conjunctions of positive literals. ADL (Action Description Language) [27] is an extension of STRIPS that allows disjunctions, negation, and quantifiers, enabling robot-specific planning and scheduling. State-space search can be conducted in the forward (progression) or backward (regression) direction; a progression-search sketch follows below. Heuristics can be generated by assuming subgoal independence and by using various relaxations of the planning problem. Partial-Order Planning (POP) [28] algorithms maintain a partial ordering of actions and explore the space of plans without committing to a fully ordered sequence of actions, making them suitable for divide-and-conquer strategies.
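As a concrete illustration of the unification and forward-chaining ideas discussed at the start of this section, here is a minimal, self-contained Python sketch. The representation is an assumption made for illustration (variables are strings beginning with '?', atoms are tuples, rules are Datalog-style with ground facts); the occurs-check and other refinements of production systems are omitted.

    # Assumed conventions for this sketch: a variable is a string starting
    # with '?'; an atom is a tuple such as ('Parent', '?x', '?y'). Facts are
    # ground atoms, so no occurs-check is needed.

    def unify(x, y, subst):
        """Return a substitution dict making x and y equal, or None on failure."""
        if subst is None:
            return None
        if x == y:
            return subst
        if isinstance(x, str) and x.startswith('?'):
            return unify_var(x, y, subst)
        if isinstance(y, str) and y.startswith('?'):
            return unify_var(y, x, subst)
        if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            for xi, yi in zip(x, y):
                subst = unify(xi, yi, subst)
            return subst
        return None

    def unify_var(var, val, subst):
        if var in subst:
            return unify(subst[var], val, subst)
        return {**subst, var: val}

    def substitute(term, subst):
        """Apply a substitution, following chains of variable bindings."""
        if isinstance(term, str) and term in subst:
            return substitute(subst[term], subst)
        if isinstance(term, tuple):
            return tuple(substitute(t, subst) for t in term)
        return term

    def match_all(premises, facts, subst):
        """Yield every substitution satisfying all premises against known facts."""
        if not premises:
            yield subst
            return
        for fact in facts:
            s = unify(premises[0], fact, subst)
            if s is not None:
                yield from match_all(premises[1:], facts, s)

    def forward_chain(facts, rules):
        """Apply Generalized Modus Ponens to a fixed point (complete for Datalog)."""
        facts = set(facts)
        while True:
            new = set()
            for premises, conclusion in rules:
                for subst in match_all(premises, facts, {}):
                    derived = substitute(conclusion, subst)
                    if derived not in facts:
                        new.add(derived)
            if not new:
                return facts
            facts |= new

    # Example: Parent(Tom, Bob) and Parent(Bob, Ann) plus the grandparent rule.
    rules = [((('Parent', '?x', '?y'), ('Parent', '?y', '?z')),
              ('Grandparent', '?x', '?z'))]
    print(forward_chain({('Parent', 'Tom', 'Bob'), ('Parent', 'Bob', 'Ann')}, rules))
    # ... derives ('Grandparent', 'Tom', 'Ann')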
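The taxonomic backbone of an ontology, as described in subsection d), can likewise be sketched with a toy is-a hierarchy and a subsumption query. The mini-ontology below is hypothetical and assumes single inheritance.

    # Toy ontology: categories linked by is-a edges; subsumption is answered
    # by walking the transitive closure of the hierarchy.
    is_a = {
        'Dog': 'Mammal', 'Cat': 'Mammal', 'Mammal': 'Animal',
        'Animal': 'PhysicalObject', 'PhysicalObject': 'Thing',
    }

    def ancestors(category):
        """All categories that subsume `category` via the is-a hierarchy."""
        result = []
        while category in is_a:
            category = is_a[category]
            result.append(category)
        return result

    def subsumes(general, specific):
        return general == specific or general in ancestors(specific)

    print(ancestors('Dog'))           # ['Mammal', 'Animal', 'PhysicalObject', 'Thing']
    print(subsumes('Animal', 'Cat'))  # True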
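Finally for this group, here is a hedged sketch of the STRIPS-style representation and forward (progression) search from subsection e): ground actions with precondition, add, and delete sets, searched breadth-first over states. The toy actions and literal names are illustrative inventions, not the STRIPS language itself.

    from collections import deque

    # States are frozensets of positive literals (plain strings); an action
    # carries precondition, add, and delete sets, in the STRIPS spirit.
    class Action:
        def __init__(self, name, pre, add, delete):
            self.name, self.pre = name, set(pre)
            self.add, self.delete = set(add), set(delete)

        def applicable(self, state):
            return self.pre <= state           # all preconditions hold

        def apply(self, state):
            return frozenset((state - self.delete) | self.add)

    def progression_search(initial, goal, actions):
        """Forward state-space (progression) search, breadth-first."""
        start = frozenset(initial)
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, plan = frontier.popleft()
            if set(goal) <= state:             # goal: conjunction of positive literals
                return plan
            for a in actions:
                if a.applicable(state):
                    nxt = a.apply(state)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, plan + [a.name]))
        return None

    # Hypothetical two-action blocks-world-style instance:
    actions = [
        Action('pick-A', pre={'on-table-A', 'hand-empty'},
               add={'holding-A'}, delete={'on-table-A', 'hand-empty'}),
        Action('stack-A-on-B', pre={'holding-A', 'clear-B'},
               add={'on-A-B', 'hand-empty'}, delete={'holding-A', 'clear-B'}),
    ]
    print(progression_search({'on-table-A', 'clear-B', 'hand-empty'},
                             {'on-A-B'}, actions))
    # ['pick-A', 'stack-A-on-B']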
f) Probabilistic Reasoning over Time

Bayesian Networks [29] are probabilistic graphical models that represent variables and their conditional dependencies using a directed acyclic graph. Each node has a conditional distribution given its parents, allowing conditional-independence relationships to be depicted precisely. Hybrid Bayesian Networks combine discrete and continuous variables and use various canonical distributions. Exact inference can be performed in linear time in singly connected networks, but it is intractable in the general case. (A small inference-by-enumeration sketch appears at the end of this section.)

Relational Probability Models offer a rich representation language for structured statistical models, combining probability theory with ideas from first-order logic. Representational constraints ensure a well-defined probability distribution that can be represented by an equivalent Bayesian network. Truth-functional systems have been used as alternative reasoning systems but have limitations for reasoning under uncertainty.

g) Decision Process

Markov Decision Processes (MDPs) [30] are stochastic models for sequential decision problems in uncertain situations. An MDP has a transition model that specifies the probabilistic outcomes of actions and a reward function for each state. The next state depends only on the current state, independent of the past. The utility of a state sequence is the sum of the rewards received, possibly discounted over time. An MDP's solution is a policy that determines the agent's choice for each possible state. The value iteration algorithm iteratively solves equations to compute state utilities (sketched in code below), while policy iteration alternates between calculating utilities under the current policy and refining that policy.

h) Making Complex Decisions

Partially Observable Markov Decision Processes (POMDPs) [31] combine Markov Decision Processes with hidden Markov models to capture system dynamics whose states cannot be observed directly. Decision-theoretic agents for POMDP environments can be built using dynamic decision networks. Game theory is a mathematical branch used to model strategic interactions between rational agents in predefined contexts. Games are often resolved by finding a Nash equilibrium, a strategy profile in which no player has an incentive to unilaterally change their approach. Mechanisms can be employed to define rules under which agents maximize global utility while remaining individually rational. These mechanisms
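To make the Bayesian-network material in subsection f) concrete, here is a minimal inference-by-enumeration sketch in Python. The network (the classic rain/sprinkler/wet-grass example) and its numbers are illustrative assumptions, not taken from this paper; each CPT entry gives P(variable = True) for one combination of parent values.

    import itertools

    parents = {'Rain': (), 'Sprinkler': ('Rain',), 'GrassWet': ('Sprinkler', 'Rain')}
    cpt = {
        'Rain':      {(): 0.2},
        'Sprinkler': {(True,): 0.01, (False,): 0.4},
        'GrassWet':  {(True, True): 0.99, (True, False): 0.9,
                      (False, True): 0.8, (False, False): 0.0},
    }
    variables = ['Rain', 'Sprinkler', 'GrassWet']

    def joint(assign):
        """P(full assignment) = product of each node's probability given parents."""
        p = 1.0
        for v in variables:
            pv = cpt[v][tuple(assign[u] for u in parents[v])]
            p *= pv if assign[v] else 1.0 - pv
        return p

    def query(var, evidence):
        """P(var = True | evidence) by enumerating all full assignments."""
        num = den = 0.0
        for values in itertools.product([True, False], repeat=len(variables)):
            assign = dict(zip(variables, values))
            if all(assign[e] == val for e, val in evidence.items()):
                p = joint(assign)
                den += p
                if assign[var]:
                    num += p
        return num / den

    print(query('Rain', {'GrassWet': True}))   # about 0.36 for these numbers

Enumeration is exponential in the number of variables, which is why the exact algorithms mentioned above matter only for singly connected networks; for general networks, approximate methods are used instead.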
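The value iteration algorithm mentioned in subsection g) can be sketched in a few lines. The two-state MDP below, its rewards, and the discount factor are illustrative assumptions; the update implemented is the standard Bellman update U(s) ← R(s) + γ · max over a of Σ P(s'|s,a) · U(s').

    # Transition model T[s][a] is a list of (probability, next_state) pairs;
    # R is a per-state reward. All numbers are illustrative.
    GAMMA, EPS = 0.9, 1e-6

    T = {
        's0': {'stay': [(1.0, 's0')], 'go': [(0.8, 's1'), (0.2, 's0')]},
        's1': {'stay': [(1.0, 's1')], 'go': [(1.0, 's0')]},
    }
    R = {'s0': 0.0, 's1': 1.0}

    def value_iteration(T, R, gamma=GAMMA, eps=EPS):
        U = {s: 0.0 for s in T}
        while True:
            delta, newU = 0.0, {}
            for s in T:
                # Bellman update over the best available action.
                newU[s] = R[s] + gamma * max(
                    sum(p * U[s2] for p, s2 in outcomes)
                    for outcomes in T[s].values())
                delta = max(delta, abs(newU[s] - U[s]))
            U = newU
            if delta < eps * (1 - gamma) / gamma:
                return U

    def best_policy(T, U, gamma=GAMMA):
        return {s: max(T[s], key=lambda a: sum(p * U[s2] for p, s2 in T[s][a]))
                for s in T}

    U = value_iteration(T, R)
    print(U, best_policy(T, U))   # 's0' should prefer 'go', 's1' 'stay'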
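Finally, a sketch of the Nash-equilibrium idea from subsection h): enumerate the pure strategy profiles of a two-player normal-form game and keep those from which neither player can gain by deviating alone. The Prisoner's Dilemma payoffs used are the standard textbook values and serve only as an illustration.

    import itertools

    strategies = ['cooperate', 'defect']
    payoff = {   # (row strategy, column strategy) -> (row payoff, column payoff)
        ('cooperate', 'cooperate'): (3, 3),
        ('cooperate', 'defect'):    (0, 5),
        ('defect',    'cooperate'): (5, 0),
        ('defect',    'defect'):    (1, 1),
    }

    def is_nash(r, c):
        """True if neither player can improve by a unilateral deviation."""
        ur, uc = payoff[(r, c)]
        row_ok = all(payoff[(r2, c)][0] <= ur for r2 in strategies)
        col_ok = all(payoff[(r, c2)][1] <= uc for c2 in strategies)
        return row_ok and col_ok

    equilibria = [(r, c) for r, c in itertools.product(strategies, strategies)
                  if is_nash(r, c)]
    print(equilibria)   # [('defect', 'defect')]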

Notes

[25] Saroj Kaushik, “Artificial Intelligence”, Cengage Learning India, 2011.
[26] https://www.aiforanyone.org/glossary/stanford-research-institute-problem-solver#:~:text=STRIPS%20is%20a%20formalism%20used,in%20AI%20applications%20since%20then.
[27] https://www.adl.org/resources/blog/six-pressing-questions-we-must-ask-about-generative-ai
[28] https://www.cs.utexas.edu/users/mooney/cs343/slides/pop.pdf
[29] M. C. Trivedi, “A Classical Approach to Artificial Intelligence”, Khanna Publishing House, Delhi, 2018.
[30] Tom Taulli, “Artificial Intelligence Basics: A Non-Technical Introduction”, 2019.
[31] Peter Norvig and Stuart J. Russell, “Artificial Intelligence: A Modern Approach”, 1995.