Global Journal of Science Frontier Research, F: Mathematics and Decision Science, Volume 22 Issue 4

Boosting Human Insight by Cooperative AI: Foundations of Shannon-Neumann Logic

These numbers already compare favorably to a typical human problem-solver H working by herself. But the real power of SN-Logic (its scope of applications) comes from the combinatorial possibilities: the possible combinations and permutations of the insight-boosting questions needed to solve each class of challenge:

• Number of combinations: N_comb = 2^{N_ques}
• Number of permutations: N_perm = N_ques!

Thus the number of distinct classes of challenges SN-Logic can cope with is effectively infinite (N = 10^7!), yet it rests on a few small, compact concept spaces (cardinality ≈ 10^2). In this sense, SN-Logic is economical (Occam's razor).

The computed complexity of SN-Logic is a theoretical upper bound, used to determine the scope of SN-Logic. In practice the computational cost will be much lower, owing to universal constraints (common to all challenge classes), because they are imposed by (mostly) challenge-independent forces:

• causality: universal root causes of cognitive difficulties (e.g. confusion due to ambiguity, indecision due to missing information) and of solution quality (e.g. accuracy, adaptability)
• logic: valid inferences with sound semantics
• planning: the logically necessary chronology of solution steps
• problem-solving: universal tactics to minimize obstacles (avoid/reduce) and to maximize solution quality (target/increase/maximize), e.g. divide-and-conquer, minimize ambiguity, maximize order, simplify
• information: a question is only informative if it reduces uncertainty by eliminating alternatives (options, outcomes, possibilities) within a cognitive mindset (intention) C, restricting the insightful questions to a manageable subset: q ∈ Q*(C) ⊂ Q, with Card(Q*(C)) << Card(Q)
• utility: a question is only useful if it helps H overcome obstacles, given a cognitive intention C, again restricting the insightful questions to the same manageable subset: q ∈ Q*(C) ⊂ Q, with Card(Q*(C)) << Card(Q)

These rules impose a great deal of structure on the SN-agent's insight grain tensor µ(frame, topic, when, where, what, which), which in its fully general form is a high-dimensional rank-6 tensor, but in practice is very sparse and decomposable into simpler tensors and convolution kernels.

g) Symbolic AI (knowledge acquisition) vs Learning

The structure imposed by the universal (challenge-class-independent) constraints is sufficient to construct factored ('vanilla') tensors µ* of much lower dimension and rank: knowledge acquisition. A 'flavor' is then learned to fine-tune the tensors to each class of challenge, via cooperative learning (not described in this paper). Given the complexity upper bounds of SN-Logic, the fine-tuning possibilities are vast.

h) SN-Logic Normal Form

An SN's fundamental problem is to use the IQ-game to guide a human player H in when and where to pose which types of questions, about what topic, to gain a maximum amount of insight into a complex challenge. A standard inferencing normal form (analogous to the conjunctive and disjunctive normal forms of digital and predicate logic) is necessary for the AI to cope with the computational complexity of SN-Logic. The AI can then efficiently search for predicate variables action ∈ S_A, used as building blocks for conceptual solutions.

Given an evolving inferencing framework (frame, topic), the SN-normal forms are the following:
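The combinatorial counts quoted earlier (N_comb = 2^{N_ques}, N_perm = N_ques!) are easy to sanity-check. A minimal Python sketch; the function names are illustrative, not from the paper:

```python
import math

def question_combinations(n_ques: int) -> int:
    # Each of the n_ques question types is either posed or not,
    # giving N_comb = 2^N_ques possible subsets of questions.
    return 2 ** n_ques

def question_permutations(n_ques: int) -> int:
    # Orderings of the n_ques question types: N_perm = N_ques!.
    return math.factorial(n_ques)

# Even a small question space grows explosively:
print(question_combinations(10))   # 1024
print(question_permutations(10))   # 3628800
```

Already at N_ques = 10^7 the permutation count 10^7! is astronomically large, which is the sense in which the number of challenge classes is "effectively infinite" while the underlying concept spaces stay small.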
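The information and utility constraints above both prune Q down to the subset Q*(C) of questions that actually eliminate alternatives. A minimal sketch of that filtering idea, under our own modeling assumption (not the paper's) that a question is a predicate over the possibilities still open in mindset C:

```python
def is_informative(question, possibilities):
    # A question reduces uncertainty only if the remaining possibilities
    # do not all yield the same answer -- i.e. at least one answer would
    # eliminate some alternatives.
    answers = {question(p) for p in possibilities}
    return len(answers) > 1

def insightful_subset(questions, possibilities):
    # Q*(C): the manageable subset of Q worth posing, where `possibilities`
    # encodes what is still open under the current mindset C.
    return [q for q in questions if is_informative(q, possibilities)]

# Toy mindset: six open possibilities, three candidate questions.
possibilities = range(1, 7)
questions = [
    lambda p: p % 2 == 0,   # informative: splits the possibilities
    lambda p: p > 3,        # informative: splits the possibilities
    lambda p: p > 0,        # uninformative: every answer is True
]
q_star = insightful_subset(questions, possibilities)  # keeps the first two
```

The same filter serves the utility constraint if `possibilities` is read as the set of obstacles still blocking H: questions whose answers leave that set untouched are dropped.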
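The factored ('vanilla') tensors µ* can be illustrated with a CP-style (sum-of-outer-products) factorization of the rank-6 insight grain tensor. Everything here is a sketch under assumed toy cardinalities and deterministic stand-in factor values; the paper's actual decomposition and concept-space sizes (≈ 10^2 per axis) are not reproduced:

```python
import math

# Hypothetical small cardinalities for the six axes of
# mu(frame, topic, when, where, what, which).
DIMS = {"frame": 4, "topic": 5, "when": 3, "where": 3, "what": 6, "which": 4}
RANK = 2  # number of factored components; an illustrative choice

# One short factor vector per axis index -- toy deterministic values
# standing in for learned factors (the "vanilla" tensors mu*).
FACTORS = {
    axis: [[((i + 1) * (r + 2)) % 7 / 7.0 for r in range(RANK)]
           for i in range(n)]
    for axis, n in DIMS.items()
}

def mu(frame, topic, when, where, what, which):
    """Evaluate one entry of the rank-6 tensor without materializing it."""
    index = {"frame": frame, "topic": topic, "when": when,
             "where": where, "what": what, "which": which}
    total = 0.0
    for r in range(RANK):
        term = 1.0
        for axis, i in index.items():
            term *= FACTORS[axis][i][r]
        total += term
    return total

# Storage cost: factored entries vs the full dense tensor.
factored_size = sum(n * RANK for n in DIMS.values())   # 50 numbers
dense_size = math.prod(DIMS.values())                  # 4320 numbers
```

Even at these toy sizes the factored form stores 50 numbers instead of 4320; with axes of cardinality ≈ 10^2 the dense tensor would need ~10^12 entries, which is why sparsity and decomposability are essential.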
