Within AI, two distinct paradigms have emerged: statistical AI and symbolic AI. Statistical AI, exemplified by models such as ChatGPT, excels at natural language understanding and generation, leveraging vast amounts of data to make predictions and generate responses. Symbolic AI, exemplified by systems like Wolfram Alpha, excels at formal reasoning, knowledge representation, and symbolic computation. Each paradigm has its own strengths and weaknesses, and the integration of both approaches has the potential to unlock new capabilities and applications.

I.e. Large Language Models (LLMs) statistically predict words: the answer that follows a question.
Even something as simple as 347324672 + 8624312 is not possible. Taking the integral of a relatively simple function will also (often) fail.
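This is exactly where the symbolic paradigm shines. As a minimal sketch (assuming Python with SymPy; the integrand sin(x)·exp(x) is my own arbitrary example), both tasks are exact, mechanical operations for a symbolic system:

```python
import sympy

# Exact integer arithmetic, no statistical guessing:
print(347324672 + 8624312)  # 355948984

# Exact symbolic integration of a relatively simple function:
x = sympy.symbols("x")
print(sympy.integrate(sympy.sin(x) * sympy.exp(x), x))
# exp(x)*sin(x)/2 - exp(x)*cos(x)/2
```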
A game with LLM-powered agents to enhance player experience: ''Our game features agents with long-term memory and autonomous goal pursuit, enabled by large language models that emulate their personalities, behaviors, thoughts, actions, and dialogues.''
As we continue to develop Large Language Models (LLMs), the question arises whether they are just ''stochastic parrots'' or a stepping stone toward Artificial General Intelligence (AGI). In this talk, we will explore the similarities and differences between state-of-the-art LLMs and predictive processing, a neurologically plausible theory of the functioning of the human brain.

Starting with a presentation of Karl Friston's ''The Free Energy Principle'', which describes the brain as ''a probabilistic, hierarchical predictive machine'' (see my presentation here: ''Philosophy of Mind'', Braga 2023), Kulveit went on to compare this with machine learning.
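As a minimal sketch of the ''predictive machine'' idea (my own toy Gaussian example in Python, not Friston's full formulation): an internal estimate is updated by descending the prediction error, which here amounts to minimizing a one-variable variational free energy.

```python
# One-variable predictive processing: the "brain" holds an estimate mu and
# updates it by gradient descent on F, a simple Gaussian free energy:
#   F(mu) = (obs - mu)^2 / (2*s_obs) + (mu - prior)^2 / (2*s_prior)
obs, prior = 2.0, 0.0          # a sensory observation and a prior expectation
s_obs, s_prior = 1.0, 1.0      # assumed variances (prediction-error precisions)
mu, lr = prior, 0.1            # start at the prior; small learning rate

for _ in range(100):
    grad = -(obs - mu) / s_obs + (mu - prior) / s_prior
    mu -= lr * grad            # descend the prediction error

print(round(mu, 3))  # ~1.0: the precision-weighted compromise of data and prior
```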
TensorFlow is typically used for building neural networks. But in this talk, we aim to demonstrate that it is not only a powerful open-source machine-learning library, but a whole ecosystem of tools and, to a certain degree, a programming language in its own right.
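As a small illustration of that claim (a sketch assuming TensorFlow 2.x; the Collatz function is my own arbitrary example), @tf.function traces ordinary Python control flow into a TensorFlow graph, so you are in effect programming in TensorFlow's own language:

```python
import tensorflow as tf

@tf.function
def collatz_steps(n):
    """Count Collatz steps; AutoGraph turns the Python loop into tf.while_loop."""
    steps = tf.constant(0)
    while n > 1:
        n = tf.where(n % 2 == 0, n // 2, 3 * n + 1)
        steps += 1
    return steps

print(collatz_steps(tf.constant(27)))  # tf.Tensor(111, shape=(), dtype=int32)
```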
An AGI which is not a natural-like AGI is termed an alien AGI [1]. Indeed:
It may be that alien AGIs’ understanding of the world is so different from a human understanding that to allow alien AGIs to do tasks originally done by humans is to eventually invite strange failures in those tasks [1].

Certainly, what it means to actually understand is in itself a tricky concept.
Harnad notes that the symbol grounding problem is essentially the question of how symbols get their meaning, and is thus related to what meaning actually is. When symbols are operated on in a cognition-like module, they are manipulated according to rules based on the symbols' shapes, so to speak, rather than their meanings [2].
Harnad (1994) does write: ''my guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with the real world of objects, events and states of affairs that our symbols are systematically interpretable as being about'' [2].

Certainly, without real understanding, (alien) artificial systems can look at problems like the following:
A robot needs to cross a deep river. It sees a river filled with leaves (which it has no previous experience with, understanding, or knowledge of, other than being able to recognize them). It can only say that they are ''solid08'' objects [1]. Through the use of natural-like AGI (causality, analogy, physics, and so on), such a robot can still reason about the problem.
If we give the same problem to ChatGPT (here considered an alien system), and tell it that the river is filled with ''leaves'', it can use its super-human knowledge of much of what has ever been written to conclude that it is dangerous to cross the river [1].
But if we tell ChatGPT that the river is filled with ''solid08'' objects, and we describe what ''solid08'' objects are, it is not able to make a decision about crossing the river, as it has not read about ''solid08'' objects before [1]. Obviously! (Alien) ChatGPT can't reason about unknown objects.
(There are) four main types of psychological agency, here described in terms of their evolutionary order of emergence. Must robot agents that want to pass the Turing Test possess a similar human-like agency?
First was the goal-directed agency of ancient vertebrates, then came the intentional agency of ancient mammals, followed by the rational agency of ancient great apes, ending finally in the socially normative agency of ancient humans.
Each new form of psychological organization represented increased complexity in the planning, decision-making, and executive control of behavior. Each also led to new types of experience of the environment and, in some cases, of the organism's own psychological functioning, leading ultimately to humans' experience of an objective and normative world that governs all of their thoughts and actions [3].
Intentionality: The characteristic of consciousness, such that human minds can represent things, properties, or states of affairs [5].
In philosophy, intentionality is the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs. To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents [6].

Where human intentionality has:
General mathematical reasoning is computationally undecidable, but humans routinely solve new problems. Moreover, discoveries developed over centuries are taught to subsequent generations quickly [8].

And true AGIs should certainly be able to do at least some mathematical reasoning?
We introduce Peano, a theorem-proving environment where the set of valid actions at any point is finite. We use Peano to formalize introductory algebra problems and axioms, obtaining well-defined search problems. We observe existing reinforcement learning methods for symbolic reasoning to be insufficient to solve harder problems. Adding the ability to induce reusable abstractions (''tactics'') from its own solutions allows an agent to make steady progress, solving all problems [8].
The recovered order has significant agreement with the expert-designed Khan Academy curriculum, and second-generation agents trained on the recovered curriculum learn significantly faster. These results illustrate the synergistic role of abstractions and curricula in the cultural transmission of mathematics [8], [9].

Many good points.
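To give a flavor of what ''a finite set of valid actions'' buys you, here is a minimal sketch in Python (my own toy, not the actual Peano environment): axioms become literal string-rewrite rules, and solving a problem is an ordinary search over rewrites.

```python
# A toy rewrite system: each axiom is a literal string-rewrite rule ("x" is a
# literal symbol here, not a metavariable as in the real Peano environment).
from collections import deque

RULES = [
    ("x * 1", "x"),   # multiplicative identity
    ("x + 0", "x"),   # additive identity
    ("x * 0", "0"),   # zero product
]

def actions(expr):
    """Enumerate the finite set of valid actions: every rule at every match."""
    for lhs, rhs in RULES:
        start = expr.find(lhs)
        while start != -1:
            yield expr[:start] + rhs + expr[start + len(lhs):]
            start = expr.find(lhs, start + 1)

def solve(start, goal, max_steps=10):
    """Breadth-first search over rewrites; a solution is the path of rewrites."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        expr, path = frontier.popleft()
        if expr == goal:
            return path
        if len(path) <= max_steps:
            for nxt in actions(expr):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
    return None

print(solve("x * 1 + 0", "x"))  # ['x * 1 + 0', 'x + 0', 'x']
```

Reinforcement learning replaces this blind search with a learned policy, and the ''tactics'' of the paper are reusable multi-step rewrites induced from solution paths like the one above.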
This is so because, ..., ChatGPT doesn’t know that 1 is not greater than 1, and surely AGI subsumes command of elementary arithmetic on the natural numbers [10].

Still, here we were asked to look at ''mathematical cognition''.

Simply observing that at the very least, going with mathematical cognition as a starting place for trying to establish a plumb-line standard modeling of AGI-level minds is rational, since if any part of cognition is likely to span minds in general it is mathematical cognition (rather than perception, motor control, natural language usage, etc.)... [10].

Which then allows us to address the question of whether a certain architecture is a ''human-level AGI mind'' [11]:

Some rudiments of the cognitive meta-architecture M and of a formal procedure for determining, with M as touchstone, whether a given cognitive architecture Xi conforms to a minimal standard model of a human-level AGI mind [10].

Bringsjord et al. argue that certain existing cognitive architectures might conform to such a ''minimal standard model of a human-level AGI mind''.
Humans have the ability to gain knowledge by reasoning (e.g., deductively) quantificationally and recursively over abstract concepts, including abstract concepts of a highly expressive, including infinitary, nature, expressed in arbitrarily complex natural and formal languages [12].

But getting to a comprehensive understanding of what AGIs really are is harder:
Computationalism-driven AI (= ''Strong'' AI) will yield Total Turing Test-passing zombies (in the philosopher's, not The Night of the Living Dead's, sense of 'zombie'), but will fail to reach its goal of replicating persons, for two general reasons. One, persons process information at a ''super''-Turing (or hypercomputational) level; two, they enjoy certain properties (e.g., intentionality, self-consciousness, incorrigibilism, qualia, etc.) beyond the reach of any mere information-processing machine [13].

Well, well. But an interesting talk for sure.
DoD is committed to developing and employing all weapon systems, including those with autonomous features and functions.

This, and other similar announcements, have caused an increased interest in the topic of AI safety.
...
The Directive was established to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.
...
The design, development, deployment, and use of systems incorporating AI capabilities is consistent with the DoD AI Ethical Principles and the DoD Responsible AI (RAI) Strategy and Implementation Pathway [15].
Look at a common situation where the user’s true goals are only expressed to an AI system through proxies. This initially leads to positive utility, but it decreases to negative utility over time as the AI system over-optimizes for the proxy objective. See ''Consequences of Misaligned AI'' by Simon Zhuang & Dylan Hadfield-Menell [16].

I.e. simple optimization for just one parameter (say ''Profit'' or ''Control'') is different from scenarios where optimization must take many parameters into account. E.g. in education, we will probably want to optimize many parameters simultaneously, promoting things like ''Presence'', ''Curiosity'', ''Courage'', ''Resilience'', ''Ethics'' & ''Personal leadership''.
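A minimal sketch of why this happens (my own toy model under assumed diminishing returns and a shared resource budget, not Zhuang & Hadfield-Menell's actual formulation): the user cares about several attributes, the proxy rewards only one, and pushing the whole budget onto the proxy first raises and then lowers true utility.

```python
# Toy model: true utility depends on four attributes with diminishing returns,
# all drawing on one fixed resource budget; the proxy rewards only the first.
import math

BUDGET = 10.0
N_ATTRS = 4  # stand-ins for, e.g., ''Presence'', ''Curiosity'', ''Courage'', ''Resilience''

def true_utility(alloc):
    # log(1 + a): diminishing returns on every attribute the user cares about.
    return sum(math.log(1.0 + a) for a in alloc)

def allocate(pressure):
    """Shift a `pressure` fraction of the budget onto the proxy attribute."""
    proxy = BUDGET * pressure
    rest = (BUDGET - proxy) / (N_ATTRS - 1)
    return [proxy] + [rest] * (N_ATTRS - 1)

for pressure in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"proxy pressure {pressure:.2f} -> true utility "
          f"{true_utility(allocate(pressure)):.2f}")
# True utility peaks near the balanced allocation (pressure 0.25) and then
# falls steadily as the proxy attribute is over-optimized.
```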