Impressions and Links from
MlPrague 2023
& AGI 2023





I had the great pleasure of taking part in ML Prague 2023 (the practical conference about ML, AI and Deep Learning applications), June 2 - 4, 2023, in Prague, and in AGI 23, June 16 - 19, 2023, in Stockholm.

I tried to follow as many talks as possible. But, well, these notes are, of course, in no way, shape or form complete...
Rather, these notes were written on conference nights, as my way of keeping track of the events that I attended. And as a way of storing links and references for future reference.

Below you will find impressions from the conferences, and links for further reading.


1. MlPrague 2023.


MlPrague 2023


1.1. Workshops. Friday, June 2nd.

1.1.1. Learning to Learn: Hands-on Tutorial on Using and Improving Few-Shot Language Models.


Workshop. MlPrague 2023







Github repo with links to the materials for the ''Learning to Learn'' workshop, here (Including slides).

Workshop. MlPrague 2023

1.2. Presentations Saturday. June 3rd.



MlPrague 2023

1.2.1. Neural fields in aerial 3D reconstruction.

Martina Bekrová, Melown Technologies,
talked about using neural fields for view synthesis.

Earlier, training of neural fields was very computationally demanding, and it could take days to create one scene, and minutes for rendering each view.
But with Nvidia's Instant NeRF, things are now much faster.
For getting started, see: NeRFs.
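As a taste of what goes into a NeRF: the original NeRF paper maps each 3D sample point through a sinusoidal positional encoding before feeding it to the network, so the MLP can represent high-frequency scene detail. A minimal sketch (my own illustration, not Melown's code):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map coordinates to sin/cos features at increasing frequencies,
    as in the original NeRF paper: gamma(x) = (sin(2^k pi x), cos(2^k pi x))."""
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)

pt = np.array([0.5, 0.25, 0.8])   # a 3D sample point along a camera ray
enc = positional_encoding(pt)
print(enc.shape)                  # (60,) = 3 coords * 10 freqs * 2 (sin, cos)
```

Instant NeRF replaces this fixed encoding with a learned multiresolution hash grid, which is a large part of the speed-up.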

Very interesting, indeed.

1.2.2. ChatGPT and WolframAlpha, a tale of two AIs.

Jon McLoone, Wolfram Research,
talked about ''ChatGPT with WolframAlpha'':

Within AI two distinct paradigms have emerged: statistical AI and symbolic AI. Statistical AI, exemplified by models such as ChatGPT, excels at natural language understanding and generation, leveraging vast amounts of data to make predictions and generate responses. Symbolic AI, exemplified by systems like Wolfram Alpha, excels at formal reasoning, knowledge representation, and symbolic computation. Each paradigm has its own strengths and weaknesses, and the integration of both approaches has the potential to unlock new capabilities and applications.
I.e. Large Language Models (LLMs) statistically predict words: the answer that follows a question.
Whereas e.g. WolframAlpha has defined rules of behaviour for structured inputs.
Which gives the two systems different strengths and weaknesses.

MlPrague 2023

LLMs are no good with complex rules:
Even something as simple as: 347324672 + 8624312
often fails.
Taking the integral of a relatively simple function will also (often) fail.
If an input with a bunch of numbers is long enough,
ChatGPT can't tell what the last number is...
Just as many simple lookups of factual knowledge can fail.

All of which, on the other hand, is fairly simple stuff for WolframAlpha.
So, combining ChatGPT with WolframAlpha gives you what you need:
the ability to handle unstructured data (the LLM's strength),
combined with calls to WolframAlpha for computation and fact checking.
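To illustrate the point: a symbolic engine handles exactly the cases above. A small sketch using Python's sympy as a stand-in for WolframAlpha (my own example, not from the talk):

```python
from sympy import Integer, Symbol, integrate, sin

# Exact arithmetic -- the kind of sum LLMs often get wrong:
print(Integer(347324672) + Integer(8624312))   # 355948984

# Symbolic integration -- another task where LLMs often fail:
x = Symbol('x')
print(integrate(x * sin(x), x))                # antiderivative: -x*cos(x) + sin(x)
```

The symbolic engine is exact by construction, which is why routing such sub-tasks away from the LLM works so well.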

1.2.3. LLM-driven game characters.

Marek Rosa, GoodAI,
talked about ''LLM-driven game characters'':

With LLM-powered agents to enhance player experience. Our game features agents with long-term memory and autonomous goal pursuit, enabled by large language models that emulate their personalities, behaviors, thoughts, actions, and dialogues.
MlPrague 2023

Given a goal, the LLM agents can plan, execute, and revise their plans based on feedback.
In order to create agents with a history, and long term objectives,
agents are given ''Long Term Memory'' (LTM), helping to overcome current LLM challenges (e.g. the limited context window).
An awesome talk!
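A hypothetical sketch of the long-term-memory idea (not GoodAI's actual implementation): store observations, and retrieve the most relevant ones to prepend to the LLM's prompt. Real systems typically score relevance with vector embeddings; here simple word overlap stands in:

```python
from dataclasses import dataclass, field

@dataclass
class LongTermMemory:
    """Toy LTM: keep past observations, recall the ones most relevant
    to the current query so they can be fed back into the LLM prompt."""
    entries: list = field(default_factory=list)

    def store(self, text: str):
        self.entries.append(text)

    def recall(self, query: str, k: int = 2):
        # Rank stored entries by word overlap with the query (embedding
        # similarity would be used in practice).
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

ltm = LongTermMemory()
ltm.store("The player gave the blacksmith a sword.")
ltm.store("It rained in the village yesterday.")
print(ltm.recall("What did the player give the blacksmith?", k=1))
```

This is the mechanism that lets a game character "remember" events from hours ago without keeping the whole history in the context window.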

1.2.4. Bridging the Gap between Large Language Models and Human Intelligence.

Jan Kulveit, Future of Humanity Institute, Oxford University
talked about ''Large Language Models and Human Intelligence '':

As we continue to develop Language Models (LLMs), it raises the question of whether they are just "stochastic parrots", or a stepping stone toward Artificial General Intelligence (AGI). In this talk, we will explore the similarities and differences between state-of-the-art LLMs and predictive processing, a neurologically plausible theory of the functioning of the human brain.
Starting with a presentation of Karl Friston's ''The Free Energy Principle'', describing the brain as ''a probabilistic, hierarchical predictive machine'' (see my presentation here: ''Philosophy of Mind'', Braga 2023), Kulveit went on to compare with machine learning.

On top of predictive processing, one can add active inference,
which extends predictive processing to action and behavior.
In short: agents act in the world so as to minimize prediction error.
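The core loop of predictive processing can be illustrated with a toy sketch (my own simplification, not Friston's full formalism): an internal estimate is repeatedly nudged to reduce prediction error against noisy sensory input:

```python
import random

random.seed(0)
mu = 0.0                                  # the internal model's current prediction
learning_rate = 0.1
for step in range(200):
    sensation = random.gauss(5.0, 0.5)    # the world delivers noisy evidence around 5
    error = sensation - mu                # prediction error
    mu += learning_rate * error           # update the model to reduce future error

print(round(mu, 1))                       # converges near 5.0
```

In the active-inference extension, the agent can also act on the world to make sensations match its predictions, rather than only updating the predictions.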

Still: LLMs are not synchronized with the external world in the way humans are.

Closing that gap could make the AIs more capable, and more human-like. Which, of course, might also be dangerous, according to Kulveit.
All super interesting, indeed!

1.3. Presentations Sunday. June 4th.


Weekend entertainment at the Universum venue:
MlPrague 2023
Dinosaurs.


MlPrague 2023
Parties.

Still: The Sunday sessions started at 9 AM, with:

1.3.1. Using TensorFlow for data processing.

Michal Kubišta, Dataclair,
talked about ''Using TensorFlow for data processing'':

TensorFlow is used for building neural networks. But in this talk, we aim to demonstrate that it is not only a powerful open-source machine-learning library, but a whole ecosystem of tools and, to a certain degree, a programming language on its own.
MlPrague 2023

A very useful talk, indeed.

Followed by other very useful talks.
E.g. Uri Rosenberg's (Amazon) talk about ''Explainable AI for Computer Vision and NLP models''.

Again, a great day, indeed.

For earlier editions of MlPrague see:
MlPrague 2019
MlPrague 2021
MlPrague 2022


Simon Laub - Teaching AI, Economics-IT, March 2019
Indeed, all in all, super interesting, and certainly thoughts and material to consider for future classes in Deep Learning...
and beyond...

Berlin 2019 - Rise of AI conference

1.4. Prague Impressions. Pictures.


Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
Tycho Brahe tomb.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
More about Kafka,
from my 2019 Prague visit, here
(Including: Visit to his office at the "Worker's Accident Insurance Institute for the Kingdom of Bohemia", Cafe Louvre etc).

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
''Hanging man'', Freud by David Černý.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
Youtube.

Prague. June 2023.

Prague. June 2023.
''In the fight between you and the world, back the world''.
Czech novelist Franz Kafka.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
Il Commendatore in Prague.
Don Giovanni.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.

Prague. June 2023.
Jubilejní synagoga.

Prague. June 2023.

Prague. June 2023. Smetana Hall.

Prague. June 2023.
Municipal House.

Prague. June 2023.

Prague. June 2023.
Youtube.

Prague. June 2023.

Bruxelles Airport. June 2023.
From the return trip. Bruxelles airport...


2. AGI 23.

The 16th annual conference on Artificial General Intelligence.
June 16 - 19, 2023, Stockholm.
Full program: Schedule.

AGI 23. Stockholm. June 2023.
KTH (Kungliga Tekniska högskolan), Stockholm.


AGI 23. Stockholm. June 2023.
AGI 23. KTH, Stockholm. June 2023.

2.1. Presentations Saturday. June 17th.

2.1.1. Time To Move On: AI And Cognitive Science Need A Divorce.

Tom Ziemke, LiU, talked about ''Time To Move On: AI And Cognitive Science Need A Divorce''.

With good points about the philosophical ''other minds'' problem.

I.e. the ''other minds'' problem is unsolvable.
We (humans) do not know what other (human) minds are really like (what they experience).

And, indeed, people have a tendency towards anthropocentric anchoring:
looking at, and interpreting, other (natural and artificial) minds as being similar to human minds.
Which is rather problematic, as we have ''no reason to assume that AGI will be human-like''.

Saying that cognition is computation hasn't helped either.
I.e. it indicates that natural and artificial minds are more similar than they probably are.

Very relevant points indeed.

2.1.2. Alien versus Natural-like Artificial General Intelligences.

Howard Schneider talked about ''Alien versus Natural-like Artificial General Intelligences'', ACM (digital library).

He started with a definition:
An AGI which is not a natural-like AGI is termed an alien AGI [1].
Indeed:
It may be, that alien AGIs’ understanding of the world is so different from a human understanding that to allow alien AGIs to do tasks done originally by humans, is to eventually invite strange failures in the tasks [1].
Certainly, what it means to actually understand is in itself a tricky concept.
What is our understanding grounded in? What are symbols really grounded in?
How does a natural system understand, and ground symbols, and how could an artificial system understand (by grounding symbols)?

Schneider writes in ''An analogical inductive solution to the grounding problem'' [2]:
Harnad notes that the symbol grounding problem essentially is how symbols get their meaning, and thus related to what actually is meaning. When symbols are being operated on in a cognition-like module, the symbols are being manipulated following rules which are based on the symbols shapes, so to speak, rather than their meaning [2].
Harnad (1994) does write, ''my guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with the real world of objects, events and states of affairs that our symbols are systematically interpretable as being about'' [2].
Certainly, without real understanding, (alien) artificial systems can look at problems
that natural intelligence can easily solve, and come up with different (wrong) conclusions.
E.g.
A robot needs to cross a deep river.
It sees a river filled with leaves (which it has no previous experience with,
understanding or knowledge of other than to recognize them). It can only say that they are ''solid08'' objects [1].
Through the use of natural-like AGI (causality, analogy, physics and so on),
a human-like AGI is able to reason that it is dangerous to cross the river.
(Alien) ChatGPT doesn't work this way though:
If we give the same problem to chatGPT (here considered an alien system),
and tell it that the river is filled with ''leaves'', it can use its super-human
knowledge of much of what has ever been written, to conclude that it is dangerous
to cross the river [1].
(Alien) ChatGPT can't reason about unknown objects.
But, if we tell chatGPT that the river is filled with ''solid08'' objects,
and we describe what ''solid08'' objects are, it is not able to make a decision
about crossing the river, as it has not read about ''solid08'' objects before [1].
Obviously!

AGI 23. Stockholm. June 2023.
AGI 23. KTH, Stockholm.


AGI 23. Stockholm. June 2023.
AGI 23. KTH, Stockholm.

AGI 23. Stockholm. June 2023.
AGI 23. KTH, Stockholm.

2.1.3. The Relevance Of The Turing Test? Ascribing Intention Into Any Experience.

Martin Ingvar, Karolinska Institutet, talked about ''The Relevance Of The Turing Test? Ascribing Intention Into Any Experience''.

Agency in the human brain is a function that is gradually built.
(There are) four main types of psychological agency, here described in terms of their evolutionary order of emergence.
First was the goal-directed agency of ancient vertebrates, then came the intentional agency of ancient mammals, followed by the rational agency of ancient great apes, ending finally in the socially normative agency of ancient humans.
Each new form of psychological organization represented increased complexity in the planning, decision-making, and executive control of behavior. Each also led to new types of experience of the environment and, in some cases, of the organism's own psychological functioning, leading ultimately to humans' experience of an objective and normative world that governs all of their thoughts and actions [3].
Robot agents that want to pass the Turing Test must then, perhaps, possess a similar human-like agency?

And, we need to know what kind of agency other agents have, when we are asked to work with them...
What do they know, what do they understand, and what must they know in order to accomplish a certain task...

Indeed, it is not always a good idea to hook AI agents into a knowledge process,
if we do not agree upon what the terminology within the domain area is,
and if the AIs don't have the right understanding of what to do...
(AI within healthcare could be an example here)...

Terminology is an agreement.
- If we do not agree upon terminology, then we do not agree on the decision.
- Sequences carry meaning.
    (Each chunk contains subsets of meaning;
    if these unfold coherently, the reader makes sense of them;
    complexity is rendered into manageable bits) [4].

Intentionality: The characteristic of consciousness, such that human minds can represent things, properties, or states of affairs [5].
In philosophy, intentionality is the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs. To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents [6].
Humans have desires (wanting, liking), and work towards immediate or delayed reward.
Humans consider intentions important.
Still, incorporating the concept of machine intentions into workflows,
where machines and humans work together, is a novel concept for most humans...!?

A great talk indeed.

AGI 23, Badge. Stockholm. June 2023.

2.2. Presentations Sunday. June 18th.

2.2.1. Reasoning in Human and Machine Intelligence.

Noah Goodman, Stanford, talked about ''Reasoning in Human and Machine Intelligence''.

It took centuries to develop Algebra, and come up with the right abstractions.
But, now, even high-school students can master, at least some forms of, algebra.
Still, it will probably not come as a big surprise to anyone that it is difficult for computers (LLMs) to understand some of these concepts [7].

General mathematical reasoning is computationally undecidable, but humans routinely solve new problems. Moreover, discoveries developed over centuries are taught to subsequent generations quickly [8].
And true AGIs should certainly be able to do, at least, some mathematical reasoning?
How could we (humans) possibly proceed in constructing such abilities?
We introduce Peano, a theorem-proving environment where the set of valid actions at any point is finite. We use Peano to formalize introductory algebra problems and axioms, obtaining well-defined search problems. We observe existing reinforcement learning methods for symbolic reasoning to be insufficient to solve harder problems. Adding the ability to induce reusable abstractions (''tactics'') from its own solutions allows an agent to make steady progress, solving all problems [8].
The recovered order has significant agreement with the expert-designed Khan Academy curriculum, and second-generation agents trained on the recovered curriculum learn significantly faster. These results illustrate the synergistic role of abstractions and curricula in the cultural transmission of mathematics [8], [9].
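The value of induced ''tactics'' can be illustrated with a toy search problem (my own example, far simpler than Peano): adding a reusable macro-action, composed from primitive steps, shortens the solution an agent has to find:

```python
from collections import deque

def bfs_steps(start, goal, actions):
    """Breadth-first search: minimum number of actions to reach the goal."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        for act in actions:
            nxt = act(state)
            if nxt not in seen and abs(nxt) <= 1000:   # bound the search space
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None

primitive = [lambda n: n + 1, lambda n: n * 2]
# A "tactic" induced from earlier solutions: one reusable macro-step (n -> 2n+2)
with_tactic = primitive + [lambda n: 2 * (n + 1)]

print(bfs_steps(1, 30, primitive), bfs_steps(1, 30, with_tactic))   # 7 4
```

Shorter solutions mean shallower search trees, which is (in spirit) why abstractions let the Peano agent make steady progress on harder problems.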
Many good points.
With the right ''tactics'', you can indeed go far.
A great talk.

Stockholm. June 2023.
Reasoning in Human and Machine Intelligence'', N. Goodman.

Stockholm. June 2023.
Ben Goertzel. Stockholm. June 2023.
Scale is important. Large Language Models (LLMs) gave surprising (emergent) results when scaled. Could the same thing (emergence) happen by scaling other AI techniques?

2.2.2. The M Cognitive Meta-Architecture as Touchstone for Standard Modeling of AGI-Level Minds.

Selmer Bringsjord, Rensselaer, talked about ''The M Cognitive Meta-Architecture as Touchstone for Standard Modeling of AGI-Level Minds''.

We know that AGI has not arrived yet:
This is so because, ..., ChatGPT doesn’t know that 1 is not greater than 1,
and surely AGI subsumes command of elementary arithmetic on the natural numbers [10].
Still, here we were asked to look at ''mathematical cognition,
as the cornerstone of a test for standard modeling and simulation of AGI-level minds''.
Bringsjord et al. write:
Simply observing that at the very least, going with mathematical cognition as a starting place for trying to establish a plumb-line standard modeling of AGI-level minds is rational, since if any part of cognition is likely to span minds in general it is mathematical cognition...
(rather than perception, motor control, natural language usage, etc)...[10].
Which then allows us to address the question of whether a certain architecture is a ''human-level AGI mind'' [11]:
Some rudiments of the cognitive meta-architecture M and of a formal procedure for determining, with M as touchstone, whether a given cognitive architecture Xi conforms to a minimal standard model of a human-level AGI mind [10].
Bringsjord et al. argue that certain existing cognitive architectures might conform to such a ''minimal standard model of a human-level AGI mind''.
Still, ''A significant challenge awaits us when our procedure is expanded beyond mathematical cognition into other parts of AGI-level cognition''.
...
Clearly...
Still, humans are clever beings... As Bringsjord writes in the course ''Are Humans Rational'':
Humans have the ability to gain knowledge by reasoning (e.g., deductively) quantificationally and recursively over abstract concepts, including abstract concepts of a highly expressive, including infinitary, nature, expressed in arbitrarily complex natural and formal languages [12].
But getting to a comprehensive understanding of what AGIs really are
will not be easy. Loads of things to consider, indeed.
In ''What is Supermentalism?'' Bringsjord writes:
Computationalism-driven AI (= ''Strong'' AI) will yield Total Turing Test-passing zombies (in the philosopher's, not The Night of the Living Dead's, sense of 'zombie'), but will fail to reach its goal of replicating persons, for two general reasons. One, persons process information at a ''super'' - Turing (or hypercomputational) level; two, they enjoy certain properties (e.g., intentionality, self-consciousness, incorrigibilism, qualia, etc.) beyond the reach of any mere information-processing machine [13].
Well, well. But an interesting talk for sure.

Stockholm. June 2023.
Near KTH. Östermalm, Stockholm.

2.3. Workshops Monday. June 19th.

2.3.1. Safety, Trust & Artificial Generalized Intelligence (STAGI 2023).

Ilya Parker, research consultant, guided us through the workshop ''Safety, Trust & Artificial Generalized Intelligence'' [14].
Using an ''unconference'' structure for the workshop, where we all had our say in the debate.

Stockholm. June 2023.
In relation to AGI, what words come to mind when you think of safety?


Stockholm. June 2023.
AGI 23. Workshop. ''Safety, Trust & Artificial Generalized Intelligence''. Stockholm. June 2023.

In January, 2023, the US Department of Defense announced DoD Directive 3000.09, Autonomy in Weapon Systems:
DoD is committed to developing and employing all weapon systems, including those with autonomous features and functions.
...
The Directive was established to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.
...
The design, development, deployment, and use of systems incorporating AI capabilities is consistent with the DoD AI Ethical Principles and the DoD Responsible AI (RAI) Strategy and Implementation Pathway [15].
This, and other similar announcements, have caused an increased interest in the topic of AI safety.

Here, in the context of AGI, we were asked to give a small presentation with our thoughts about ''AGI Safety''.
With the instruction that ''the field of vision usually gets narrower when people get scared, and cognition runs in survival mode''.
Here, we should instead try to take a broader perspective, and think ''out of the box''
(what kind of an ''immune system'' should an AGI have in order to stay safe?):

In my presentation, I focused on the ''Alignment problem''.
Are AI systems ''aligned'' with human goals and values?
And I argued that society should continue to have a robust infrastructure,
capable of dealing with (some) mis-aligned AI systems (that e.g. generate fake news).

My input to the debate was inspired by ''The value alignment problem in AI'':
Look at a common situation, where the user’s true goals are only expressed to an AI system through proxies. This initially leads to positive utility. But decreases to negative utility over time as the AI system over-optimizes for the proxy objective. See: ''Consequences of Misaligned AI'' by Simon Zhuang & Dylan Hadfield-Menell [16].
I.e. a simple optimization for just 1 parameter (say ''Profit'' or ''Control'') is different from scenarios where optimization must take many parameters into account (e.g. in education, we will probably want to optimize many parameters, promoting things like ''Presence'', ''Curiosity'', ''Courage'', ''Resilience'', ''Ethics'' & ''Personal leadership'' simultaneously).
Summarized: Dealing with the ''full state'' of a scenario is a first step towards ''AI safety''.
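A toy rendition of that over-optimization effect (my own simplification, inspired by the Zhuang & Hadfield-Menell setup, not their actual model): true utility needs two goods, but the proxy only rewards the first, so pushing the proxy ever higher lowers true utility:

```python
import math

def true_utility(a, b):
    """True utility has diminishing returns in each good and needs both."""
    return math.log(1 + a) + math.log(1 + b)

budget = 10.0
for shift in [0.0, 0.25, 0.5, 0.75, 1.0]:   # fraction of b's share reallocated to a
    a = budget * (0.5 + 0.5 * shift)        # the proxy objective only measures a
    b = budget * (0.5 - 0.5 * shift)
    print(f"proxy a={a:4.1f}  true utility={true_utility(a, b):.2f}")
```

The proxy score rises monotonically while true utility falls, which is exactly the failure mode a system tracking the ''full state'' would avoid.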

Stream. Agi Safe workshop. Stockholm. June 2023.
Stream from the Workshop ''Stagi 2023'' on AGI safety, where I am on the panel commenting on ''alignment'', Youtube.





Panel, Stagi Workshop. AGI 23. Stockholm. June 2023.
''Stagi 2023'' (AGI safety).
AGI 23. KTH, Stockholm.

AISB 2021 in York

Robot dogs are coming

2.4. Stockholm Impressions. Pictures.




Stockholm. June 2023.
Oslo. June 2023. Landing in Oslo.

Stockholm. June 2023.
Summer.


Stockholm. June 2023.
Stockholm. June 2023.

Stockholm. June 2023.
Stockholm. June 2023.


Stockholm. June 2023.

Olof Palme. Stockholm. June 2023.
Sveavägen, central Stockholm,
where Olof Palme was assassinated in 1986.


Olof Palme. Stockholm. June 2023.

Olof Palme. Stockholm. June 2023.
Olof Palme.


Olof Palme. Stockholm. June 2023.

Olof Palme. Stockholm. June 2023.
Sveavägen, central Stockholm.


Sergels Torg. Stockholm. June 2023.
Sergels Torg, central Stockholm.

Stockholm. June 2023.
Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.
Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.
Turing Tests. Stockholm. June 2023.


Stockholm. June 2023.
If you are not sure how you are doing as a robot: an opportunity to take a Turing Test, and prove that you are human?

Stortorget. Stockholm. June 2023.
Stortorget. Stockholm. June 2023.


Swedish Academy. Stockholm. June 2023.
Swedish Academy: The Swedish Academy is responsible for choosing the Nobel Prize laureates in Literature. - The Kungliga Vetenskapsakademien, the Royal Swedish Academy of Sciences, is responsible for selecting the Nobel Prize laureates in physics and chemistry.

Stortorget. Stockholm. June 2023.
Stortorget. Stockholm. June 2023.
In 1520 (November 7-9), the site of Stockholms blodbad (the Stockholm Bloodbath), where the Danish king Christian II and his chief executioner, Jörgen Homuth, had 82 people led to Stortorget to be beheaded, leaving pools of blood running over the cobblestones and through the town, Gamla Stan, as a message to the king's opposition. The event earned Christian II the nickname ''Christian the Tyrant'' in Sweden.


Stortorget. Stockholm. June 2023.

Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.


Stockholm. June 2023.

Dramaten. Stockholm. June 2023.
Dramaten, Kungliga Dramatiska Teatern, Stockholm.
Alumni: Greta Garbo, Ingrid Bergman, Max von Sydow etc.
Managing director (1963-66): Ingmar Bergman.


Dramaten. Stockholm. June 2023.

Dramaten. Stockholm. June 2023.
Dramaten.


Stockholm. June 2023.

Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.


Stockholm. June 2023.

Stockholm. June 2023.

3. Conclusion.

Indeed, the end of two wonderful conferences. With many memorable talks.