Artificial Intelligence

Common sense tells us that our mind is a unified and singular thing, but this has been challenged by strands of artificial intelligence and psychology that aim to understand our thinking by breaking it down into many different parts.

Psychologists investigating intelligence and personality have long since moved away from the idea that there is just one kind of intelligence, or that we each have a single personality. We now know that there are many types of intelligence and that we are all multiple, thinking and acting differently in different settings at different times. This self-multiplicity is now seen as an essential trait for thriving in modern culture.

This section will give you a brief introduction to Artificial Intelligence research and two famous AI systems that use decentralised models of mind.

Researchers working in artificial intelligence create computing systems that simulate human thinking, both to create smarter computers and robots and to learn more about how the human mind works. Although early AI research was dominated by the design of systems that aimed to solve problems by following logical rules, much AI research since the 1970s has suggested that many of the ways we think and feel are in some way decentralised, featuring processes that are parallel and distributed. Instead of one unified set of rules controlled from above, these systems break thinking down into many separate processes, allowing thought to emerge from the bottom up.

Minsky (1994) popularised the idea that thinking can be simulated by interactions between multitudes of cognitive agents working in parallel, often in competition. Each of these agents is highly specialised to carry out a particular mental process, but is by itself unintelligent. No one agent is capable of thought or complex understanding, but as a result of their working together, complex thinking emerges. According to this idea, even thought processes that we take for granted, such as coordinating the building of a tower of toy blocks, are composed of mini societies of agents. One simple example given is the AI program written for a robot built in the 1960s and designed to build a tower of toy wooden blocks. The robot has a video camera to see with and a robot arm with which to manipulate the blocks.

Diagram of the ‘builder’ program designed to control a robot arm as it builds a tower of blocks.

 

A breakdown of programs within the ‘Add’ program, itself within the builder program.
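
To make the agent idea concrete, here is a minimal Python sketch of a ‘builder’ society in the spirit of the diagrams above. The agent names (builder, add, find, get, put) follow the diagrams, but the world representation and what each agent actually does are simplifying assumptions, not the original robot’s code.

```python
# A toy 'society of agents' sketch: each agent is a small, unintelligent
# procedure, and tower-building emerges from their cooperation.

class World:
    def __init__(self, loose_blocks):
        self.loose_blocks = list(loose_blocks)  # blocks lying on the table
        self.tower = []                         # blocks stacked so far

def find(world):
    """FIND: locate a loose block (here, simply the first one seen)."""
    return world.loose_blocks[0] if world.loose_blocks else None

def get(world, block):
    """GET: grasp the block, removing it from the table."""
    world.loose_blocks.remove(block)
    return block

def put(world, block):
    """PUT: place the grasped block on top of the tower."""
    world.tower.append(block)

def add(world):
    """ADD: one FIND -> GET -> PUT cycle; knows nothing about towers."""
    block = find(world)
    if block is None:
        return False
    put(world, get(world, block))
    return True

def builder(world):
    """BUILDER: keep asking ADD for another block until none are left."""
    while add(world):
        pass

world = World(["red", "green", "blue"])
builder(world)
print(world.tower)  # ['red', 'green', 'blue']
```

No single agent here knows what a tower is; the tower appears only because builder repeatedly delegates to add, which in turn delegates to find, get and put.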

 

A more complex example is a connectionist AI system designed to simulate parallel and distributed memory processes.

Diagram of the ‘Jets and Sharks’ connectionist network, used to simulate parallel and distributed memory retrieval.

This example is one of the simplest used to describe decentralised models of mind; others involve many more layers that break thought processes down into smaller and smaller parts which don’t use the kind of language we speak every day. These sub-symbolic processes are difficult to describe because the units are so far abstracted from our everyday experience of things.
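
To give a flavour of how such a network retrieves a memory, here is a small Python sketch of an interactive-activation network in the spirit of the ‘Jets and Sharks’ example. The tiny knowledge base, the connection weights and the update rule are illustrative assumptions rather than the original model’s parameters.

```python
import numpy as np

# Units stand for names, gang memberships and age groups; there is no
# central store of facts, only weighted connections between units.
units = ["Art", "Rick", "Jets", "Sharks", "20s", "30s"]
idx = {name: i for i, name in enumerate(units)}

W = np.zeros((len(units), len(units)))

def link(a, b, w):
    """Create a symmetric connection between two units."""
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = w

# Excitatory links tie an individual's properties together;
# inhibitory links make units within the same pool compete.
link("Art", "Jets", 1.0);    link("Art", "20s", 1.0)
link("Rick", "Sharks", 1.0); link("Rick", "30s", 1.0)
link("Art", "Rick", -1.0)    # names compete
link("Jets", "Sharks", -1.0) # gangs compete
link("20s", "30s", -1.0)     # ages compete

def settle(probe, steps=30, rate=0.2):
    """Clamp external input onto the probe unit and let activation
    spread through the whole network in parallel."""
    a = np.zeros(len(units))
    for _ in range(steps):
        net = W @ a
        net[idx[probe]] += 1.0                       # external input
        a = np.clip(a + rate * (net - a), 0.0, 1.0)  # gradual update
    return {u: round(float(v), 2) for u, v in zip(units, a)}

# Probing with 'Art' activates 'Jets' and '20s'; their competitors stay off.
print(settle("Art"))
```

Probing the network with a single name activates the related gang and age units in parallel, so a whole ‘memory’ is reassembled without any central lookup.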

Parallel, distributed and decentralised models of mind offer us a completely different way to describe how we think and feel. Contrary to the unified, sequential way in which we experience our thoughts, and the sense of control that this gives us, behind the scenes our thoughts are the result of many different and competing processes, running in parallel, often without our knowledge or control.

This has profound implications for the way we understand our own and one another’s thoughts and identities.

In his book ‘Turtles, Termites and Traffic Jams’, Resnick introduces the centralised mindset: the tendency to assume that things happen because they are being controlled by a central power. For years we believed that flocking occurred because birds followed the directions of a leader bird, and that termite queens managed their colonies. StarLogo, a massively parallel version of Logo, was designed to support children in modelling complex, decentralised systems like these. In ‘Turtles, Termites and Traffic Jams’ Resnick also discusses various decentred views of mind. Describing trends in psychoanalysis and AI, he writes that the decentralisation zeitgeist was ‘creating an environment in which decentralised models of mind seem natural and sensible.’ Since then there has been no sign of any new tools being developed to support people in exploring these ideas, possibly because decentralised cognitive architectures don’t feature the same easily understood agent- and colony-level behaviours that natural systems do, e.g. the behaviour of a fish versus the behaviour of a school of fish.
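
One of the decentralised models Resnick describes is a termite colony gathering scattered wood chips into piles: each termite wanders at random, picks up a chip if it bumps into one while empty-handed, and drops its chip when it bumps into another, and piles emerge with no queen directing the work. Here is a small Python sketch in that spirit; the grid size, the counts and the exact pick-up and drop rules are simplifying assumptions, not StarLogo’s own code.

```python
import random

# A toy version of the termite/wood-chip experiment: purely local rules,
# no leader, and the chips still end up gathered into fewer, larger piles.

SIZE, CHIPS, TERMITES, STEPS = 20, 60, 10, 20000
random.seed(0)

grid = [[0] * SIZE for _ in range(SIZE)]        # chips per cell
for _ in range(CHIPS):                          # scatter chips at random
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1

termites = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
             "carrying": False} for _ in range(TERMITES)]

for _ in range(STEPS):
    for t in termites:
        # wander one step in a random direction (the world wraps around)
        t["x"] = (t["x"] + random.choice([-1, 0, 1])) % SIZE
        t["y"] = (t["y"] + random.choice([-1, 0, 1])) % SIZE
        here = grid[t["y"]][t["x"]]
        if not t["carrying"] and here > 0:      # bump into a chip: pick it up
            grid[t["y"]][t["x"]] -= 1
            t["carrying"] = True
        elif t["carrying"] and here > 0:        # bump into a pile: drop yours
            grid[t["y"]][t["x"]] += 1
            t["carrying"] = False

occupied = sum(1 for row in grid for cell in row if cell)
print(f"chips are now gathered on {occupied} cells "
      f"(they started scattered over roughly {CHIPS})")
```

The piling is easy to see at the colony level even though no individual termite knows anything about piles; it is exactly this kind of readily observable, two-level behaviour that decentralised cognitive architectures lack.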

This project aims to take decentralised behaviours found in nature and in artificial life and use them to explore creative new ways of describing decentralised cognition.

 

Next: Extended Thoughts