Over the last century human communication technology has improved exponentially in terms of reach, cost, and bandwidth. Today communication
- has zero marginal cost,
- is available on demand,
- allows nearly anyone to reach out to any possible audience,
- and is delivered via synchronous and asynchronous channels.
Whilst these improvements in communication have transformed our economic and social systems, they have also put into sharp focus the limits of the ability of human cultures to adapt to new contexts, the limits of human language, and the related limits of our ways of collaborating, thinking, and learning.
Our ability to shunt data around the planet has greatly improved, but our cognitive limits prevent us from better understanding the full implications of all our actions. The aggregate human technological capability to process information and make decisions is hitting the following constraints:
1. Inputs – The age of big junk data
The data flows that are available as inputs are of highly variable quality. Each transformation of measurements from the natural environment into derived information, whether performed by humans or by human-created technologies, introduces an element of interpretation and corresponding assumptions, many of which are implicit and not available for critical analysis.
Most of the information that humans are confronted with has already passed through several stages of transformation and is unlikely to be free of cultural bias. The more human societies rely on automation, the more derivative information “products” are in circulation as potential inputs for further decision making. Such culturally biased information artefacts are socio-technological constructions that increasingly insulate human perceptions and thought processes from events in the natural world and the planetary ecosystem.
To peel back some of this cultural bias, and to avoid becoming overwhelmed by the flood of available data streams, we need powerful and individually configurable information filtering tools.
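To make "individually configurable filtering" concrete, here is a minimal sketch in Python. All names (`Item`, `FilterConfig`, the example configuration) are hypothetical assumptions, not an existing tool; the point is only that trust in sources, topics of interest, and tolerance for derivative information can be explicit, user-owned configuration rather than a platform's opaque ranking.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: Item, FilterConfig, and the example configuration
# below are illustrative assumptions, not part of any existing tool.

@dataclass
class Item:
    source: str       # where the item came from
    topic: str        # coarse subject label
    derivations: int  # transformation steps removed from the raw measurement

@dataclass
class FilterConfig:
    trusted_sources: set[str] = field(default_factory=set)
    topics_of_interest: set[str] = field(default_factory=set)
    max_derivations: int = 2   # prefer items close to the original measurement
    extra_rules: list[Callable[[Item], bool]] = field(default_factory=list)

def keep(item: Item, cfg: FilterConfig) -> bool:
    """Return True if the item passes this individual's filter configuration."""
    if item.source not in cfg.trusted_sources:
        return False
    if cfg.topics_of_interest and item.topic not in cfg.topics_of_interest:
        return False
    if item.derivations > cfg.max_derivations:
        return False
    return all(rule(item) for rule in cfg.extra_rules)

cfg = FilterConfig(trusted_sources={"local-sensor", "journal"},
                   topics_of_interest={"climate"},
                   max_derivations=1)
stream = [
    Item("local-sensor", "climate", 0),   # passes every rule
    Item("social-feed", "climate", 4),    # untrusted source, too derivative
    Item("journal", "economics", 1),      # trusted source, wrong topic
]
kept = [i for i in stream if keep(i, cfg)]
```

The essential design choice is that the configuration belongs to the individual and is fully inspectable, so the assumptions baked into the filter remain available for critical analysis.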
2. Systems – Lack of systemic and intentional trust
Increasingly humans rely on ultra-large-scale software systems and data repositories that are operated by large corporations with a profit motive or by large government departments. The centralisation of information management services leads to systemic risks related to data security, data corruption, and misuse of data for purposes beyond agreed intent.
Data breaches, abuses of power by data custodians, and misuse of data have become regular occurrences, leading to a reinforcing feedback loop between low levels of trust in software systems and data repositories and low levels of trust in the organisations that operate such systems (corporations and big government).
To increase the level of systemic and intentional trust requires a new approach to the operation of software systems and data repositories that is capable of addressing the systemic risks. Replacing centralised data management with a distributed system and data architecture is an obvious step to improve overall resilience. A distributed approach to data management allows operational responsibilities to be put into the hands of more trustworthy organisations and people, such as local governments, local employee-owned businesses, and individual citizens.
The need to shift towards trustworthy distributed data management highlights a need for a new breed of open source collaboration tools that are not entangled with the interests of corporations or the interests of big government.
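One well-established building block for trustworthy distributed data management is content addressing, as used by systems such as git and IPFS: data is stored under the hash of its own content, so any participant can verify integrity without trusting the operator of the node that served the data. A minimal sketch (the `ContentStore` class is illustrative, not an existing tool):

```python
import hashlib

# Illustrative sketch of content addressing: data is stored under the
# SHA-256 hash of its content, so integrity can be checked by any client.

class ContentStore:
    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        """Store data under the hash of its content and return that address."""
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data
        return address

    def get(self, address: str) -> bytes:
        """Fetch data and verify it matches its address before returning it."""
        data = self._blobs[address]
        if hashlib.sha256(data).hexdigest() != address:
            raise ValueError("data corrupted or tampered with")
        return data

store = ContentStore()
addr = store.put(b"community energy readings, 2017-05")
assert store.get(addr) == b"community energy readings, 2017-05"

# A tampering node cannot substitute different content for the same address:
store._blobs[addr] = b"forged readings"
try:
    store.get(addr)
except ValueError:
    pass  # corruption is detected by any client, no trusted operator required
```

This is one reason distributed architectures can reduce the need for intentional trust in any single custodian: verification is mechanical rather than reputational.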
Current so-called collaboration tools and social platforms are depressingly trivial. The market provides hundreds of simplistic services for sharing and working on informal information artefacts yet very little in terms of powerful tools for:
- Higher-level knowledge work
- Developing and validating shared understanding via structured conceptual models
At a recent panel discussion of well-known AI researchers and entrepreneurs, David Chalmers suggested the objective of constructing human-like AGIs with a view to addressing the questions of human understandability, consciousness, and human/AGI collaboration.
3. Scale – Billions of viewpoints
Zero marginal cost communication has not only led to organisations and individuals producing and sharing much more information, it has also multiplied the number of people and institutions that are accessible to an individual by many orders of magnitude. Human cognitive abilities have evolved for interactions with at most a few hundred people (and more or less human-like software agents).
Increasingly, economic thinking based on scarcity makes no sense and is counter-productive. Manual labour, human professional expertise, and domain knowledge are increasingly replaced by advanced automation. At the same time the automated systems we rely on are becoming less and less understandable, both by humans and by other software systems, resulting in new sets of questions and systemic risks.
In order to make sense of the world, and to incrementally identify the 100 to 200 people and systems that are best equipped to collaborate with us, we need new thinking tools.
We cannot simply apply the tools and models of past eras to the present situation. The changes sweeping the Earth right now are literally planetary in scale and so filled with complexity that few among us even have a semblance of knowing what is actually going on. This makes it very difficult to navigate the troubled waters of the 21st Century.
– Joe Brewer
The ability to construct elaborate social delusions is a uniquely human capability. The illusion of human superiority is the product of an anthropocentric viewpoint. Humans are roughly as much in control of the planet as a disease-causing pathogen is in control of its host. The only difference is that there is only one host planet and there is no further host to exploit that could allow the pathogen to replicate its “clever” strategy.
One of the most powerful forces in history is human stupidity. But another powerful force is human wisdom. We have both. … There’s a close correlation between nationalism and climate change denialism. Nationalists are focused on their most immediate loyalties and commitments, to their people, to their country. Why can’t you be loyal to humankind as a whole?
– Yuval Harari
4. Outputs – Rapidly growing bodies of knowledge
Alongside all the big junk data, sensor networks and scientists are continuously adding to growing bodies of valuable knowledge. Scientific data, related software tools, and related knowledge often still reside in arcane domain-specific or organisation-specific silos.
Whilst the internet has revolutionised international and inter-disciplinary collaboration, there is still significant room for further work to simplify the process of searching for specific knowledge and the process of sharing research results in a form that is at the same time easily understandable by humans and available for processing by software tools.
There is a need for new learning tools that enable humans and software agents to ingest and validate new knowledge.
Biologist Ernst Mayr has argued that, judging by the empirical record regarding species success, it is clearly better to be stupid than to be smart. Species with no brains or very small brains tend to survive for much longer periods and are more resilient than “smarter” species with larger brains. Noam Chomsky recently reminded his audience of Ernst Mayr’s insights as part of a lecture on the human reaction to climate change. The brain-oriented definition of intelligence used by humans seems to be unsuitable for assessing species survival – this should be food for thought for anyone who has high hopes for the current approaches to developing AGI systems. Perhaps much higher levels of intelligence in terms of survival value can be found in the genome and in the operating models of biological cells.
A few researchers have started to work on reverse-engineering human learning and cognitive development and, in parallel, engineering more human-like machine learning systems:
Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?
At a deeper level, how can a learner know that a given domain of entities and concepts should be represented by using a tree at all, as opposed to a low-dimensional space or some other form? Or, in causal learning, how do people come to correct framework theories such as knowledge of abstract disease and symptom classes of variables with causal links from diseases to symptoms?
The acquisition of abstract knowledge or new inductive constraints is primarily the province of cognitive development. For instance, children learning words initially assume a flat, mutually exclusive division of objects into nameable clusters; only later do they discover that categories should be organized into tree-structured hierarchies. Such discoveries are also pivotal in scientific progress: Mendeleev launched modern chemistry with his proposal of a periodic structure for the elements. Linnaeus famously proposed that relationships between biological species are best explained by a tree structure, rather than a simpler linear order or some other form.
Such structural insights have long been viewed by psychologists and philosophers of science as deeply mysterious in their mechanisms, more magical than computational. Conventional algorithms for unsupervised structure discovery in statistics and machine learning – hierarchical clustering, principal components analysis, multidimensional scaling, clique detection – assume a single fixed form of structure. Unlike human children or scientists, they cannot learn multiple forms of structure or discover new forms in novel data.
The biggest remaining obstacle is to understand how structured symbolic knowledge can be represented in neural circuits. Connectionist models sidestep these challenges by denying that brains actually encode such rich knowledge, but this runs counter to the strong consensus in cognitive science and artificial intelligence that symbols and structures are essential for thought. Uncovering their neural basis is arguably the greatest computational challenge in cognitive neuroscience more generally – our modern mind-body problem.
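The "single fixed form of structure" point can be made concrete with a toy, dependency-free sketch of single-linkage agglomerative clustering: whatever structure actually generated the data, the algorithm can only ever return a binary tree. (The implementation below is illustrative, written for 1-D points; it makes no claim about the algorithms used in the quoted research.)

```python
# Toy illustration of the point in the quotation above: single-linkage
# agglomerative clustering always returns a binary tree, regardless of
# whether the data's true structure is a tree, a ring, or a flat space.

def single_linkage(points):
    """Repeatedly merge the two closest clusters until one tree remains."""
    # Each cluster is (member points, subtree); leaves are point indices.
    clusters = [([p], i) for i, p in enumerate(points)]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(abs(x - y)
                        for x in clusters[a][0] for y in clusters[b][0])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        merged = (clusters[a][0] + clusters[b][0],
                  (clusters[a][1], clusters[b][1]))
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return clusters[0][1]

# Four evenly spaced points have no privileged tree structure,
# yet the output is forced into one:
tree = single_linkage([0.0, 1.0, 2.0, 3.0])
print(tree)  # → ((0, 1), (2, 3))
```

The output format is always a nested binary tree over point indices; the algorithm has no way to report "this data is better described by a linear order or a ring", which is exactly the limitation the quotation highlights.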
Biologists are also starting to look into the role of genetics in brain wiring, as animals and humans have innate behaviours whose features are consistent across generations, suggesting that some synaptic connections are genetically predetermined. This avenue of research underscores that neurodiversity is part of the evolutionary diversity of human brains and minds.
So what? – How can new tools make a difference?
The focus of the next CIIC workshop on 3 June is on human scale computing and the design of assistive technologies for humans and machines to improve filtering of information streams, collaboration, critical thinking, and learning.
This theme allows us to build on the results from the CIIC workshop in March, which focused on support for neurodiversity, individual behavioural patterns within typical work environments, and collaboration across organisational boundaries.
One set of possible concrete objectives for assistive technology is the following:
- Create and maintain exactly one [computing] system (repository of activated knowledge), not zero, and not two or more
- Teach the system what you value, what you believe to be facts, and the models that you use to assess new inputs – nothing more, and nothing less
- Collaborate with your system to interact with the world as often or as infrequently as it suits your cognitive lens and communication style
- Manage cognitive load in a timely manner, ideally before you become overwhelmed
- Adopt a human scale definition of organisation
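A minimal sketch of what the objectives above could look like as software, with all names (`PersonalSystem`, `teach`, `assess`) hypothetical: a single repository that holds your values, your facts, and the models you use to assess new inputs, and nothing else.

```python
# Hypothetical sketch of the objectives above. All names are illustrative
# assumptions; this is not a description of any existing system.

class PersonalSystem:
    """Exactly one repository of what you value, believe, and use to assess inputs."""

    def __init__(self):
        self.values: set[str] = set()
        self.facts: set[str] = set()
        self.models = []  # callables scoring an input against your cognitive lens

    def teach(self, *, values=(), facts=(), models=()):
        """Add to the system what you value, believe, and use for assessment."""
        self.values.update(values)
        self.facts.update(facts)
        self.models.extend(models)

    def assess(self, statement: str) -> float:
        """Average the verdicts of your models; 1.0 = fully consistent with your lens."""
        if not self.models:
            return 0.5  # no lens taught yet, so no opinion either way
        return sum(m(statement, self) for m in self.models) / len(self.models)

me = PersonalSystem()
me.teach(values={"transparency"},
         facts={"the climate is changing"},
         models=[lambda s, sys: 1.0 if s in sys.facts else 0.0])
score = me.assess("the climate is changing")  # → 1.0
```

The sketch captures the "nothing more, nothing less" objective: the system only ever knows what it has explicitly been taught, so it can act as a mirror rather than a substitute for your own thinking.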
A technology that provides such functionality not only acts as a mirror of your conscious knowledge and understanding, but your interaction patterns with the system also reflect your specific learning style and your cognitive preferences.
Human scale computing represents an opportunity not only to improve communication between humans and software systems, but also to improve communication and trusted collaboration between humans with different kinds of minds and cultural backgrounds.
Urgent need for acceptance of neurodiversity
This month was supposed to be autism acceptance month. A few days ago I learned about an aspie suicide that underscores the urgency of the need for cultural change.
Will H. Moore is dead because of society’s intolerance of neurodiversity and unwillingness to acknowledge all the culturally constructed social delusions that surround us. When reading some of the other posts on his blog it becomes clear that he was a very astute observer of human society, a very compassionate human being, and a political scientist and educator with a genuine desire to teach others how to think critically and think for themselves.
I am afraid that many more will have to die before society changes and is ready to replace the pathology paradigm with full recognition of the value of neurodiversity – not one month per year but every day. In the meantime the least we can do is to offer mutual support to each other and not submit to and perpetuate the pathology paradigm.