CIIC, 3 June 2017, Auckland

Information overload

After a round of introductions and a recap of the Open Space format of CIIC (slides) participants focussed on the problem suggested by Jorn Bettin and by Marg Lovell:

Designing filtering, collaboration, thinking, and learning tools for the next 200 years

Adapting the cognitive load generated by technology to human cognitive limits. How can we make human scale computing a reality? The objective of human scale computing is to improve communication and collaboration:

  1. between humans,
  2. between humans and software systems,
  3. and between software systems.


This objective is another way of stating the goal of developing a language that is better than all human languages reliant on linear syntax. The challenges seem to be primarily economic and cultural. Our current technologies and communication tools hardly meet any of the human scale computing criteria to a satisfactory level:

  1. Human weaknesses such as in-group competition, deception, limited formal reasoning performance, group identities …
  2. Increasingly large system failures where the root cause can’t be identified or fixed
  3. “Zero heads” of knowledge/understanding problems, i.e. human created systems that no one understands
  4. Implicit and simplistic assumptions that are baked into software agents
  5. Implicit and unknown dependencies between software agents
  6. Lack of system modularity = lack of boundaries => inability to describe and understand system behaviour
  7. Highly limited/selective/biased input data for “intelligent” software agents
  8. Non-understandable “intelligent” software agents
  9. Unconscious “intelligent” software agents

This explanation of the motivation for human scale computing prompted Steffen Schaefer to point to a number of example use cases where current machine learning systems perform highly valuable tasks, in a growing number of cases performing these tasks better than any human ever could.

None of the other participants disputed the value and the potential of AI for processing large amounts of data from complex systems or processes and for deriving useful correlations and predictions. Instead, participants expressed concerns about use cases in financial markets and about the implicit cultural values that flow into the definition of the optimisation problems that AI is applied to.

Considerations around cultural values and the cultural transmission of knowledge led to a tangential discussion of the problems resulting from hierarchical organisational structures. Several of the participants were aware of Frederic Laloux’s book Reinventing Organizations, and Jorn also pointed to S23M as a local example of an organisation that has fully replaced hierarchical command and control with an advice process between peers.

Learning to understand each other

To focus the discussion on the design of filtering, collaboration, thinking, and learning tools Jorn outlined the challenge of establishing meaningful communication and collaboration between two agents (biological or human created) who start out without any shared knowledge, and who are equipped to encode, send, and receive information via one or more physical data channels (say via movements, light detection and movement detection, and via sound wave production and reception). The receiver initially has no context that would allow decoding the meaning/intent encoded by the sender.

In a scenario of agents capable of vision and hearing, a technique such as pointing enables the agents to jointly focus attention on objects and visually detectable phenomena, and to associate a specific sound pattern with this focus of joint attention. It is easy to see how a series of such interactions over time leads to the emergence of a language and communication protocol that is understood by both agents.

Superficially this challenge and scenario may seem unique to biological agents. Upon closer examination, however, it also applies to any human created non-biological agents. Two software agents that are connected via a basic digital communication channel initially only share the ability to detect 0s and 1s (or Unicode characters, or some larger chunks of data). Increasing the physical granularity of the chunking that defines the communication channel does nothing to establish meaningful communication (semantic interoperability) between the two agents.

By definition, for shared understanding to be established between two software agents in relation to any message or concept, the agents require a joint focus of attention (via the equivalent of pointing). The pointing would have to occur via a second data channel that provides a stream of “semantic identities”, which plays the same role as the stream of light waves in the example of biological agents with vision who are attempting to communicate via sounds. The two agents require two data streams that allow each agent to establish correlations that map perceived familiar objects and phenomena to chunks of the received communication stream from the other agent.

This leads to the conclusion that two agents with a communication channel are incapable of establishing any shared understanding of anything unless they are surrounded by a shared environment/context that provides additional data streams that are available for processing by both agents. This constraint describes the foundation of learning by example.
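
The scenario above can be sketched as a small simulation. In this toy sketch (all names and the vocabulary are hypothetical), the shared environment provides one data stream of jointly observed objects, and the “speaker” agent provides a second stream of sound tokens. The “listener” establishes shared understanding purely by correlating the two streams, i.e. by learning from examples:

```python
import random
from collections import Counter, defaultdict

# Two data streams in a shared environment: the object currently in the
# joint focus of attention (the "pointing" channel), and the sound token
# the speaker emits for it. The listener only sees the two streams.

OBJECTS = ["tree", "river", "bird"]
speaker_vocab = {"tree": "ka", "river": "mu", "bird": "zi"}  # private to the speaker

# The listener counts co-occurrences of (sound heard, object seen).
cooccurrence = defaultdict(Counter)

random.seed(1)
for _ in range(200):
    obj = random.choice(OBJECTS)   # shared environment: joint focus of attention
    sound = speaker_vocab[obj]     # speaker's channel: sound pattern
    cooccurrence[sound][obj] += 1  # listener correlates the two streams

# The listener's learned decoding: the most frequent object per sound.
listener_vocab = {sound: objs.most_common(1)[0][0]
                  for sound, objs in cooccurrence.items()}
```

After enough shared observations the listener’s mapping matches the speaker’s private vocabulary, even though no meaning was ever transmitted directly over the sound channel.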

The atoms of learning

It is worthwhile to point out that the atomic form of a learning system is a feedback loop. A learning system reacts to inputs in a non-trivial way and adapts its behaviour based on the context established by prior inputs, which in turn are influenced by earlier system outputs. Biological systems and human created learning systems are “simply” collections of feedback loops.
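
A minimal sketch of such a feedback loop, with illustrative (made-up) constants: a controller adapts its heating gain based on the error between a target and the observed temperature; its output changes the environment, which changes the next input it sees, closing the loop between inputs and outputs:

```python
# Atomic learning system as a feedback loop (toy model, arbitrary constants):
# the controller's output (heat) influences the environment (temperature),
# which becomes the next input (error), which adapts the behaviour (gain).

target = 21.0        # desired temperature
temperature = 15.0   # environment state
gain = 0.1           # the behaviour the system adapts over time

for _ in range(150):
    error = target - temperature                        # input, shaped by earlier outputs
    heat = gain * error                                 # output based on current behaviour
    temperature += heat - 0.05 * (temperature - 15.0)   # environment reacts, with heat loss
    gain += 0.005 * error                               # adaptation: context from prior inputs
```

The loop settles close to the target because persistent error keeps nudging the gain upward; remove the adaptation line and the system merely reacts, it no longer learns.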

In the case of biological systems the communication protocol at a cellular level makes use of a mind-boggling number of molecules as semantic identities. In the case of digital systems the communication protocol at the most basic level makes use of sets of bits and bytes as semantic identities.

At higher levels biological systems include communication via sound based languages (birds, humans, and various other animals) and via visual body languages. Just like human verbal languages have layered structures, communication protocols between digital systems make use of “stacks” of layers to handle non-trivial communication.
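
The layering can be sketched as a pair of peer stacks (the layer names below are hypothetical): on the way down, each layer wraps the payload in its own framing; on the way up, each peer layer unwraps only the framing it understands, so no layer needs to know the whole message format:

```python
import json

# Toy protocol stack: (name, wrap, unwrap) per layer.
layers = [
    ("application", lambda p: {"msg": p},             lambda p: p["msg"]),
    ("transport",   lambda p: {"seq": 1, "body": p},  lambda p: p["body"]),
    ("physical",    lambda p: json.dumps(p).encode(), lambda p: json.loads(p.decode())),
]

def send(payload):
    for _, wrap, _ in layers:              # application -> transport -> physical
        payload = wrap(payload)
    return payload                         # bytes "on the wire"

def receive(payload):
    for _, _, unwrap in reversed(layers):  # physical -> transport -> application
        payload = unwrap(payload)
    return payload

wire = send("hello")
assert receive(wire) == "hello"
```

Each layer only talks to its peer on the other side; this is the same separation of concerns that lets human verbal languages layer sounds into words, words into sentences, and sentences into discourse.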

Biological communication systems have evolved over millions of years. Humans are born pre-wired for learning human languages, complete with multiple senses that provide continuous data streams from the environment, and with long limbs and clearly visible eyes that facilitate pointing. In other words biology has been very effective at figuring out a way that allows agents to establish feedback loops and to incrementally learn to communicate with each other.

Furthermore biology has equipped humans and other biological agents with an in-built code of feelings that possibly provided some of the earliest semantic identities, shared with and understandable by other agents via in-built mechanisms, without any need for pointing. In animals, including humans, emotions play the same role as the concepts of bits and bytes in digital communication protocols.

Digitising the process of learning

In contrast to the communication protocols between biological agents, human created software agents do not learn communication protocols, but they are pre-wired for communication by baking shared context into multiple agents in the form of operating systems and further software modules. The ability of traditional software systems to learn independently is very limited.

However, over the last decade the growth in computation and storage capacity and in available communication bandwidth has enabled machine learning algorithms to be applied at increasing levels of scale. Digital learning systems tap into very large data streams and allow increasingly complex problems to be solved.

Just as learning between two biological agents relies on the availability of multiple data streams for establishing feedback loops, machine learning depends on multiple data sets/streams: the system is fed a training data set for the purpose of learning by example, it may be tuned via a validation data set, and it relies on a test data set to ensure that it is capable of dealing with new inputs in the desired manner.
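
The three data sets can be illustrated with a minimal sketch on synthetic data (all numbers here are made up for illustration): a one-parameter model is fitted on the training set, its error on the validation set would guide tuning choices, and the held-out test set gives the final check that the model generalises to new inputs:

```python
import random

# Synthetic data: y is roughly 2x plus a little noise.
random.seed(0)
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)

# The three data sets: 60% training, 20% validation, 20% test.
train, validation, test = data[:60], data[60:80], data[80:]

def fit(points):
    # Least-squares slope through the origin: sum(xy) / sum(x^2).
    return sum(x * y for x, y in points) / sum(x * x for x, _ in points)

def mean_squared_error(slope, points):
    return sum((y - slope * x) ** 2 for x, y in points) / len(points)

slope = fit(train)                                  # learning by example
val_error = mean_squared_error(slope, validation)   # would guide tuning choices
test_error = mean_squared_error(slope, test)        # final generalisation check
```

The test set is touched only once, at the end; reusing it for tuning would quietly turn it into a second training set and invalidate the generalisation check.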

The future of digitised learning

Whilst digital learning systems are getting better every month, the current approach to machine learning bears some similarity to the simplistic behaviourist approach that dominated psychology for many decades, before brain imaging technology enabled neurologists to develop a new level of understanding of human behaviour. Some approaches to machine learning explicitly set out to emulate neural communication patterns found in biological brains, but the performance of the learning system is typically assessed in terms of externally observable behaviour, and not in terms of internal state changes that may have ramifications at a much later point in time.

It is not clear yet how to construct conscious digital learning systems, i.e. systems that can not only establish correlations between data streams, but that are also capable of developing conceptual mental models of themselves and their environment that can be shared with and can be understood by other agents. In particular, current machine learning systems are not able to explain how they arrive at specific conclusions. At this point in time digital learning systems are carefully trained to deal with very specific use cases (data streams), and the motivations for these use cases are still hard-coded into the system design.

Beyond learning by example via training and test data sets, advanced digital learning systems will need to master learning via formal reasoning (applying the rules of a “logic”) and learning by conscious model building (which involves reflection on acquired knowledge).

Machine learning will really become interesting once digital learning systems are given access to formal digital representations of their motivations. Together with capabilities for automated formal reasoning along the lines of current computer algebra systems, such a capability may enable a digital learning system to answer “why?” questions in a human understandable form.
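
A toy sketch hints at what answering “why?” could look like (the rules and facts below are invented for illustration): a forward-chaining reasoner records which rule and which premises produced each conclusion, so every derived fact carries a human readable explanation:

```python
# Toy forward-chaining reasoner that keeps provenance for each conclusion,
# so derived facts can be explained in human readable form.

# Each rule: (conclusion, premises, human readable reason).
rules = [
    ("wet_ground", ["raining"],    "rain makes the ground wet"),
    ("slippery",   ["wet_ground"], "wet ground is slippery"),
]

facts = {"raining": "observed"}  # fact -> explanation of how it was established

changed = True
while changed:  # keep applying rules until no new fact is derived
    changed = False
    for conclusion, premises, reason in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts[conclusion] = f"{reason} (from: {', '.join(premises)})"
            changed = True

def why(fact):
    """Answer a 'why?' question from the recorded provenance."""
    return facts.get(fact, "unknown")
```

Here `why("slippery")` returns a reason chained back to its premises, which is exactly the kind of trace current machine learning systems cannot produce.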

Desirable characteristics of human scale systems

Towards the end of the workshop participants gathered to summarise results and outlined their particular expectations in relation to systems that would deserve the label human scale:


  1. Systems must become simple and give me the basic functions, and then enable me to decide which additional functions I want to use for collaboration with other people and systems.
  2. I like to know the values of the companies and people that provide the software that I use – such as the values of the people producing driver-less cars. What are their ethics?
  3. I don’t want to be locked into any proprietary data formats. I only use data sources that come in a format that is portable.


  1. Applications must be simplified. They are still very geeky.
  2. Having to “look” for functionality is a time waster.
  3. The applications are still not human. They are clearly machines. That might need to change in order for applications to become more “Marg friendly”.
  4. I want to use applications on my terms, and like to be able to choose who I share myself with.
  5. Currently, explicitly asking a potential customer what they might like is not being done.
  6. The definition of customer is really important. There are lots of different types of customers, within organisations and beyond the organisational boundary.
  7. I am interested in multi-level value co-creation with various customers.
  8. Data needs to be secure and needs to be persisted reliably.
  9. I would like an easy way for sharing knowledge, better than text.


  1. I would like to use simpler and more consolidated systems.
  2. There are so many places where we have to reiterate the same information. Systems should be easier to use.
  3. I should not have to compromise my privacy in order to access useful functionality.
  4. Systems should not be so intrusive as to constantly give me updates and messages / reminders.
  5. Systems should work harder to provide a service that I want.
  6. Systems should give me the ability to collaborate without sacrificing data.


  1. I want to interact with systems via a simple language.
  2. Systems should be modular.
  3. I would like different tools for expressing myself.
  4. Use and reuse of information artefacts should be enabled and encouraged.


  1. Systems must be agent based.
  2. Systems must be trustworthy.
  3. I want a digital avatar that I can task with all business as usual activities.
  4. Organisations and systems need to be modelled on the same principles as living systems.


  1. An organisation should have a common language that each of the tools plugs into.
  2. Each employee should be given a choice of tools.
  3. I would like the ability to identify knowledge gaps, and to outsource specific tasks.

These expectations are valuable for informing a set of first principles for human scale computing, and they set the scene for further elaboration and the design of a language that is better than all human languages reliant on linear syntax.

Human scale computing has the potential to guide us towards the design of digital learning systems that deserve the label intelligent without any artificial prefix.