CIIC, 16 Sept 2017, Auckland

Essence of humanity

Participants quickly agreed that the new problem statement submitted by Lena Waizenegger provides an interesting angle for exploring “the essence of humanity” and is also a perfect opportunity to build on the results from the previous CIIC Auckland workshop related to human scale computing:

My focus is on the interaction and collaboration processes of humans and intelligent machines. I want to investigate how work processes and work practices change when human employees and machines work collaboratively to fulfil specific tasks in the healthcare and construction sectors.

Invariants across culture, space and time

Jorn used a few introductory slides on the MODA + MODE human lens to set the scene with a small set of categories that are invariant across human cultures, space and time. Whilst the categories of the human lens are stable, concrete instances of these concepts, and the links between instances, yield models of specific human social behaviour within particular cultural environments.

[Figure: the MODA + MODE human lens]

We identified three categories of agents with distinct characteristics:

  1. Humans
  2. Cognitive assistants for humans – to compensate for human cognitive limits and biases in an increasingly complex world, and as needed, to act as a digital avatar on behalf of a human. Cognitive assistants offer an extension and/or delegation of human agency. In the simplest cases, cognitive assistants can be embodied in a smartphone or a similar device with a digital screen, or in an augmented reality display. In the future it is conceivable that some cognitive assistants will directly interface with the human body or brain.
  3. Robots – to assist humans with physical labour, to provide companionship, and as needed, to embody an avatar of another human. Robots offer an extension and/or delegation of tasks in the physical world.

Collaboration between agents

The following picture captures four conceptual layers that are involved in the interaction between two agents:

  1. Motivations – the fundamental assumptions and beliefs about the world that an agent holds within its knowledge repository
  2. The physical body of the agent, which acts as a container for knowledge and for the sensors and actuators needed to interact with the environment and with other agents
  3. Sensors and actuators – which mediate information flows from/to the physical environment and the manipulation of artefacts in the environment
  4. External events in the environment that are produced or initiated by another agent

[Figure: human–machine interaction layers]

Interaction between agents is indirect and always modulated by the filters of perception (limited by sensory capabilities) and interpretation (constrained by the extant beliefs of each agent).
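
To make the layered model concrete, the following minimal TypeScript sketch expresses the four layers and the two filters. All names (Motivation, ExternalEvent, Agent, perceive, interpret) are illustrative assumptions of this sketch, not part of any existing framework.

```typescript
// Layer 1: motivations held in the agent's knowledge repository.
interface Motivation {
  belief: string;   // a fundamental assumption about the world
  strength: number; // degree of commitment, between 0 and 1
}

// Layer 4: an external event produced or initiated by another agent.
interface ExternalEvent {
  source: string; // identifier of the originating agent
  signal: string; // raw observable content
}

// Layers 2 and 3: the embodied agent with sensors and actuators.
class Agent {
  constructor(
    public readonly name: string,
    private motivations: Motivation[],
    // Perception filter: limited by sensory capabilities; events the
    // agent cannot sense are dropped entirely (returns null).
    private perceive: (e: ExternalEvent) => string | null,
  ) {}

  // Interpretation filter: percepts are reconciled with extant
  // beliefs, so interaction between agents is always indirect.
  interpret(event: ExternalEvent): string | null {
    const percept = this.perceive(event);
    if (percept === null) return null; // below sensory threshold
    const related = this.motivations.find((m) => percept.includes(m.belief));
    return related
      ? `${percept} (consistent with belief: ${related.belief})`
      : `${percept} (no matching belief; interpretation uncertain)`;
  }

  // Actuator: produce an event that other agents may observe.
  act(signal: string): ExternalEvent {
    return { source: this.name, signal };
  }
}

// Two agents interpret the same event differently: the signal is
// filtered once by perception and once by interpretation.
const alice = new Agent(
  "alice",
  [{ belief: "robots help", strength: 0.8 }],
  (e) => e.signal, // alice perceives the full signal
);
const bob = new Agent(
  "bob",
  [],
  (e) => (e.signal.length <= 20 ? e.signal : null), // bob misses long signals
);
const event = alice.act("robots help with heavy lifting");
console.log(alice.interpret(event)); // consistent with alice's belief
console.log(bob.interpret(event));   // null: beyond bob's sensory capability
```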

The question of how work processes and cultural practices change when humans interact with each other via cognitive assistants and collaborate with robots is a core concern of human scale computing.

The group briefly touched on the potential of robots in the context of healthcare, including the potential for robots as companions. This topic deserves a dedicated workshop for discussion of concrete use cases. Some participants see the potential benefits, whilst others see significant risks.

Cultural bias may play a role in the attitude to robots. In Japanese culture robots are associated with positive attributes, and their purpose is clearly that of a helper, whereas in Western culture robots are associated with negative attributes, fuelled by Hollywood narratives and by the military focus of robotics research in the US.

Cognitive assistants can play a key role when humans engage in model building and knowledge sharing, for example by facilitating validation by instantiation, by alerting humans to potential semantic equivalences in different models, and by automatically translating concepts into the preferred languages or domain specific jargons of all collaborating agents. In the picture below, the red connections illustrate how a dialogue between two humans is mediated by cognitive assistants.

[Figure: assisted communication between two humans, mediated by cognitive assistants]
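
As an illustration of this mediation, here is a minimal TypeScript sketch of how two cognitive assistants might translate between their owners' preferred jargons via a shared concept vocabulary. The Assistant shape, the mediate function, and the example mappings are all hypothetical assumptions of this sketch.

```typescript
type ConceptId = string;

// Each assistant keeps bidirectional mappings between its owner's
// preferred jargon and a shared concept vocabulary.
interface Assistant {
  owner: string;
  toShared: Map<string, ConceptId>;
  fromShared: Map<ConceptId, string>;
}

// Mediate one term of a dialogue: translate the sender's jargon into
// the shared vocabulary, then render it in the receiver's jargon.
// Failures are surfaced rather than hidden, alerting both humans to a
// potential gap or semantic mismatch between their models.
function mediate(term: string, sender: Assistant, receiver: Assistant): string {
  const concept = sender.toShared.get(term);
  if (concept === undefined) {
    return `${term} [untranslated: no shared concept]`;
  }
  const rendered = receiver.fromShared.get(concept);
  return rendered ?? `${term} [concept ${concept} unknown to ${receiver.owner}]`;
}

// Usage: a clinician and a builder use different jargon for the same
// shared concept, and their assistants bridge the gap.
const clinicianAssistant: Assistant = {
  owner: "clinician",
  toShared: new Map([["patient handling", "manual-lifting-task"]]),
  fromShared: new Map([["manual-lifting-task", "patient handling"]]),
};
const builderAssistant: Assistant = {
  owner: "builder",
  toShared: new Map([["heavy lifting", "manual-lifting-task"]]),
  fromShared: new Map([["manual-lifting-task", "heavy lifting"]]),
};
console.log(mediate("patient handling", clinicianAssistant, builderAssistant));
// -> "heavy lifting"
```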

Variabilities in collaboration patterns

Following the lunch break the discussion explored how human motivations and interaction styles can vary between (sub)cultures or organisations, between individuals (neurodiversity), and across space (regional variation) and time (cultural evolution). Different cultures conceptualise the world in different ways, and similar effects can be observed on a smaller scale in the different ways in which organisations architect their business processes and software systems.

Whilst the human lens (above) identifies a set of invariant categories for describing human cultures, there are also significant individual differences in human cognitive lenses and human behaviour that are not the result of cultural programming.

[Figure: society is disordered]

Cultural and technological evolution

Cognitive assistants and other tools may be needed to enable human societies to benefit from human creativity whilst minimising the risks.

[Figure: the reason for hope and despair is one and the same]

[Figure: the system of cultural rituals]

Note: participants may find the poster that Xaver Wiesmann presented this week at the Cultural Evolution Society conference to be of interest.

Establishing trusted collaboration

Trusted collaboration is notoriously hard to achieve, especially in organisations that are affected by high rates of staff turnover.

[Figure: trusted collaboration]

We started to explore concrete tools and techniques for improving the trustworthiness of data, for counteracting cognitive biases, and for improving the resilience of collaboration systems.

  • For example, a next-generation Web technology could be designed such that all agents using the technology must tag every statement or data item they communicate or publish with a personal estimate of correctness or trustworthiness, expressed as a probability between 0 and 1, with a default of 0 (incorrect / fictitious information). The revolutionary change would not be the technological capability, which can easily be implemented, but the cultural shift in communication practices, which positions trustworthiness as a visible top-level concern. Information that cannot be trusted would become worthless.
  • Attempts at deception would become much harder to conceal, as cognitive assistants can easily inform users about implausible levels of confidence. In particular, tools could easily compute aggregate trustworthiness / confidence ratings across logical chains of reasoning (see the sketch after this list). The extent to which humans engage in wild speculation and wishful thinking would quickly become apparent.
  • The more cognitive assistants are used, and the more humans come to rely on autonomous robots and digital agents as avatars for interacting with other agents, the more the architecture of knowledge repositories will shift towards a distributed and decentralised design – resulting in a significant improvement in the resilience of collaboration.
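
To make the tagging scheme and the aggregation of confidence across chains of reasoning concrete, here is a minimal TypeScript sketch. The TaggedStatement shape, the publish helper, and the independence assumption behind the product rule are assumptions of this sketch, not features of any existing Web technology.

```typescript
// Every published statement carries a trustworthiness estimate in
// [0, 1], defaulting to 0 (incorrect / fictitious information).
interface TaggedStatement {
  author: string;
  text: string;
  trustworthiness: number; // probability between 0 and 1
}

function publish(
  author: string,
  text: string,
  trustworthiness = 0, // untagged information defaults to worthless
): TaggedStatement {
  if (trustworthiness < 0 || trustworthiness > 1) {
    throw new RangeError("trustworthiness must lie in [0, 1]");
  }
  return { author, text, trustworthiness };
}

// Aggregate confidence across a logical chain of reasoning: if a
// conclusion depends on every premise, and the ratings are treated as
// independent probabilities, the chain is only as trustworthy as the
// product of its parts.
function chainConfidence(chain: TaggedStatement[]): number {
  return chain.reduce((acc, s) => acc * s.trustworthiness, 1);
}

// A single speculative link drags the whole chain down, making wild
// speculation and wishful thinking immediately visible.
const chain = [
  publish("alice", "the sensor data is calibrated", 0.95),
  publish("alice", "the model generalises to new sites", 0.6),
  publish("alice", "therefore the forecast is reliable", 0.9),
];
console.log(chainConfidence(chain).toFixed(2)); // "0.51"
```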

[Figure: collaboration]

Towards the end of the day we agreed that the topic of trustworthy information, and the design of new types of dialogues and interaction patterns specifically for validation and peer review, is worth exploring in much more detail at the next CIIC workshop. The following aspects and features were identified:

  1. Dialogue
  2. Validation via instantiation (provision of concrete examples)
  3. Validation by multiple agents (humans and machines)
  4. The role of cognitive assistants
  5. Appropriate / desirable level of configurability for each agent
  6. Mechanisms for aggregating supporting evidence and trustworthiness ratings
  7. Capturing of context (domain, time, location, …)
  8. Dangers and advantages of cultural differences
  9. Alignment of the protocols of human-human communication with the protocols for human-machine communication and machine-machine communication
  10. Ensuring that the output from machine learning systems is rendered in a form that is understandable by humans

See you at the next CIIC workshop on 2 December!