Mini-Manifesto for Designers in AI
Human-centered design methods play a critical role in the production of successful AI systems.
The Misconception of AI as Algorithm
It is a common misconception to view AI as “an algorithm” or to treat Artificial Intelligence as synonymous with Machine Learning. The propagation of these misconceptions leads to unfortunate strategic leadership decisions that treat the adoption of AI as a matter of acquiring the right algorithm. There are many examples of businesses that have tried to buy “AI” and add it to their offerings. For instance, in 2016 the CTO of Pearson, the world’s largest education company, announced that they didn’t need to hire data scientists anymore — they were replacing them with Machine Learning. This is a profound misunderstanding of data science and the role of machine learning in digital products. Yet these misconceptions have been propagated by commercial AI providers and bought by companies in education, healthcare, transportation, etc. While there are surely counter-examples, it is almost never possible to simply attach a “black box” algorithm to an existing product or service. Nevertheless, contemporary training programs for AI/ML engineering often begin with the assumption that the data needed for Machine Learning is already available. In practice, this is rarely, if ever, the case.
Role of Design in AI
The purpose of this mini-manifesto is to point out the critical role that human-centered design plays in the production of successful AI systems. Human-centered designers are trained in several research methods that are critical to the successful implementation of AI, such as modelling human-technical systems (e.g., contextual design methods) and needfinding methods for identifying the “right” problem to solve. Other methods, like system mapping, can help identify available sources of data and the potential actions that an algorithm might autonomously affect.
To put it simply, here is a non-exhaustive list of the roles that designers play in the successful implementation of AI systems:
Aligning to Needs: Designers can document the needs of different stakeholders and ensure that their needs, interests and values are supported by the AI system.
UX to Motivate Human Action: Many AI systems depend upon successfully changing human behavior through information displays (e.g., recommendation systems). Designers can play a role in making these user interfaces motivating and acceptable.
For instance, the display of options for Gmail auto-replies will only work if people use them. Artificial intelligence that isn’t used isn’t very intelligent.
UX to Motivate Human Data Collection: Designers can ensure that systems are successfully capturing the “right” data that is necessary to support the needs of the AI system. This often involves creating compelling UX to facilitate the human labeling of complex data; for instance, Gmail’s “Mark as Spam” UI elements or Netflix ratings.
UX for AI Governance: Designers can produce robust and accessible interfaces for human configuration, monitoring and governance of AI systems. AI systems are never fully autonomous — they will always have some interface with people — and designers can help ensure this interface is a good one.
System Activity Automation: Designers can document existing system activities and workflows to identify needs and opportunities for automation through AI. Like scientific management practices of old, a close understanding of the particular activities and workflows of intelligent human actors can support the automation of their decision-making heuristics into technical processes and artifacts. This can be as simple as creating better checklists or as complicated as creating robots. Both can support successful AI systems. Yes, checklists can be AI.
Aligning Metrics to Goals and Values: Perhaps most importantly, designers can play a critical role in setting the specific outcome measures for AI optimization — the so-called “objective function”. This is the value center of AI — this number defines what the system wants. If we want AI that can increase X, then the system needs to be able to measure X — and we’d better be sure that X is actually what we want to increase. It can be incredibly challenging to quantify our values and settle on what precisely we wish to optimize. For instance, while Amazon wants to optimize revenue growth, short-term revenue gains (which are easy to measure) aren’t worth long-term threats to the brand (which are hard to measure). The ability to negotiate specific outcome metrics is a skill that designers need to develop, as part of an overall set of skills around data-driven design. It takes a nuanced, humanistic view of value and rational thinking skills about measurement: namely, anything can be measured, but always with error.
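To make the objective-function point concrete, here is a minimal sketch. The function names, numbers, and the brand-health weight are all invented for illustration — the point is only that the weight a team negotiates decides which behavior an optimizer will prefer.

```python
# Hypothetical sketch: an objective that balances an easy-to-measure metric
# (short-term revenue) against a harder-to-measure one (brand health).
# All names and weights are illustrative assumptions, not a real system.

def objective(short_term_revenue: float, brand_health: float,
              brand_weight: float = 0.5) -> float:
    """What the system 'wants': the single number the AI will optimize."""
    return short_term_revenue + brand_weight * brand_health

# Two candidate strategies:
aggressive = objective(short_term_revenue=100.0, brand_health=-40.0)
sustainable = objective(short_term_revenue=90.0, brand_health=10.0)

# The weight we chose decides which strategy the optimizer prefers.
print(aggressive, sustainable)  # 80.0 95.0
```

With a brand weight of zero, the aggressive strategy wins; with the weight above, the sustainable one does. Negotiating that weight is exactly the kind of values-into-metrics work the manifesto describes.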
What is artificial intelligence? It’s not magic.
AI is primarily a continuation of the use of artifacts in automation. The first autopilot was invented in 1914: computers are clearly not required for artificial intelligence.
AI is a moving target; what was considered AI 20 years ago is now considered just-another-feature. Once it works, people will say it isn’t AI: e.g., search, recommendations, maps, etc.
Deep learning is not required for artificial intelligence.
Artificial intelligence is not a human-like entity, nor should it be treated like one.
Designing artificial intelligence always involves the coordination of autonomous processes in the context of existing human systems.
Just as we treat human intelligence as being distributed across artifacts and people, so too should we treat artificial intelligence as a distributed system.
Building artificial intelligence can be delusional — usually, what we really want is to build more intelligent systems. What does that mean? “Intelligence measures an agent’s ability to achieve goals in a wide range of environments” (Legg & Hutter, 2007). We don’t need to complicate it any further: intelligence refers simply to the ability of a system to be successful. More intelligent systems should be more successful. If they aren’t, that’s not very smart, is it? And sometimes, simple is smart.
How do manifestos conclude? Abruptly!
Roles that Designers Play in AI Production
The designer’s role is to make systems more intelligent. But what do designers do when they design for AI?
Designers are needed in AI research:
To help find the right problems to solve.
To discover and characterize human needs.
To ensure that AI metrics are aligned with real human values.
To ensure that the right data is collected.
To provide motivating UX for collecting human-labeled data.
To consider what might go wrong.
To design UX for AI monitoring, configuration and governance.
To understand the workflows of intelligent actors so that their decision-making can be automated into technical processes and artifacts.
Useful Models for Understanding AI and Cybernetic Systems
AI algorithms will likely remain black boxes, understood predominantly through metaphor and experience. Most design materials are understood by their useful properties, not by their molecular construction.
What Designers Need to Know
What specifically do designers need to know about AI in order to productively contribute to the design of AI systems? It is helpful to understand the simplicity of a cybernetic system. A cybernetic system, at its most primitive, is like a thermostat. A thermostat regulates temperature by having a means of measuring temperature, a means of modifying temperature, a way of accepting human input for the goal temperature, and an autonomous mechanism that aligns the measurement with the goal.
Even in powerful reinforcement learning algorithms, the story isn’t much different. An RL algorithm needs to have input about the current state of the environment, it needs to have the ability to affect the environment and it needs to have a reward signal. Designers can contribute to systems involving reinforcement learning by mapping out the space of potential actions that an algorithm might be able to autonomously affect, mapping out the space of possible input data in the known environment and by helping to define the specific reward function. Setting appropriate metrics for success (the reward signal) is of critical importance — and a task naturally suited to designers. Designers are those most likely to be documenting the needs and values of different stakeholders in a system; they should be prepared to negotiate the specific metrics of success, taking into account these different needs and the implications of optimizing some metrics over others.
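The state/action/reward loop can be sketched with a deliberately tiny example. The environment here is an invented two-option bandit (the action names, reward probabilities, and epsilon-greedy strategy are all assumptions for illustration, not any particular product): the agent observes, acts, receives the reward signal that designers help define, and updates its estimates.

```python
import random

# Minimal sketch of the reinforcement-learning loop: choose an action,
# receive a reward, update value estimates. The two actions and their
# hidden success rates are invented for illustration.

random.seed(0)
true_rewards = {"show_banner": 0.3, "show_summary": 0.7}  # hidden from agent
estimates = {a: 0.0 for a in true_rewards}
counts = {a: 0 for a in true_rewards}

for step in range(500):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)
    # The reward signal — the success metric designers help negotiate.
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the action the agent learned to prefer
```

Whatever reward we write into that one line is what the system will relentlessly pursue — which is why defining it is a design decision, not merely an engineering one.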