Derek Lomas

Designing AI for Wellbeing

When I was in college, I had a professor who described the scenario of an artificial intelligence system with the objective of maximizing the production of paperclips. The AI was designed to figure out which actions would, in the end, produce the maximum number of paperclips. It eventually replaced the human race in order to produce more paperclips.

What’s the moral? Either we want to avoid super-powerful AI, or we want to make sure we optimize something better than paperclip production. What’s the right optimization target, then? That is, precisely, the critical philosophical question of AI design.

One challenging thing about working with machine intelligence is that it can only operate in terms of numerical optimization: it is really only capable of increasing or decreasing a number. So AI doesn’t just recognize faces; it optimizes facial recognition accuracy scores.
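To make this concrete, here is a minimal sketch in Python, with made-up data (none of it from the article), of what “operating in terms of numerical optimization” looks like in practice: every step of training exists only to push a single scalar, the loss, downward.

```python
import numpy as np

# Toy illustration (not from the article): "learning" here is nothing more
# than repeatedly nudging parameters so that one number, the loss, goes down.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # made-up input data
y = X @ np.array([1.5, -2.0, 0.7])   # made-up targets

w = np.zeros(3)                      # model parameters, starting at zero
for step in range(500):
    error = X @ w - y
    loss = np.mean(error ** 2)       # the single number the system "cares about"
    grad = 2 * X.T @ error / len(y)  # direction in which the loss increases
    w -= 0.05 * grad                 # step the other way, to make it decrease

print(f"final loss: {loss:.6f}")     # everything above exists to shrink this value
```

Swap in a different number to optimize and the same machinery will pursue a different goal, which is exactly why the choice of metric matters.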

So, AI has a certain data need: an optimization metric. What we’d ideally do is align AI needs with human needs, so that they mutually benefit. To do that, we need to translate human needs so they are compatible with the optimization metrics that AI needs in order to operate.

Sam Harris makes a compelling argument that “wellbeing” should be the central objective of humankind. Wellbeing is a resilient and coherent concept with an ancient philosophical history, and, Harris claims, it is a subject that can increasingly be understood scientifically.

There seems, therefore, to be a special opportunity to design AI systems that can understand and optimize wellbeing. How, though, can such an objective be built into AI systems? While much is unknown, it is clear that AI will require effective measures of wellbeing.
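To picture what such a measure might look like in its crudest form, here is a hypothetical sketch in Python: it scores a short self-report survey and exposes the result as a single number that an AI system could be asked to increase. The items and scoring are illustrative only, loosely inspired by brief wellbeing surveys such as the WHO-5, and are not a validated instrument.

```python
from statistics import mean

# Hypothetical illustration only: the items and the 0-5 response scale are
# loosely inspired by short self-report wellbeing surveys (e.g. the WHO-5);
# this is not a validated instrument, just "wellbeing as a single number".
ITEMS = [
    "I felt cheerful",
    "I felt calm and relaxed",
    "I felt energetic",
    "I slept well",
    "I found my days interesting",
]

def wellbeing_score(responses: list[int]) -> float:
    """Map one 0-5 response per item onto a 0-100 score."""
    if len(responses) != len(ITEMS):
        raise ValueError("expected one response per item")
    if any(not 0 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 0-5 scale")
    return mean(responses) / 5 * 100

# One person's weekly self-report collapses into a single scalar, which is
# exactly the kind of measure an optimizing system could be pointed at.
print(wellbeing_score([4, 3, 2, 4, 3]))  # -> 64.0
```

Of course, a single self-report average captures only a sliver of what wellbeing means, which is why designing effective measures is the hard part.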

What, then, is required to design effective measures of wellbeing? Ultimately, it calls for the integration of experiential values (what qualitatively feels right) with a set of rationally derived quantifications of value. To achieve this integration, we look to human-centered design. While psychology builds models that quantify the qualitative, HCD uses those models to create new and valuable experiences. Human-centered design may need to grow in order to contribute to the design of AI systems for wellbeing, and we want to help make this happen.
