Derek Lomas

Assessing Human Wellbeing Using Artificial Intelligence

How might we design artificial intelligence (AI) that contributes to human wellbeing? This is a long-term challenge with some short-term implications. AI is all around us — everything from a Google search to our Facebook feeds to the fraud detection behind most digital purchases. The next decade will see an explosion of data-driven automated systems for optimizing almost anything. We want to make sure that the development of AI will be good for our humanity.

AI ethicists have put together various principles and theories about the moral behavior of AI systems: for instance, that AI should be designed to account for hidden bias or to ensure sufficient human control. But this doesn’t answer the question of what AI systems should be optimizing for in the first place.

An enormous amount of money is put into the design of AI systems that optimize advertising revenue. Why, then, aren’t we designing AI systems to improve human wellbeing? AI systems designed to optimize wellbeing may be science fiction at the moment, but we can already anticipate some of the challenges they will face. Most critically, AI systems need optimization metrics (also called “objective functions” or “evaluation criteria”) to operate.

AI systems need a quantitative outcome or “reward” to optimize. If we want future AI systems to optimize human wellbeing, we are going to need more data. A lot more data. In particular, we are going to need better measures of human wellbeing. This is the underlying motivation for why we think it is so important to design better measurement experiences for gathering wellbeing data.

What we haven’t yet explored is the possibility of using AI itself to help measure wellbeing. For instance, perhaps adaptive surveys could be designed to gather more information in less time. Or perhaps AI could be used to analyze voice sentiment from samples of telephone surveys. Or perhaps AI could analyze our digital behavior, using natural language processing techniques to predict our moods from our writing. (Is that creepy? It seems a little creepy. Good to keep an eye on that.)
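To make that last idea concrete, here is a minimal sketch of scoring short text samples with the off-the-shelf VADER sentiment model from NLTK. The two entries are hypothetical stand-ins for whatever a person might actually write:

```python
# A minimal sketch of the "predict mood from writing" idea, scored with the
# off-the-shelf VADER sentiment model from NLTK. The two entries are
# hypothetical stand-ins for text a person might actually write.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

entries = [
    "Had a great walk with friends this morning.",
    "Another exhausting day; I can't seem to focus on anything.",
]

for text in entries:
    # 'compound' runs from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{score:+.2f}  {text}")
```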

One possibility is using GPT-2, an AI text generation tool, to co-write stories with a person. The person writes a phrase about how they are doing. The computer then offers 2-3 possible continuations, and the person chooses the one they like. The computer generates another 2-3 continuations, and again the person chooses. Sentiment analysis might be more easily applied in this context to analyze a person’s mood. We could then evaluate whether the data predicts other subjective wellbeing metrics.
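In code, that loop might look something like the sketch below, run locally with the open GPT-2 weights through Hugging Face’s transformers library (the model size and sampling settings are my assumptions, not a tested configuration):

```python
# A rough sketch of the choose-a-continuation loop described above, run
# locally with the open GPT-2 weights via Hugging Face transformers.
# The model size and sampling settings here are assumptions.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # reproducible sampling for the sketch

story = input("How are you doing? ")

for _ in range(3):  # a few rounds of choices
    options = generator(
        story,
        max_new_tokens=25,       # short continuations each round
        num_return_sequences=3,  # the 2-3 choices described above
        do_sample=True,
    )
    for i, opt in enumerate(options, 1):
        print(f"{i}: {opt['generated_text']}")
    choice = int(input("Pick a continuation (1-3): ")) - 1
    story = options[choice]["generated_text"]  # chosen text becomes the new prompt

print("\nFinal story:\n" + story)
```

After a few rounds, the accumulated story text could be passed to a sentiment analyzer like the one sketched earlier.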

The person’s prior answers could also be used to “color” the emotional representations in the text generation, following similar work on giving emotional valence to generated text.
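One naive way to implement this coloring would be to prepend a mood cue, derived from the person’s prior answers, to the prompt before sampling, then strip it from what the person sees. Published work on affective generation conditions the model more directly, so treat this prefix trick as an illustrative assumption:

```python
# One naive way to "color" generation: prepend a mood cue, derived from the
# person's prior answers, to the prompt before sampling, then strip it from
# what the person sees. Real affective-generation work conditions the model
# itself; this prefix trick is only an illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def colored_continuations(story: str, prior_valence: float, n: int = 3):
    # prior_valence is a hypothetical score summarizing earlier answers
    mood = "Feeling cheerful, " if prior_valence > 0 else "Feeling down, "
    outputs = generator(mood + story, max_new_tokens=25,
                        num_return_sequences=n, do_sample=True)
    # drop the hidden mood prefix so the person only sees their own story
    return [o["generated_text"][len(mood):] for o in outputs]
```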

A Rapid Prototype

I tried to prototype this interaction with GPT-2 via the site TalkToTransformer. I supposed a person might enter: “I want to see my friends”. How would GPT-2 respond in a dynamic manner, to provide choices? These were the first three choices provided:

[Screenshot: the three continuations suggested by TalkToTransformer]


Say the person chose option 1, “I want to see my friends, get home early, and then we can have a real fight.” How does GPT-2 respond?

[Screenshot: GPT-2’s continuation of the chosen option]

Conclusion: At least in the near term, this style of GPT-2 text development seems more useful for entertainment than assessment. I haven’t yet seen this specific style of interaction with GPT-2, but creating it would be straightforward. If it were built, we could pair users with a wellbeing assessment to see if sentiment analysis of their story choices can predict their emotional state. This could also contribute to research on emotional valence in AI writing.
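If such a study were run, the core evaluation could be as simple as correlating story sentiment with survey scores. Here is a sketch with made-up placeholder numbers:

```python
# A sketch of the proposed evaluation: correlate the sentiment of each
# participant's finished story with an independent wellbeing score. Every
# number below is a made-up placeholder, not a real result.
from scipy.stats import pearsonr

# compound sentiment of each participant's final story (hypothetical)
story_sentiment = [0.62, -0.35, 0.10, 0.48, -0.70]
# matched scores from a standard wellbeing survey, e.g. a 1-5 scale (hypothetical)
wellbeing_scores = [4.1, 2.3, 3.0, 3.8, 1.9]

r, p = pearsonr(story_sentiment, wellbeing_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```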
