Overview
An AI's black-box nature is a major concern in high-stakes use cases such as personalised healthcare. The role of "health expert" shifts from a professional to an AI algorithm and, by extension, to its creators. This project explores two types of explanations, drawn from Explainable AI (XAI), to support users' perception of autonomy in a sleep-tracking application.
Challenge
How can communication sciences help us design interaction features that allow an AI to assist in decision-making without taking control away from the human user? What kinds of explanations are best suited to conveying the relationship between input and output in an AI's logic? Does the intimacy of the data being collected affect users' need to understand an AI's decisions?