
“Current AI systems seek to mitigate AI mistakes by requiring human oversight — keeping the human in the loop and relying on them to detect and fix the AI’s output if needed.

If an AI like ChatGPT is right most of the time, it is really hard as a human to judge whether it is right or wrong in a particular case. And simply adding a small disclaimer pointing out that the AI “may produce inaccurate information,” as ChatGPT does, apparently doesn’t prevent people from falling for the AI’s hallucinations. It also puts the human in the unpleasant position of taking all responsibility for an output or action that was mainly determined by the AI.”

Designing for safe and trustworthy AI
By Cara Storath
