Trusting decisions: Doctors’ attitudes towards explanation by artificial intelligence

Stefano Triberti
2023-01-01

Abstract

With an expected investment of 6.6 billion dollars by 2021, Artificial Intelligence (AI) will guide the revolution of healthcare: machine learning technologies can analyze genomic data, patient characteristics, and scientific literature to provide diagnoses and identify treatments. Comparison studies report a growing number of examples of such software outperforming human doctors. However, it is still the human health professional who has to interpret AI outcomes and, more importantly, make medical decisions and share them with patients and caregivers. Yet doctors face a dramatic increase in medical controversies: for this reason, they tend to prescribe more medical examinations than needed as a preventive strategy (a phenomenon known as "defensive medicine") and are more cautious and doubtful in everyday practice. In this context, an important issue is that AIs are not able to explain their own elaboration processes: as machine-learning black boxes, they provide only final outcomes and little additional information (e.g., updated definitions or scientific literature). It is therefore important to analyze doctors’ attitudes and behavior towards AIs for healthcare, in order to provide guidelines for the optimal design of human-AI interfaces. In the present study, 30 medical doctors saw four versions (randomly ordered) of the same diagnosis made by an AI and responded to questions adapted from Source Credibility research. Results show that health professionals consider an AI more useful, credible, and trustworthy when it performs differential diagnosis (e.g., explains why other options were excluded) or describes its own elaboration as a human-like thinking process; on the other hand, they hold significantly more negative opinions of AIs that merely provide definitions or scientific references. Indications for the development of effective human-AI interaction in healthcare are provided.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12607/40387