A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision-making

Bibliographic details
Authors: Pierce, Robin (author) ; Sterckx, Sigrid (author) ; Van Biesen, Wim (author)
Media type: Electronic article
Language: English
Check availability: HBZ Gateway
Interlibrary loan: Interlibrary loan for the Specialised Information Services
Published: Wiley-Blackwell 2022
In: Bioethics
Year: 2022, Volume: 36, Issue: 2, Pages: 113-120
RelBib Classification: NCH Medical ethics
NCJ Ethics of science
ZG Media studies; Digitality; Communication studies
Further subjects: B Algorithms
B medical semantics
B medical AI
B Decision Support
B e-alerts
B Clinical care
Online access: Full text (free of charge)
Description
Summary: The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms: how their real-world meaning is translated into machine language and, conversely, how their output is understood by the end user. This “semantic” black box adds to the “mathematical” black box present in many AI systems, in which the underlying “reasoning” process is often opaque. Thus, whereas it is often claimed that the use of AI in medical applications will deliver “objective” information, the true relevance or meaning to the end user is frequently obscured. This is highly problematic, as AI devices are used not only for diagnostic and decision support by healthcare professionals but can also deliver information to patients, for example to create visual aids for use in shared decision-making. This paper examines the range and extent of this problem and its implications, on the basis of cases from the field of intensive care nephrology. We explore how the problematic terminology used in human communication about the detection, diagnosis, treatment, and prognosis of intensive care nephrology concepts becomes a much more complicated affair when deployed in the form of algorithmic automation, with implications extending throughout clinical care and affecting norms and practices long considered fundamental to good clinical care.
ISSN: 1467-8519
Contained in: Bioethics
Persistent identifiers: DOI: 10.1111/bioe.12924