Project Title
Trust in Human-ML Interactions: A Review and Case Example in Health Care
Organization
Schwartz Reisman Institute for Technology and Society
About
I brought a health systems perspective to this working group, which examined how trust in human-ML interactions was conceptualized across different disciplines.
As applications of advanced digital technologies in health systems continue to rise, trust has emerged as an essential consideration in their development and use. Trust in human-ML interactions, in particular, is understood and approached in many different ways. In the report I outline three contrasting traditions that characterize multi-disciplinary perspectives on trust in human-ML interactions in public health and health care.
Approach
The objective of this conceptually oriented analysis was to consider different approaches to, and understandings of, trust in human-ML systems across different disciplines associated with health systems research. The review process was restricted to academic journal articles and involved keyword searches in Google Scholar and journal-specific searches across key disciplines. Reference checking and citation tracking of highly cited sources was also performed, resulting in the inclusion of additional sources. Sources were selected for inclusion based on a principle of pluralism, intended to promote disciplinary diversity among the included sources.
Results Snapshot
Three contrasting traditions characterize multi-disciplinary perspectives on trust in human-ML interactions in public health and health care.
Cognitivist: Cognitivist approaches to trust tend to privilege the role of mental processes in understanding, promoting, or achieving trust. ML systems are understood as strictly technical systems that exist independently of, but shape and are shaped by, human thoughts and activities. Interactions between humans and ML systems are symmetrical, meaning that they can be modelled according to the influence of discrete individual, technical, or social variables.
Social-relational: Social-relational approaches to trust situate trust in relation to broader collections of actors, objects, ideas, and institutions. Trust is considered situational, contingent, and often difficult to generalize beyond specific cases. ML systems and human activities are inextricably linked and must be considered together.
Critical: Critical approaches to trust question the role, significance, and understanding of trust as a concept. While also often relational in their understandings of trust, critical approaches more explicitly situate the concept of trust, and trusting relationships, in relation to broader distributions of power and resources. Interactions between humans and ML systems are not just cognitive, or context-dependent, but examples of world-making practices that convene humans, technologies, and other objects and ideas.
Across all of these traditions are different emphases on trust in: a) ML systems themselves (e.g., software, or hardware), b) individuals who create, engage with, sponsor, or use ML systems (e.g., developers, patients, providers, policymakers, administrators), and c) health services or systems in which ML systems are embedded (e.g., primary care, tertiary care, insurance, public health, administration).
Case Example: AI Scribes
AI scribes, also known as ambient virtual scribes, or AutoScribes, rely on natural language processing (NLP) and machine learning (ML) to automate clerical medical tasks. These tasks include, but are not limited to:
Transcribing conversations between health care providers and patients
Summarizing clinical events and encounters in structured clinical notes (e.g., SOAP)
Generating imaging or pathology reports
Synthesizing evidence-based answers to medical questions
As health care systems continue to grapple with health care provider burnout and retention, these tools have been positioned as a possible solution to challenges associated with electronic medical record (EMR)-related administrative tasks.
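To make the pipeline above concrete, the sketch below shows one step of it: sorting transcribed lines of a clinical encounter into the four sections of a SOAP note. All names and keyword cues here are hypothetical and purely illustrative; a real AI scribe would rely on trained NLP/ML models rather than the naive keyword heuristic used in this sketch.

```python
# Minimal, illustrative sketch of the structured-note step in an AI scribe
# pipeline: assigning transcript lines to SOAP note sections.
from dataclasses import dataclass, field

@dataclass
class SOAPNote:
    subjective: list = field(default_factory=list)  # patient-reported symptoms
    objective: list = field(default_factory=list)   # observations, measurements
    assessment: list = field(default_factory=list)  # clinical interpretation
    plan: list = field(default_factory=list)        # next steps, prescriptions

# Hypothetical cue words mapping transcript lines to SOAP sections.
# A production system would use an ML classifier or summarization model instead.
CUES = {
    "subjective": ("feel", "pain", "complain"),
    "objective": ("blood pressure", "temperature", "exam"),
    "assessment": ("likely", "diagnosis", "consistent with"),
    "plan": ("prescribe", "follow up", "refer"),
}

def draft_soap_note(transcript_lines):
    """Sort transcript lines into SOAP sections by keyword cues (naive)."""
    note = SOAPNote()
    for line in transcript_lines:
        lowered = line.lower()
        for section, cues in CUES.items():
            if any(cue in lowered for cue in cues):
                getattr(note, section).append(line)
                break  # assign each line to at most one section
    return note
```

Even this toy version surfaces the accuracy concern discussed below: lines that match no cue (or the wrong cue) are silently misplaced or dropped, and it remains the provider's responsibility to catch and correct such gaps.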
There are a number of social, ethical, legal, and technical considerations associated with AI scribes that require attention. These include, but are not limited to:
Value: To date, evidence to support claims that AI scribes will reduce health care provider workload, both overall and with respect to documentation requirements, is limited and mixed. AI scribes are also primarily oriented to those responsible for generating clinical notes, and would therefore not affect the workload of other health care professionals experiencing burnout.
Accuracy: Relatedly, concerns about the accuracy of AI scribes with different populations have been reported. Speech detection must be accurate and precise for those with different accents, dialects, or disabilities affecting speech. To achieve this, models will need to be trained on diverse samples; however, this may require larger amounts of data that can be difficult to obtain in health care settings due to data protection and privacy regimes.
Professional Liability and Informed Consent: Ultimately it will be up to health care providers to determine whether details of a clinical encounter were missed by an AI scribe, and to ensure that those gaps are rectified. This may open up new practice liabilities, and providers will need to ensure that consent is obtained prior to use of any system.
Privacy, Security, and Responsible Data Governance: AI scribes generate new forms of digital data that need to be systematically considered as part of broader AI and data governance regimes. Individual health care providers do not have the resources or the power to negotiate directly with vendors. As such, it will be important to establish processes, standards, and guardrails, and to educate providers about their data protection and privacy obligations, including those associated with their regulatory colleges.
Project Status
Final report in development.