Machines that can learn and correct themselves already outperform doctors at some tasks, says Jörg Goldhahn, but Vanessa Rampton and Giatgen A. Spinas maintain that machines will never replicate the interrelational, therapeutic quality of the doctor–patient relationship
Yes – Jörg Goldhahn
Artificial intelligence (AI) systems simulate human intelligence by learning, reasoning, and self-correction. This technology already shows the potential to be more accurate than physicians at making diagnoses in specialties such as radiology, dermatology, and intensive care; at generating prognostic models; and at performing surgical interventions. And in 2017 a robot passed China’s national medical exam, exceeding the minimum required score by 96 points.
More precise, reliable, and comprehensive
Even if machines are not yet universally better than doctors, the challenge of making them better is technical rather than fundamental, given their near unlimited capacity for data processing and subsequent learning and self-correction. Such “deep learning” is a form of machine learning in which systems learn constantly, free of the cultural and institutional difficulties intrinsic to human learning, such as schools of thought or cultural preferences. These systems continually integrate new knowledge and perfect themselves at a speed that humans cannot match. Even complex clinical reasoning, including ethical and economic concerns, can be simulated.
Increasing amounts of more comprehensive health data from apps, personal monitoring devices, electronic medical records, and social media platforms are being integrated into harmonised systems such as the Swiss Personalised Health Network. The aim is to give machines as complete a picture as possible of people’s health over their lives and maximum knowledge about their disease.
The notion that today’s physicians could approximate this knowledge by keeping abreast of current medical research while maintaining close contact with their patients is an illusion, not least because of the sheer volume of data. Here too, machines have the advantage: natural language processing enables them to “read” rapidly expanding scientific literature and further teach themselves, for example, about drug interactions.
The key challenges for today’s healthcare systems are economic: costs are rising everywhere. Introducing AI-driven systems could be cheaper than hiring and training new staff. AI systems are also universally available and can even monitor patients remotely. This is important because demand for doctors in much of the world is growing more quickly than supply.
Less biased, less unstable, still caring
The ability to form relationships with patients is often portrayed as the trump card in favour of human physicians, but this may also be their Achilles’ heel. Trust is important for patients’ perception of the quality of their care. But the object of this trust need not be a human; machines and systems can be more trustworthy if they can be regarded as unbiased and without conflicts of interest. Of course, AI systems may be subject to the biases of their designers, but this can be overcome by independent reviews and subsequent iterations.
To say that patients always require empathy from human doctors is to ignore important differences between patients: many, particularly younger, patients with minor complaints simply want an accurate diagnosis and treatment that works. In other words, they may rate correct diagnosis higher than empathy or continuity of care. In some very personal situations the services of a robot could help patients avoid feeling shame.
Even patients who crave interaction, such as those with serious or terminal diagnoses, may find that their needs are better met by machines. Recent studies show that conversational agent systems have the potential to track conditions and suggest care and can even guide humans through the end of life.
Doctors as we now know them will become obsolete eventually. In the meantime, we should expect stepwise introduction of AI technology in promising areas, such as image analysis or pattern recognition, followed by proof of concept and demonstration of added value for patients and society. This will lead to broader use of AI in more specialties and, sooner than we think, human doctors will merely assist AI systems. These systems will not be perfect, but they will be constantly perfecting themselves and will outperform human physicians in many ways.
No – Vanessa Rampton, Giatgen A. Spinas
Machines will increasingly be able to perform tasks that were previously the prerogative of human doctors, including diagnosis, treatment, and prognosis. Although they will augment the capacities of physicians, machines will never replace them entirely. In particular, physicians will remain better at dealing with the patient as a whole person, which involves knowledge of social relationships and normativity. As the Harvard professor Francis Peabody observed in 1927, the task of the doctor is to transform “that case of mitral stenosis in the second bed on the left” into the complex problem of “Henry Jones, lying awake nights while he worries about his wife and children”.
Humans can complete this transformation because they can relate to the patient as a fellow person and can gain holistic knowledge of the patient’s illness as related to his or her life. Such knowledge involves ideals such as trust, respect, courage, and responsibility that are not easily accessible to machines.
Illness is an ill-defined problem
Technical knowledge cannot entirely describe the sickness situation of any single patient. A deliberative patient–physician relationship characterised by associative and lateral thinking is important for healing, particularly for complex conditions and when there is a high risk of adverse effects, because individual patients’ preferences differ. There are no algorithms for such situations, which change depending on emotions, non-verbal communication, values, personal preferences, prevailing social circumstances, and so on. Those working at the cutting edge of AI in medicine acknowledge that AI approaches are not designed to replace human doctors entirely.
The use of AI in medicine, predicated on the belief that symptoms are measurable, reaches its limits when confronted with the emotional, social, and non-quantifiable factors that contribute to illness. These factors are important: symptoms with no identified physiological cause are the fifth most common reason US patients visit doctors. Questions like “Why me?” and “Why now?” matter to patients: contributions from narrative ethics show that patients benefit when physicians can interpret the meaning they ascribe to different aspects of their lives. It can be crucial for patients to feel that they have been heard by someone who understands the seriousness of the problem and whom they can trust.
Linked to this is a more fundamental insight: as Peabody put it, healing illness requires far more than “healing specific body parts.” By definition illness has a subjective aspect that cannot be “cured” by a technological intervention independently of its human context. Curing an organism of a disease is not the same as establishing its health, as health refers to a complex state of affairs that includes individual experience: being healthy implies feeling healthy. Robots cannot understand our concern with relating illness to the task of living a life, a concern bound up with the human context and subjective dimensions of disease.
Medicine is an art
Throughout history, the therapeutic effect of doctor–patient relationships has been acknowledged, irrespective of any treatment prescribed. This is because the physician–patient relationship is a relationship between mortal beings vulnerable to illness and death. Computers aren’t able to care for patients in the sense of showing devotion or concern for the other as a person, because they are not people and do not care about anything.
Sophisticated robots might show empathy as a matter of form, just as humans might behave nicely in social situations yet remain emotionally disengaged because they are only performing a social role. But concern – like caring and respect – is a behaviour exhibited by a person who shares common ground with another person. Such relationships can be illustrated by friendship: B cannot be a friend of A if A is not a friend of B.
A likely future scenario will be AI systems augmenting knowledge production and processing, and doctors helping patients find an equilibrium that acknowledges the limitations of the human condition, something that is inaccessible to AI. Coping with illness often does not include curing illness, and here doctors are irreplaceable.
Commissioned; externally peer reviewed.
V. Rampton, vanessa.rampton[at]mail.mcgill.ca
1 Sahiner B, Pezeshk A, Hadjiiski LM, et al. Deep learning in medical imaging and radiation therapy. Med Phys 2018. doi:10.1002/mp.13264 pmid:30367497
3 Swiss Personalised Health Network. 2017. https://www.sphn.ch/en.html
4 Lim S, Lee K, Kang J. Drug–drug interaction extraction from the literature using a recursive neural network. PLoS One 2018;13:e0190926. doi:10.1371/journal.pone.0190926 pmid:29373599
6 IHS Markit. Association of American Medical Colleges. Complexities of physician supply and demand: projections from 2016 to 2030. 2018. https://www.aamc.org/data/workforce/reports/439206/physicianshortageandprojections.html
7 Brennan N, Barnes R, Calnan M, Corrigan O, Dieppe P, Entwistle V. Trust in the health-care provider–patient relationship: a systematic mapping review of the evidence base. Int J Qual Health Care 2013;25:682–8. doi:10.1093/intqhc/mzt063 pmid:24068242
8 Litvin CB, Ornstein SM, Wessell AM, Nemeth LS, Nietert PJ. Adoption of a clinical decision support system to promote judicious use of antibiotics for acute respiratory infections in primary care. Int J Med Inform 2012;81:521–6. doi:10.1016/j.ijmedinf.2012.03.002 pmid:22483528
9 Wong C, Harrison C, Britt H, Henderson J. Patient use of the internet for health information. Aust Fam Physician
10 Laranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc 2018;25:1248–58. doi:10.1093/jamia/ocy072 pmid:30010941
11 Paasche-Orlow M, Bickmore TW. Conversational Agents to Improve Quality of Life in Palliative Care. National Institutes of Health (NIH) Project, Boston Medical Center. 2016. http://grantome.com/grant/NIH/R01-NR016131-01
12 Peabody F. The care of the patient. JAMA
13 Katz J. The silent world of doctor and patient. Yale University Press, 2002.
14 Interview with Joachim Buhmann. Ich fühle mich von künstlicher Intelligenz überhaupt nicht bedroht [I do not feel at all threatened by artificial intelligence]. Forbes 2017 Feb 9. https://www.forbes.at/artikel/eth-joachim-buhmann.html
15 Creed F, Henningsen P, Fink P. Medically unexplained symptoms, somatisation and bodily distress. Developing better clinical services. Cambridge University Press, 2011:vi. doi:10.1017/CBO9780511977862
16 Fioretti C, Mazzocco K, Riva S, Oliveri S, Masiero M, Pravettoni G. Research studies on patients’ illness experience using the Narrative Medicine approach: a systematic review. BMJ Open 2016;6:e011220. doi:10.1136/bmjopen-2016-011220 pmid:27417197
17 Gawande A. Tell me where it hurts. New Yorker 2018 Jan 23:36–45.
18 Hofmann B. Disease, illness, and sickness. In: Solomon M, Simon JR, Kincaid H, eds. The Routledge companion to philosophy of medicine. Routledge, 2017:16–26.
19 Di Blasi Z, Harkness E, Ernst E, Georgiou A, Kleijnen J. Influence of context effects on health outcomes: a systematic review. Lancet 2001;357:757–62. doi:10.1016/S0140-6736(00)04169-6 pmid:11253970
20 Wingert L. Unsere Moral und die humane Lebensform. In: Sturma D, ed. Ethik und Natur. Suhrkamp (forthcoming).
21 Wingert L. Gemeinsinn und Moral: Grundzüge einer intersubjektivistischen Moralkonzeption. Suhrkamp, 1993.
Published under the copyright licence
"Attribution - Non-Commercial - NoDerivatives 4.0"
No commercial reuse without permission.