Published in Vol 11, No 2 (2022): Jul-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/38655.
Levels of Autonomous Radiology


Viewpoint

1DeepTek Medical Imaging Pvt Ltd, Pune, India

2Dr DY Patil Hospital, DY Patil University, Pune, India

Corresponding Author:

Richa Pant, PhD

DeepTek Medical Imaging Pvt Ltd

3rd Floor, Ideas to Impact, 3, Baner Rd

Pallod Farms, Baner

Pune, 411045

India

Phone: 91 72760 60080

Email: richa.pant@deeptek.ai


Radiology, being one of the younger disciplines of medicine with a history of just over a century, has witnessed tremendous technological advancements and has revolutionized the way we practice medicine today. In the last few decades, medical imaging modalities have generated vast amounts of medical data. The development and adoption of artificial intelligence applications using this data will lead to the next phase of evolution in radiology. It will include automating laborious manual tasks such as annotation and report generation, along with the initial radiological assessment of patients and imaging features, to aid radiologists in their diagnostic and treatment planning workflow. We propose a level-wise classification for the progression of automation in radiology, explaining artificial intelligence assistance at each level with the corresponding challenges and solutions. We hope that such discussions can help us address challenges in a structured way and take the necessary steps to ensure the smooth adoption of new technologies in radiology.

Interact J Med Res 2022;11(2):e38655

doi:10.2196/38655

Keywords



Advancements in artificial intelligence (AI) and machine learning have enabled the automation of time-consuming manual tasks across different industries [1]. With substantial developments in the digital acquisition of data and improvements in machine learning and computing infrastructures, AI applications are also expanding into disciplines that were previously considered the exclusive province of human expertise [2]. From automobiles to health care, industries worldwide are actively adopting AI to transform the way they operate.

The confluence of information and communication technologies with automotive technologies has resulted in vehicle autonomy. This growth is expected to continue in the future due to increasing consumer demand, reduction in the cost of vehicle components, and improved reliability [3]. The Society of Automotive Engineers has classified the progression of driving automation into 6 levels [4], ranging from No Automation (Level 0) to Full Automation (Level 5). The levels of driving automation are characterized by the specific roles played by each of the 3 principal players, that is, the human user (driver), the driving automation system, and other vehicle components. As vehicle autonomy increases with each level of automation, driver intervention is reduced [4].

Similar to the automobile industry, AI is progressively transforming the landscape of health care and biomedical research. A simulated deployment of a natural language processing–based classification algorithm has been shown to enable the automated assignment of computed tomography and magnetic resonance radiology protocols with minimal errors, resulting in a high-quality and efficient radiology workflow [5]. More recently, applications of diagnostic imaging systems have expanded the capabilities of AI in the previously unexplored and more complex health care sector [2]. In radiology, AI applications are being widely adopted for assisted image acquisition, postprocessing, automated diagnosis, and report generation. Automation in this field is still in its infancy, and several clinical and ethical challenges must be addressed before further progress can be made [6].

In this viewpoint, we attempt to categorize and map the advancements and challenges of automation in radiology into 6 levels, similar to driving automation, with radiologists, AI systems, and advanced technologies playing important roles at each level. The subsequent parts of the paper briefly discuss each level, its technical challenges, plausible solutions, and the enabling factors required for transitioning to the next level.


The advancement of AI in the health sector has substantially bridged the gap between computation and radiology, paving the way for automation in radiology practice. We describe the 6 levels of automation in radiology using a taxonomy similar to that used in driving automation. We further attempt to provide a futuristic vision of the challenges that the radiology field may encounter as we progress toward the complete automation of this field. Figure 1 illustrates different levels of automation in radiology, including the challenges at each stage and the factors that enable the progression between levels.

Figure 1. Flowchart depicting the various levels of automation in radiology practice. At each level, the role of the radiologist and artificial intelligence (AI) is outlined, along with the enabling factors required to mitigate the potential challenges for progression to the next level. PACS: picture archiving and communication systems.

Level 0, also known as No Automation, is the stage where a radiologist manually performs every task, from image acquisition and radiographic film processing to diagnostic analysis, without the assistance of AI. We are well past this stage, as recent advances in medical imaging modalities have enabled digital storage and processing of scans, along with some automated assistance to aid the imaging workflow.


At Level 1 automation, a radiologist performs most tasks manually with assistance from machines. Recent technological advancements have digitized medical scans, making it easier for radiologists to store, maintain, and distribute data. Furthermore, newer solutions include features such as contrast-brightness adjustment, assisted stitching of scans, assisted focus adjustment, etc, which simplify the imaging workflow and enable detailed radiological analysis. With everything digitized, these modalities generate enormous amounts of data, and the biggest challenge at this stage is the proper maintenance and storage of data [7]. This is where technologies such as picture archiving and communication systems have provided an economical solution to compress and store data for easy retrieval and sharing [8]. With the advancement in automation, the radiology field is currently experiencing a major paradigm shift in the principles and practices of many computer-based tools used in clinical practice [9].


Partial automation in radiology refers to the use of computer-assisted diagnostic modalities to automate prioritization. However, automation at Level 2 requires radiologist supervision, and the diagnostic decision is not final without the radiologist’s approval. With the advancement of picture archiving and communication systems technology, radiology practices frequently consider upgrades and renovations to further improve efficiency. For example, radiomics is an emerging field that converts radiographic images into mineable high-dimensional data, providing additional features with which to analyze and characterize disease. Machine learning algorithms can be used to extract features from radiographic images that can help make prognostic decisions [10]. Extracted features include the texture, intensity, shape, and geometry of the region of interest [11]. Beyond image-derived features, clinical and molecular profile data can be essential to comprehend complex diseases and ensure the right diagnosis to deliver the best possible treatment [12]. The amalgamation of machine learning, radiomics, and clinical information has the potential to advance precision medicine and clinical practice. Since these technologies are still in their nascent stages of development, radiologists will most likely use them as ancillary tools in making final decisions.

The progress at this level of automation is slow and can be attributed to three major factors:

  1. Lack of high-quality data: There is a limited amount of good-quality medical data because annotation and documented diagnosis by an expert are time-consuming and expensive processes [13]. This becomes a challenge when developing an AI system that must generalize well across unseen data, because the performance of machine learning models is significantly influenced by the size, characteristics, and quality of the data used to train them. The problem of insufficient training data, particularly in cases of rare diseases, can be addressed through data augmentation, in which synthetic data are generated to increase the prevalence of the target category, making the models more robust when evaluated on independent test sets [14]. Generative adversarial networks are the most commonly used neural network models for generating and augmenting synthetic images of rare diseases, such as rheumatoid arthritis and sickle cell disease. Although these techniques allow models to be trained on sparse data sets and produce promising results, the adoption of generative adversarial networks in medical imaging is still in its early stages [15].
  2. Stringent data laws: Medical data are often governed by several data security laws, regulations, and compliance requirements, making it extremely difficult to share and use these data outside a clinical setting [6]. Collaborations between hospitals and technology companies are critical to navigate the barriers posed by data-sharing laws and make the best use of rich medical data to develop advanced solutions for automated and accurate diagnoses.
  3. Cost of technology adoption: Current algorithms for analyzing radiological scans are computationally resource intensive, which significantly increases the cost of adopting these technologies in clinical practices. Therefore, it is important to develop low-power and cost-effective solutions that can be easily adopted by medical organizations. Edge devices can be used as low-cost prescreening tools as they can be deployed remotely and deliver instant results without consuming much bandwidth [16].
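The augmentation strategy described in point 1 can be illustrated with a minimal sketch. The function names are hypothetical, plain NumPy arrays stand in for scans, and only simple label-preserving transforms (flips, rotations, mild noise) are shown; a production pipeline would rely on a dedicated augmentation library or a generative model:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple, label-preserving transforms to a 2D grayscale scan."""
    out = image
    if rng.random() < 0.5:                    # random horizontal flip
        out = np.fliplr(out)
    k = rng.integers(0, 4)                    # random 90-degree rotation
    out = np.rot90(out, k)
    noise = rng.normal(0.0, 0.01, out.shape)  # mild Gaussian noise
    return np.clip(out + noise, 0.0, 1.0)

def oversample_rare_class(images, n_target, seed=0):
    """Grow a small set of rare-class scans to n_target via augmentation."""
    rng = np.random.default_rng(seed)
    augmented = list(images)
    while len(augmented) < n_target:
        src = images[rng.integers(0, len(images))]
        augmented.append(augment(src, rng))
    return augmented
```

Generative adversarial networks extend this idea by synthesizing entirely new images rather than perturbing existing ones.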

Unlike partial automation, where the final decision is entirely dependent on the radiologist, the systems at Level 3: Conditional Automation are robust enough to diagnose and make decisions without radiologist supervision under a predefined set of conditions (ie, those represented in the data used to train the model). If these conditions are not met, a radiologist must be available to override the AI analysis. The efficiency of human-AI collaboration in clinical radiology depends on clinicians properly comprehending and trusting the AI system [17]. One of the major requirements for enabling such human-AI interfaces in radiological workflows is an effective and consistent mapping of explainability to causability [18]. Specialized explainer systems, widely acknowledged as an important feature of the practical deployment of AI models, aim to explain AI inferences to human users [19]. Explainability in radiology can be improved by using localization models, which highlight the region of suspected abnormality (region of interest) in the scan, instead of classification models, which only indicate the presence or absence of an abnormality [20]. Whereas an explainable system does not refer to an explicit human model and only indicates or highlights the decision-relevant parts of the AI model (ie, the parts that contributed to a specific prediction), causability refers primarily to a human-understandable model: it is the degree to which an explanation of a statement to a human expert achieves a defined level of causal understanding while also being effective, efficient, and satisfactory within the context of use [18].
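The contrast between classification and localization models can be conveyed with a toy occlusion analysis: rather than returning only a score, the system reports how much the score drops when each image patch is masked, thereby highlighting the decision-relevant region. This is a hypothetical sketch, not a clinical method; `predict` stands in for any trained scoring model:

```python
import numpy as np

def occlusion_map(image, predict, patch=4):
    """Crude explanation map: score drop when each patch is masked out."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0   # occlude one patch
            heat[i // patch, j // patch] = base - predict(masked)
    return heat
```

Patches whose removal sharply lowers the model's score are the ones the model relied on, giving a coarse region-of-interest overlay.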

Radiology scans often suffer from high interreader variability, which arises when 2 or more readers disagree on the results of a scan. This may lead to uncertainty in the ground truth labels. The problem of ambiguous ground truth can be mitigated by using expert adjudication [21] or multiphasic review [22] to create high-quality labels, which can yield better models than training on the original labels alone [20]. Additionally, imaging protocols, manufacturers of imaging modalities, and the processes for storing and processing medical data differ between organizations, which impedes the use of data from different sources for AI applications [23]. These factors result in AI systems being developed on a limited distribution of data, making them highly susceptible to failure when conditions related to demographics, race, gender, or time differ from those seen during training. For example, Dou et al [24] developed a COVID-19 detection model using data sets from hospitals in Hong Kong. The model performed extremely well in identifying abnormalities in Chinese data sets but underperformed on German data sets with different population demographics [24]. Cross-center training of AI models on different demographics and distinct cohort features would help the models learn from multiple sources and mitigate the problem of generalizability.
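As a toy illustration of resolving interreader disagreement, the majority-vote-with-adjudication idea can be sketched as follows (function and parameter names are hypothetical; in practice, tied or contentious cases are escalated to a senior adjudicating radiologist):

```python
from collections import Counter

def adjudicate(reads, tie_breaker=None):
    """Resolve one scan's label from multiple readers by majority vote.

    Ties are settled by an adjudicator's read (tie_breaker) when given;
    otherwise the case is flagged (None) for expert review.
    """
    counts = Counter(reads)
    top = counts.most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:   # no clear majority
        return tie_breaker                         # None flags for review
    return top[0][0]
```

For example, `adjudicate(["abnormal", "abnormal", "normal"])` resolves to the majority label, while a 1-1 split is deferred to the adjudicator.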


Advancing from Level 3, the AI systems at Level 4: High Automation would make decisions without the assistance of a radiologist. Human intervention would be required only in complex cases where the AI requests it. Such systems would require extensive clinical validation before they could be used reliably. As summarized by Kulkarni et al [20], these systems would need to undergo both internal and external validation to evaluate their performance on unseen data. The Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement [25] specifies guidelines for reporting the development and validation of such diagnostic or prognostic models. Since these AI systems must operate without conditional restrictions, they must generalize across a wide variety of data from different sources without inheriting bias from the training data. For example, Obermeyer et al [26] exposed a shortcoming in a widely used health care algorithm that identified Black patients as being healthier than equally sick White patients. The racial bias exhibited by this system led to an unequal distribution of health care benefits to Black patients.

Elgendi et al [27] observed that adopting simple data augmentation and image processing techniques, such as normalization, histogram matching, and image reorientation, can aid in standardizing images from different sources. The standardization of annotation, data processing, and storage protocols is also vital for this data to be used efficiently in the development of AI systems. To learn the differences and nuances of abnormalities in images from different regions, these AI systems would need to be developed on radiographic image data from various sources around the world.
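Histogram matching, one of the standardization techniques noted above, can be sketched directly with NumPy: intensities of a source scan are remapped so that their empirical distribution follows that of a chosen reference scan. This is a generic illustration of the technique, not the specific procedure used by Elgendi et al:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of `source` onto that of `reference`."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size      # empirical CDFs
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference intensity there.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)
```

After matching, scans acquired on different scanners share a common intensity profile, which reduces one obvious source of distribution mismatch.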

The sharing of medical data has its own logistic and legal challenges as several government policies and compliances such as the Health Insurance Portability and Accountability Act [28] restrict the cross-border sharing of medical data. This is where privacy-preserving distributed learning techniques such as federated learning [29] and split learning [30] could play an important role in training the AI models at the source without moving the data to a centralized location. In the current state of development, the adoption of these distributed learning techniques is challenging because of the high costs involved in software development and infrastructure maintenance at multiple locations [31]. Despite these challenges, distributed learning appears to be a viable and promising approach to develop AI systems on multiple centralized data sets without the egress of sensitive medical data [32].
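The core of federated learning [29] can be conveyed with a minimal federated-averaging sketch: each site trains on its own data, and only model weights, never patient data, leave the site. The logistic-regression client below is purely illustrative, with hypothetical function names:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic-regression gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient descent step
    return w

def federated_average(weights, clients, rounds=10):
    """FedAvg: average locally trained weights, weighted by client size."""
    w = weights
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)  # aggregate on server
    return w
```

Split learning [30] follows a similar data-stays-local principle but instead partitions the network itself between client and server.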


Level 5, referred to as Full Automation, is the ultimate stage of automation in radiology, where an AI application would be capable of end-to-end analysis of a case, from the initial diagnosis to automatic report generation. With the standardization of diagnostic reporting protocols and the recent advances in natural language generation models such as Generative Pre-trained Transformer 3 [33], results can be automatically reported in a structured format.

With such a high level of automation, it is crucial to maintain these AI systems at their optimal performance levels; however, their efficiency often deteriorates over time [34]. This phenomenon is referred to as model decay. One of the reasons for model decay is covariate shift [35], where the distribution of the input data is different from the training data. Another reason for such a decay could be prior probability shift [36], where the distribution of the target or the prevalence of an abnormality in a population changes. The change in the definition of the relation between the input and target data, referred to as concept drift [37], could also contribute to model decay. These changes may occur gradually over time or suddenly when the AI system is deployed in a different location with a different population. Therefore, it is crucial to continuously monitor these AI systems and fine-tune them as required to maintain optimal performance [20].
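Covariate shift of the kind described above can be monitored by comparing incoming data against the training distribution. A minimal sketch using the two-sample Kolmogorov-Smirnov statistic on a single feature (the threshold value is illustrative; real deployments would calibrate it and track many features):

```python
import numpy as np

def ks_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample KS statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def drift_alarm(train_feature, live_feature, threshold=0.2):
    """Flag possible covariate shift when the KS statistic exceeds threshold."""
    return ks_statistic(train_feature, live_feature) > threshold
```

An alarm of this kind would trigger review or fine-tuning rather than an automatic model change, keeping humans in the monitoring loop.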

The complete automation of radiology in clinical practice will be challenged by medico-legal concerns about assigning liability in cases of AI misdiagnosis. A challenging legal question is whether doctors, radiologists, and health care providers should still bear ultimate legal responsibility when they are no longer responsible for interpreting radiological studies, or whether the data scientists and manufacturers involved in the development and implementation of AI systems should be held accountable [38]. It is important to focus on ethical questions concerning the implications of full automation for patient-centered medical care. In any event, responsibility must be assigned to humans for their involvement in this extremely complex field of AI in medicine [39]. Another challenge at this stage would be to address the fear among radiologists that AI systems will take over their jobs [40]. However, jobs will not be lost; rather, roles will be redefined. With the influx of new data, radiologists would become information specialists capable of piloting AI and guiding medical data to improve patient care [41]. AI will undoubtedly be an integral part of medicine, particularly radiology, but the final decision will be made by human radiologists because only a human expert’s knowledge and subject expertise can enable a reliable diagnosis [42]. We believe that AI systems will become smart assistants for radiologists, capable of automatically performing mundane tasks, such as preliminary diagnosis, annotation, and report generation, under radiologist supervision. This will not only reduce the workload of radiologists but also allow them to collaborate with clinicians and actively participate in other aspects of patient care.


The advancement in AI is bringing the field of radiology to a higher level of automation. We propose a level-wise classification system for automation in radiology to elucidate the step-by-step adoption of AI in clinical practice. We also highlight the concerns and challenges that must be addressed as radiology advances toward complete automation. This includes the development of AI models that are transparent, interpretable, trustworthy, and resilient to adversarial attacks in medical settings. Developers of AI algorithms must be cautious of potential risks such as unintended discriminatory bias, inadvertent fitting of confounders, model decay, the constraints of generalization to unseen populations, and the imminent repercussions of new algorithms on clinical outcomes.

There are numerous ethical issues and unanticipated repercussions associated with the introduction of high-level automation in health care. Addressing them will require regulatory standards for the development, management, and acquisition of AI technology; public-private institutional collaborations; and the ethical and responsible application of AI in the health care sector [43]. When people envision complete automation in the automobile or health care industry, most imagine AI fully replacing the driver or entirely bypassing the doctor. Although there may be good reasons to replace drivers with autonomous vehicles, this approach could be detrimental in health care. We must acknowledge the distinct advantages of augmentation over complete automation in health care practice [44]. In this regard, the “expert-in-the-loop” approach facilitates collaboration between AI scientists, software developers, and expert radiologists, substantially improving the quality and quantity of expert clinical feedback and guidance at every stage of development. As we move closer to the complete automation of radiological analysis, such collaborations are crucial for expediting the automation process.

Authors' Contributions

SG contributed to the presented idea, reviewed literature, and wrote the original draft of the manuscript. VK conceived the manuscript framework and assisted in editing the manuscript. RP contributed to the presented idea, reviewed literature, wrote and edited the manuscript, and helped shape the contents of the manuscript. AK validated the manuscript from the perspective of clinical and radiology workflows.

Conflicts of Interest

None declared.

  1. Sarfraz Z, Sarfraz A, Iftikar HM, Akhund R. Is COVID-19 pushing us to the Fifth Industrial Revolution (Society 5.0)? Pak J Med Sci 2021;37(2):591-594 [FREE Full text] [CrossRef] [Medline]
  2. Yu K, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018 Oct;2(10):719-731. [CrossRef] [Medline]
  3. Khan AM, Bacchus A, Erwin S. Policy challenges of increasing automation in driving. IATSS Research 2012 Mar;35(2):79-89 [FREE Full text] [CrossRef]
  4. SAE International. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE Mobilus. 2021 Apr 30.   URL: https://saemobilus.sae.org/content/j3016_202104 [accessed 2021-10-20]
  5. Kalra A, Chakraborty A, Fine B, Reicher J. Machine learning for automation of radiology protocols for quality and efficiency improvement. J Am Coll Radiol 2020 Sep;17(9):1149-1158. [CrossRef] [Medline]
  6. Recht MP, Dewey M, Dreyer K, Langlotz C, Niessen W, Prainsack B, et al. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol 2020 Jun;30(6):3576-3584. [CrossRef] [Medline]
  7. Aiello M, Cavaliere C, D'Albore A, Salvatore M. The challenges of diagnostic imaging in the era of big data. J Clin Med 2019 Mar 06;8(3):316 [FREE Full text] [CrossRef] [Medline]
  8. Choplin RH, Boehme JM, Maynard CD. Picture archiving and communication systems: an overview. Radiographics 1992 Jan;12(1):127-129 [FREE Full text] [CrossRef] [Medline]
  9. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018 Aug;18(8):500-510 [FREE Full text] [CrossRef] [Medline]
  10. Zhang B, He X, Ouyang F, Gu D, Dong Y, Zhang L, et al. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma. Cancer Lett 2017 Sep 10;403:21-27 [FREE Full text] [CrossRef] [Medline]
  11. Zhang C, Gu J, Zhu Y, Meng Z, Tong T, Li D, et al. AI in spotting high-risk characteristics of medical imaging and molecular pathology. Precis Clin Med 2021 Dec;4(4):271-286 [FREE Full text] [CrossRef] [Medline]
  12. Holzinger A, Haibe-Kains B, Jurisica I. Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. Eur J Nucl Med Mol Imaging 2019 Dec 15;46(13):2722-2730 [FREE Full text] [CrossRef] [Medline]
  13. Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, et al. Preparing medical imaging data for machine learning. Radiology 2020 Apr;295(1):4-15 [FREE Full text] [CrossRef] [Medline]
  14. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional neural networks for radiologic images: a radiologist's guide. Radiology 2019 Mar;290(3):590-606. [CrossRef] [Medline]
  15. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal 2019 Dec;58:101552. [CrossRef] [Medline]
  16. Kharat A, Duddalwar V, Saoji K, Gaikwad A, Kulkarni V, Naik G, et al. Role of edge device and cloud machine learning in point-of-care solutions using imaging diagnostics for population screening. arXiv. Preprint posted online on June 18, 2020. [CrossRef]
  17. Gunning D, Aha DW. DARPA’s explainable artificial intelligence (XAI) program. AI Mag 2019 Jun 24;40(2):44-58. [CrossRef]
  18. Holzinger A, Muller H. Toward human–AI interfaces to support explainability and causability in medical AI. Computer 2021 Oct;54(10):78-86 [FREE Full text] [CrossRef]
  19. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 2020 Jun;58:82-115. [CrossRef]
  20. Kulkarni V, Gawali M, Kharat A. Key technology considerations in developing and deploying machine learning models in clinical radiology practice. JMIR Med Inform 2021 Sep 09;9(9):e28776 [FREE Full text] [CrossRef] [Medline]
  21. Majkowska A, Mittal S, Steiner DF, Reicher JJ, McKinney SM, Duggan GE, et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 2020 Feb;294(2):421-431. [CrossRef] [Medline]
  22. Armato SG, McLennan G, Bidaut L, McNitt-Gray MF, Meyer CR, Reeves AP, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. Med Phys 2011 Feb;38(2):915-931 [FREE Full text] [CrossRef] [Medline]
  23. Thrall JH, Li X, Li Q, Cruz C, Do S, Dreyer K, et al. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. J Am Coll Radiol 2018 Mar;15(3 Pt B):504-508. [CrossRef] [Medline]
  24. Dou Q, So TY, Jiang M, Liu Q, Vardhanabhuti V, Kaissis G, et al. Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study. NPJ Digit Med 2021 Mar 29;4(1):60 [FREE Full text] [CrossRef] [Medline]
  25. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMJ 2015 Jan 07;350:g7594 [FREE Full text] [CrossRef] [Medline]
  26. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019 Oct 25;366(6464):447-453 [FREE Full text] [CrossRef] [Medline]
  27. Elgendi M, Nasir MU, Tang Q, Smith D, Grenier J, Batte C, et al. The effectiveness of image augmentation in deep learning networks for detecting COVID-19: a geometric transformation perspective. Front Med (Lausanne) 2021 Mar 1;8:629134 [FREE Full text] [CrossRef] [Medline]
  28. Annas GJ. HIPAA regulations - a new era of medical-record privacy? N Engl J Med 2003 Apr 10;348(15):1486-1490. [CrossRef] [Medline]
  29. McMahan HB, Moore E, Ramage D, Hampson S, y Arcas BA. Communication-efficient learning of deep networks from decentralized data. arXiv. Preprint posted online on February 17, 2016. [CrossRef]
  30. Vepakomma P, Gupta O, Swedish T, Raskar R. Split learning for health: distributed deep learning without sharing raw patient data. arXiv. Preprint posted online on December 3, 2018. [CrossRef]
  31. Gawali M, Arvind CS, Suryavanshi S, Madaan H, Gaikwad A, Bhanu Prakash KN, et al. Comparison of privacy-preserving distributed deep learning methods in healthcare. 2021 Jul 06 Presented at: MIUA 2021: Medical Image Understanding and Analysis; July 12-14, 2021; Oxford, UK p. 457-471. [CrossRef]
  32. Rieke N, Hancox J, Li W, Milletarì F, Roth HR, Albarqouni S, et al. The future of digital health with federated learning. NPJ Digit Med 2020;3:119 [FREE Full text] [CrossRef] [Medline]
  33. Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med 2021 Jun 03;4(1):93 [FREE Full text] [CrossRef] [Medline]
  34. Widmer G, Kubat M. Learning in the presence of concept drift and hidden contexts. Mach Learn 1996 Apr;23(1):69-101. [CrossRef]
  35. Dharani G, Nair NG, Satpathy P, Christopher J. Covariate shift: a review and analysis on classifiers. 2019 Presented at: 2019 Global Conference for Advancement in Technology (GCAT); October 18-20, 2019; Bangalore, India p. 1-6. [CrossRef]
  36. Biswas A, Mukherjee S. Ensuring fairness under prior probability shifts. 2021 Jul 30 Presented at: AIES '21: 2021 AAAI/ACM Conference on AI, Ethics, and Society; May 19-21, 2021; Virtual event, USA p. 414-424. [CrossRef]
  37. Žliobaitė I. Learning under concept drift: an overview. arXiv. Preprint posted online on October 22, 2010. [CrossRef]
  38. European Society of Radiology (ESR). What the radiologist should know about artificial intelligence - an ESR white paper. Insights Imaging 2019 Apr 04;10(1):44 [FREE Full text] [CrossRef] [Medline]
  39. Verdicchio M, Perin A. When doctors and AI interact: on human responsibility for artificial risks. Philos Technol 2022 Feb 19;35(1):11 [FREE Full text] [CrossRef] [Medline]
  40. Pakdemirli E. Artificial intelligence in radiology: friend or foe? where are we now and where are we heading? Acta Radiol Open 2019 Feb;8(2):2058460119830222 [FREE Full text] [CrossRef] [Medline]
  41. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA 2016 Dec 13;316(22):2353-2354. [CrossRef] [Medline]
  42. Sorantin E, Grasser MG, Hemmelmayr A, Tschauner S, Hrzic F, Weiss V, et al. The augmented radiologist: artificial intelligence in the practice of radiology. Pediatr Radiol 2022 Oct 19;52(11):2074-2086 [FREE Full text] [CrossRef] [Medline]
  43. Sheikh A, Anderson M, Albala S, Casadei B, Franklin BD, Richards M, et al. Health information technology and digital innovation for national learning health and care systems. Lancet Digit Health 2021 Jun;3(6):e383-e396 [FREE Full text] [CrossRef] [Medline]
  44. Norden JG, Shah NR. What AI in health care can learn from the long road to autonomous vehicles. NEJM Catalyst 2022 Mar 7 [FREE Full text]


AI: artificial intelligence
TRIPOD: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis


Edited by T Leung; submitted 11.04.22; peer-reviewed by FM Calisto, Y Cao, A Holzinger; comments to author 09.08.22; revised version received 09.09.22; accepted 13.09.22; published 07.12.22

Copyright

©Suraj Ghuwalewala, Viraj Kulkarni, Richa Pant, Amit Kharat. Originally published in the Interactive Journal of Medical Research (https://www.i-jmr.org/), 07.12.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/, as well as this copyright and license information must be included.