This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/, as well as this copyright and license information must be included.
Radiology, one of the younger disciplines of medicine with a history of just over a century, has witnessed tremendous technological advancement and has revolutionized the way we practice medicine today. In the last few decades, medical imaging modalities have generated vast amounts of medical data. The development and adoption of artificial intelligence applications using these data will lead to the next phase of evolution in radiology. It will include automating laborious manual tasks such as annotation and report generation, along with the initial radiological assessment of patients and imaging features, to aid radiologists in their diagnostic and treatment planning workflow. We propose a level-wise classification for the progression of automation in radiology, explaining artificial intelligence assistance at each level with the corresponding challenges and solutions. We hope that such discussions can help us address challenges in a structured way and take the necessary steps to ensure the smooth adoption of new technologies in radiology.
Keywords: artificial intelligence; automation; machine learning; radiology; explainability; model decay; generalizability; fairness and bias; distributed learning; autonomous radiology; AI assistance

Introduction
Advancements in artificial intelligence (AI) and machine learning have enabled the automation of time-consuming and manual tasks across different industries [1]. With substantial developments in the digital acquisition of data and improvements in machine learning and computing infrastructures, AI applications are also expanding into disciplines that were previously considered the exclusive province of human expertise [2]. From automobiles to the health care sector, the world is actively adopting AI to transform these respective industries.
The confluence of information and communication technologies with automotive technologies has resulted in vehicle autonomy. This growth is expected to continue in the future due to increasing consumer demand, reduction in the cost of vehicle components, and improved reliability [3]. The Society of Automotive Engineers has classified the progression of driving automation into 6 levels [4], ranging from No Automation (Level 0) to Full Automation (Level 5). The levels of driving automation are characterized by the specific roles played by each of the 3 principal players, that is, the human user (driver), the driving automation system, and other vehicle components. As vehicle autonomy increases with each level of automation, driver intervention is reduced [4].
Similar to the automobile industry, AI is progressively transforming the landscape of health care and biomedical research. A simulated deployment of a natural language processing–based classification algorithm has been shown to enable automated assignment of computed tomography and magnetic resonance radiology protocols with minimal errors, resulting in a high-quality and efficient radiology workflow [5]. More recently, applications of diagnostic imaging systems have expanded the capabilities of AI in the previously unexplored and more complex health care sector [2]. In radiology, AI applications are being widely adopted for assisted image acquisition, postprocessing, automated diagnosis, and report generation. Automation in this field is still in its infancy, and several clinical and ethical challenges must be addressed before further progress can be made [6].
In this perspective, we attempt to categorize and map the advancements and challenges of automation in radiology into 6 levels, similar to driving automation, with radiologists, AI systems, and advanced technologies playing important roles at each level. The subsequent parts of the paper briefly discuss each level, its technical challenges, plausible solutions, and enabling factors required for transitioning into the next level.
Levels of Automation in Radiology
The advancement of AI in the health sector has substantially bridged the gap between computation and radiology, paving the way for automation in radiology practice. We describe the 6 levels of automation in radiology using a taxonomy similar to that used in driving automation. We further attempt to provide a futuristic vision of the challenges that the radiology field may encounter as we progress toward the complete automation of this field. Figure 1 illustrates different levels of automation in radiology, including the challenges at each stage and the factors that enable the progression between levels.
Figure 1. Flowchart depicting the various levels of automation in radiology practice. At each level, the role of the radiologist and artificial intelligence (AI) is outlined, along with the enabling factors required to mitigate the potential challenges for progression to the next level. PACS: picture archiving and communication systems.
Level 0: No Automation
Level 0, also known as No Automation, is the stage where a radiologist manually performs every task from image acquisition and radiographic film processing to diagnostic analysis without the assistance of AI. We are well past this stage as the recent advances in medical imaging modalities have enabled digital storage and processing of the scans along with some automated assistance to aid in the imaging workflow.
Level 1: Radiologist Assistance
At Level 1 automation, a radiologist performs most tasks manually with assistance from machines. Recent technological advancements have digitized medical scans, making it easier for radiologists to store, maintain, and distribute data. Furthermore, newer solutions include features such as contrast-brightness adjustment, assisted stitching of scans, assisted focus adjustment, etc, which simplify the imaging workflow and enable detailed radiological analysis. With everything digitized, these modalities generate enormous amounts of data, and the biggest challenge at this stage is the proper maintenance and storage of data [7]. This is where technologies such as picture archiving and communication systems have provided an economical solution to compress and store data for easy retrieval and sharing [8]. With the advancement in automation, the radiology field is currently experiencing a major paradigm shift in the principles and practices of many computer-based tools used in clinical practice [9].
Level 2: Partial Automation
Partial automation in radiology refers to the use of computer-assisted diagnostic tools to automate tasks such as case prioritization. However, automation at Level 2 requires radiologist supervision, and the diagnostic decision is not final without the radiologist’s approval. With the advancement of picture archiving and communication systems technology, radiology practices frequently consider upgrades and renovations to further improve efficiency. For example, radiomics is an emerging subfield of machine learning that converts radiographic images into mineable high-dimensional data, providing additional features to analyze and characterize disease. Machine learning algorithms can extract features from radiographic images that help make prognostic decisions [10]. Extracted features include the texture, intensity, shape, and geometry of the region of interest [11]. Beyond image features, clinical and molecular profile data can be essential to comprehend complex diseases and ensure the right diagnosis to deliver the best possible treatment [12]. The amalgamation of machine learning, radiomics, and clinical information has the potential to advance precision medicine and clinical practice. Since these technologies are still in their nascent stages of development, radiologists will most likely use them as ancillary tools in making final decisions.
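As an illustrative sketch of the first-order radiomic features mentioned above (intensity statistics and a crude shape descriptor over a region of interest), the snippet below computes a few of them in Python. This is a minimal, hypothetical example, not a clinical implementation; dedicated toolkits compute hundreds of standardized texture, intensity, and shape features.

```python
import numpy as np

def first_order_features(image, mask):
    """Simple first-order radiomic features over a binary region of interest.

    Illustrative only: real radiomics pipelines use validated feature
    definitions and many more descriptors (texture, geometry, etc)."""
    roi = image[mask > 0].astype(float)  # intensities inside the ROI
    hist, _ = np.histogram(roi, bins=32)
    p = hist[hist > 0] / roi.size        # normalized intensity histogram
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
        "roi_area": int(mask.sum()),                # a crude shape feature
    }

# Toy example: a uniformly bright square "lesion" on a dark background
img = np.zeros((64, 64))
img[20:30, 20:30] = 100.0
msk = np.zeros((64, 64), dtype=int)
msk[20:30, 20:30] = 1
feats = first_order_features(img, msk)
```

Feature vectors like this one are what a downstream classifier would mine for prognostic signal.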
The progress at this level of automation is slow and can be attributed to three major factors:
Lack of high-quality data: There is a limited amount of good quality medical data because the annotation and documented diagnosis by an expert are time-consuming and expensive processes [13]. This becomes a challenge when developing an AI system that can generalize well across unseen data, because the performance of machine learning models is significantly influenced by the size, characteristics, and quality of the data used to train them. The problem of insufficient training data, particularly in cases of rare diseases, can be addressed through data augmentation, in which synthetic data are generated to increase the prevalence of the target category, making the models more robust for analyzing independent information on the test sets [14]. Generative adversarial networks are the most commonly used neural network models for generating and augmenting synthetic images for rare diseases, such as rheumatoid arthritis and sickle cell diseases. Although these techniques allow models to be trained on sparse data sets and produce promising results, the adoption of generative adversarial networks in medical imaging is still in its early stages [15].
Stringent data laws: Medical data are governed by several data security laws, regulations, and compliance requirements, making it extremely difficult to share and use such data outside a clinical setting [6]. Collaborations between hospitals and technology companies, structured to satisfy these data-sharing laws, are critical to make the best use of rich medical data and develop advanced solutions for automated and accurate diagnoses.
Cost of technology adoption: Current algorithms for analyzing radiological scans are computationally resource intensive, which significantly increases the cost of adopting these technologies in clinical practices. Therefore, it is important to develop low-power and cost-effective solutions that can be easily adopted by medical organizations. Edge devices can be used as low-cost prescreening tools as they can be deployed remotely and deliver instant results without consuming much bandwidth [16].
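The geometric data augmentation described in the first factor above can be sketched very simply: each scan is expanded into a set of label-preserving variants (flips, 90-degree rotations, mild noise). This is a hypothetical minimal example; generative adversarial networks, by contrast, learn to synthesize entirely new samples rather than transform existing ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return simple geometric variants of a 2-D scan.

    Flips and rotations preserve the diagnostic label for many (though
    not all) imaging tasks, so they multiply scarce training data."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

def add_noise(image, sigma=0.01):
    # Mild Gaussian noise, a common intensity-level augmentation
    return image + rng.normal(0.0, sigma, image.shape)

scan = rng.random((32, 32))
augmented = augment(scan) + [add_noise(scan)]  # 1 scan -> 7 samples
```

Even this trivial pipeline multiplies a rare-disease cohort severalfold before any generative modeling is attempted.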
Level 3: Conditional Automation
Unlike partial automation, where the final decision is entirely dependent on the radiologist, the systems at Level 3: Conditional Automation are robust enough to diagnose and make decisions under a predefined set of conditions (ie, those used to train the model) without radiologist supervision. If these conditions are not met, a radiologist must be available to override the AI analysis. The efficiency of human-AI collaboration in clinical radiology depends on clinicians properly comprehending and trusting the AI system [17]. One of the major requirements for enabling such human-AI interfaces in radiological workflows is an effective and consistent mapping of explainability to causability [18]. Specialized explainer systems in explainable AI, widely acknowledged as an important feature of the practical deployment of AI models, aim to explain AI inferences to human users [19]. Explainability in radiology can be improved by using localization models, which highlight the region of suspected abnormality (region of interest) in the scan, instead of classification models, which only indicate the presence or absence of an abnormality [20]. Whereas an explainable system does not refer to an explicit human model and only indicates or highlights the decision-relevant parts of the AI model (ie, parts that contributed to a specific prediction), causability refers to a human-understandable model: the degree to which an explanation achieves a defined level of causal understanding in a human expert while being effective, efficient, and satisfactory within the context of use [18].
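One model-agnostic way to obtain the kind of localization described above is occlusion sensitivity: mask each patch of the image in turn and record how much the classifier's abnormality score drops; large drops mark the regions the model relied on. The sketch below is illustrative, with `toy_score` standing in for a trained classifier (its name and behavior are assumptions for the demo, not any cited system).

```python
import numpy as np

def occlusion_map(image, score, patch=8):
    """Occlusion-sensitivity heat map for a scoring function.

    For each patch, zero it out and store the drop in the model's score;
    the resulting map crudely highlights decision-relevant regions."""
    base = score(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - score(occluded)
    return heat

# Stand-in "model": its score is the mean intensity of the top-left quadrant,
# so only occlusions there should register in the heat map.
def toy_score(img):
    return float(img[:16, :16].mean())

img = np.ones((32, 32))
heat = occlusion_map(img, toy_score)
```

Gradient-based saliency methods serve the same purpose more efficiently, but occlusion needs no access to model internals, which matters when auditing vendor systems.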
Radiology scans often suffer from high interreader variability, which arises when 2 or more readers disagree on the results of a scan. This may lead to uncertainty in the ground truth labels. The problem of ambiguous ground truth can be mitigated by using expert adjudication [21] or multiphasic review [22] to create high-quality labels, which can yield better models than training on the original, noisier labels [20]. Additionally, imaging protocols, manufacturers of imaging modalities, and the processes for storing and processing medical data differ between organizations, which impedes the use of data from different sources for AI applications [23]. These factors result in AI systems being developed on a limited distribution of data, making them highly susceptible to failure if certain conditions, such as demographics, race, gender, or time, are not met. For example, Dou et al [24] developed a COVID-19 detection model using data sets from internal hospitals in Hong Kong. The model performed extremely well in identifying abnormalities in Chinese data sets but underperformed on German data sets with different population demographics [24]. Cross-center training of the AI model on different demographics and distinct cohort features would help the model learn from multiple sources and mitigate the problem of generalizability.
Level 4: High Automation
Advancing from level 3, the AI systems at Level 4: High Automation would make decisions without the assistance of a radiologist. Human intervention would only be required in complex cases where the AI requests it. Such systems would require extensive clinical validations before they could be reliably used. As summarized by Kulkarni et al [20], these systems would need to undergo internal as well as external validations to evaluate the system’s performance on unseen data. The Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement [25] specifies guidelines for reporting the development and validation of such diagnostic or prognostic models. Since these AI systems must work independently of conditions, they must generalize across a wide variety of data from different sources without inducing any bias from the training data. For example, Obermeyer et al [26] exposed a shortcoming in a widely used algorithm in the health care system that identified Black patients as being healthier than equally sick White patients. The racial bias exhibited by this system led to an unequal distribution of health care benefits to Black patients.
Elgendi et al [27] observed that adopting simple data augmentation and image processing techniques such as normalization, histogram matching, and image reorientation can aid in standardizing images from different sources. The standardization of annotation, data processing, and storage protocols are also vital for this data to be efficiently used for the development of AI systems. To learn and understand the differences and nuances of abnormalities in images from different regions, these AI systems would need to be developed on radiographic image data from various sources around the world.
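The standardization steps just mentioned — intensity normalization and histogram matching — can be written compactly. The sketch below is a minimal numpy illustration under the assumption of 2-D grayscale scans (established libraries such as scikit-image provide `exposure.match_histograms` for production use).

```python
import numpy as np

def min_max_normalize(image):
    # Rescale intensities to [0, 1], removing scanner-specific offsets/gains
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo)

def match_histogram(source, reference):
    """Remap source intensities so their distribution follows the reference
    scan: compute both empirical CDFs and push source values through the
    inverse of the reference CDF."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
src = rng.normal(50.0, 10.0, (64, 64))   # e.g., one scanner's intensity range
ref = rng.random((64, 64))               # e.g., the target site's range
standardized = match_histogram(src, ref)
```

After matching, scans from different vendors occupy a common intensity distribution, which is precisely what makes pooled training feasible.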
The sharing of medical data has its own logistic and legal challenges as several government policies and compliances such as the Health Insurance Portability and Accountability Act [28] restrict the cross-border sharing of medical data. This is where privacy-preserving distributed learning techniques such as federated learning [29] and split learning [30] could play an important role in training the AI models at the source without moving the data to a centralized location. In the current state of development, the adoption of these distributed learning techniques is challenging because of the high costs involved in software development and infrastructure maintenance at multiple locations [31]. Despite these challenges, distributed learning appears to be a viable and promising approach to develop AI systems on multiple centralized data sets without the egress of sensitive medical data [32].
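The core aggregation step of federated learning [29] is simple to state: each site trains locally, and only model weights leave the hospital; a server combines them weighted by local data set size (federated averaging, FedAvg). The sketch below shows one such round with toy weight vectors; it omits the local training loop and secure transport that a real deployment requires.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One FedAvg aggregation round: a weighted average of per-site model
    weights, with weights proportional to each site's data set size.
    Raw patient scans never leave their source institution."""
    total = sum(site_sizes)
    return sum((n / total) * w for w, n in zip(site_weights, site_sizes))

# Toy round: three hospitals with different cohort sizes contribute
# locally trained weight vectors (real models would send full tensors).
w_a = np.array([1.0, 1.0])
w_b = np.array([3.0, 3.0])
w_c = np.array([5.0, 5.0])
global_w = federated_average([w_a, w_b, w_c], site_sizes=[100, 100, 200])
```

The server then broadcasts `global_w` back to the sites for the next round; split learning [30] instead partitions the network itself so each party holds only part of the model.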
Level 5: Full Automation
Level 5, referred to as Full Automation, is the ultimate stage of automation in radiology, where an AI application would be capable of end-to-end analysis of a case, from the initial diagnosis to automatic report generation. With the standardization of diagnostic reporting protocols and the recent advances in natural language generation models such as Generative Pre-trained Transformer 3 [33], results can be automatically reported in a structured format.
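As a toy illustration of structured reporting, the snippet below renders hypothetical model outputs into a findings/impression template. The field names and report layout are assumptions made for the example; real systems follow standardized reporting templates and may use large language models for the narrative text.

```python
def structured_report(findings):
    """Render per-region model outputs as a structured report.

    `findings` maps a region name to a (label, confidence) pair; both the
    schema and wording here are illustrative, not a clinical standard."""
    lines = ["FINDINGS:"]
    for region, (label, confidence) in sorted(findings.items()):
        lines.append(f"- {region}: {label} (confidence {confidence:.0%})")
    abnormal = [r for r, (lab, _) in findings.items() if lab != "normal"]
    impression = ", ".join(sorted(abnormal)) if abnormal else "No acute abnormality."
    lines.append(f"IMPRESSION: {impression}")
    return "\n".join(lines)

report = structured_report({
    "lungs": ("opacity, right lower lobe", 0.91),
    "heart": ("normal", 0.97),
})
```

Keeping the report machine-generated but template-bound makes the output auditable, which matters once no radiologist signs off on each case.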
With such a high level of automation, it is crucial to maintain these AI systems at their optimal performance levels; however, their efficiency often deteriorates over time [34]. This phenomenon is referred to as model decay. One of the reasons for model decay is covariate shift [35], where the distribution of the input data is different from the training data. Another reason for such a decay could be prior probability shift [36], where the distribution of the target or the prevalence of an abnormality in a population changes. The change in the definition of the relation between the input and target data, referred to as concept drift [37], could also contribute to model decay. These changes may occur gradually over time or suddenly when the AI system is deployed in a different location with a different population. Therefore, it is crucial to continuously monitor these AI systems and fine-tune them as required to maintain optimal performance [20].
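The continuous monitoring called for above can start with a simple distribution check: compare the live input distribution against the training distribution and alert when they diverge. The sketch below uses the population stability index (PSI); the alert thresholds commonly quoted for PSI (eg, >0.2 means substantial shift) are industry conventions, not standards.

```python
import numpy as np

def population_stability_index(train_vals, live_vals, bins=10):
    """PSI between training and live distributions of one input feature.

    Bins are training-set deciles; PSI is the symmetrized KL-style sum
    over bins. Larger values indicate stronger covariate shift."""
    edges = np.quantile(train_vals, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf       # catch out-of-range values
    p = np.histogram(train_vals, edges)[0] / len(train_vals)
    q = np.histogram(live_vals, edges)[0] / len(live_vals)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)  # avoid log(0)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 5000)                      # training distribution
psi_same = population_stability_index(train, rng.normal(0, 1, 5000))
psi_shifted = population_stability_index(train, rng.normal(1, 1, 5000))
```

Tracking such a statistic per input feature (and per deployment site) gives an early, label-free warning of covariate shift; detecting prior probability shift or concept drift additionally requires monitoring outcomes.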
The complete automation of radiology in clinical practice will be challenged by medico-legal concerns about assigning liability in cases of AI misdiagnosis. A challenging legal question is whether doctors, radiologists, and health care providers would still bear ultimate legal responsibility when they are no longer responsible for interpreting radiological studies, or whether the data scientists and manufacturers involved in the development and implementation of AI systems would be held accountable [38]. It is important to focus on ethical questions concerning the implications of full automation for patient-centered medical care. In any event, responsibility must be assigned to humans for their involvement in this extremely complex field of AI in medicine [39]. Another challenge at this stage would be to address the fear among radiologists that AI systems will take over their jobs [40]. However, jobs will not be lost; rather, roles will be redefined. With the influx of new data, radiologists would become information specialists capable of piloting AI and guiding medical data to improve patient care [41]. AI will undoubtedly be an integral part of medicine, particularly radiology, but the final decision will be made by human radiologists because only a human expert’s knowledge and subject expertise can enable a reliable diagnosis [42]. We believe that AI systems will become smart assistants for radiologists, capable of automatically performing mundane tasks, such as preliminary diagnosis, annotation, and report generation, under radiologist supervision. This will not only reduce the workload of radiologists but also allow them to collaborate with clinicians and actively participate in other aspects of patient care.
Conclusion
The advancement in AI is bringing the field of radiology to a higher level of automation. We propose a level-wise classification system for automation in radiology to elucidate the step-by-step adoption of AI in clinical practice. We also highlight the concerns and challenges that must be addressed as radiology advances toward complete automation. This includes the development of AI models that are transparent, interpretable, trustworthy, and resilient to adversarial attacks in medical settings. Developers of AI algorithms must be cautious of potential risks such as unintended discriminatory bias, inadvertent fitting of confounders, model decay, the constraints of generalization to unseen populations, and the imminent repercussions of new algorithms on clinical outcomes.
There are numerous ethical issues and unanticipated repercussions associated with the introduction of high-level automation in health care. Addressing them will require regulatory standards for the development, management, and acquisition of AI technology; public-private institutional collaborations; and the ethical and responsible application of AI in the health care sector [43]. Most people envision AI fully replacing the driver or completely bypassing the doctor when they think about complete automation in the automobile or health care industry, respectively. Although there may be good reasons to entirely replace drivers with autonomous vehicles, this approach could be detrimental in the health care sector. We must acknowledge the distinct advantages of augmentation over complete automation in health care practice [44]. In this regard, an "expert-in-the-loop" approach facilitates collaboration between AI scientists, software developers, and expert radiologists, substantially improving the quality and quantity of expert clinical feedback and guidance at every stage of development. As we move closer to the complete automation of radiological analysis, such collaborations are crucial for expediting the automation process.
Abbreviations
AI: artificial intelligence
TRIPOD: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis
SG contributed to the presented idea, reviewed literature, and wrote the original draft of the manuscript. VK conceived the manuscript framework and assisted in editing the manuscript. RP contributed to the presented idea, reviewed literature, wrote and edited the manuscript, and helped shape the contents of the manuscript. AK validated the manuscript from the perspective of clinical and radiology workflows.
None declared.
References
1. Sarfraz Z, Sarfraz A, Iftikar HM, Akhund R. Is COVID-19 pushing us to the Fifth Industrial Revolution (Society 5.0)? 2021;37(2):591-594. doi:10.12669/pjms.37.2.3387
2. Yu K, Beam AL, Kohane IS. Artificial intelligence in healthcare. 2018;2(10):719-731. doi:10.1038/s41551-018-0305-z
3. Khan AM, Bacchus A, Erwin S. Policy challenges of increasing automation in driving. 2012;35(2):79-89. doi:10.1016/j.iatssr.2012.02.002
4. SAE International. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. April 2021. URL: https://saemobilus.sae.org/content/j3016_202104 [accessed 2021-10-20]
5. Kalra A, Chakraborty A, Fine B, Reicher J. Machine learning for automation of radiology protocols for quality and efficiency improvement. 2020;17(9):1149-1158. doi:10.1016/j.jacr.2020.03.019
6. Recht MP, Dewey M, Dreyer K, Langlotz C, Niessen W, Prainsack B, Smith JJ. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. 2020;30(6):3576-3584. doi:10.1007/s00330-020-06672-5
7. Aiello M, Cavaliere C, D'Albore A, Salvatore M. The challenges of diagnostic imaging in the era of big data. 2019;8(3):316. doi:10.3390/jcm8030316
8. Choplin RH, Boehme JM, Maynard CD. Picture archiving and communication systems: an overview. 1992;12(1):127-129. doi:10.1148/radiographics.12.1.1734458
9. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. 2018;18(8):500-510. doi:10.1038/s41568-018-0016-5
10. Zhang B, He X, Ouyang F, et al. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma. 2017;403:21-27. doi:10.1016/j.canlet.2017.06.004
11. Zhang C, Gu J, Zhu Y, et al. AI in spotting high-risk characteristics of medical imaging and molecular pathology. 2021;4(4):271-286. doi:10.1093/pcmedi/pbab026
12. Holzinger A, Haibe-Kains B, Jurisica I. Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. 2019;46(13):2722-2730. doi:10.1007/s00259-019-04382-9
13. Willemink MJ, Koszek WA, Hardell C, et al. Preparing medical imaging data for machine learning. 2020;295(1):4-15. doi:10.1148/radiol.2020192224
14. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional neural networks for radiologic images: a radiologist's guide. 2019;290(3):590-606. doi:10.1148/radiol.2018180547
15. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. 2019;58:101552. doi:10.1016/j.media.2019.101552
16. Kharat A, Duddalwar V, Saoji K, et al. Role of edge device and cloud machine learning in point-of-care solutions using imaging diagnostics for population screening. Preprint posted online on June 18, 2020. doi:10.48550/arXiv.2006.13808
17. Gunning D, Aha DW. DARPA's explainable artificial intelligence (XAI) program. 2019;40(2):44-58. doi:10.1609/aimag.v40i2.2850
18. Holzinger A, Muller H. Toward human–AI interfaces to support explainability and causability in medical AI. 2021;54(10):78-86. doi:10.1109/mc.2021.3092610
19. Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. 2020;58:82-115. doi:10.1016/j.inffus.2019.12.012
20. Kulkarni V, Gawali M, Kharat A. Key technology considerations in developing and deploying machine learning models in clinical radiology practice. 2021;9(9):e28776. doi:10.2196/28776
21. Majkowska A, Mittal S, Steiner DF, et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. 2020;294(2):421-431. doi:10.1148/radiol.2019191293
22. Armato SG, McLennan G, Bidaut L, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. 2011;38(2):915-931. doi:10.1118/1.3528204
23. Thrall JH, Li X, Li Q, Cruz C, Do S, Dreyer K, Brink J. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. 2018;15(3 Pt B):504-508. doi:10.1016/j.jacr.2017.12.026
24. Dou Q, So TY, Jiang M, et al. Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study. 2021;4(1):60. doi:10.1038/s41746-021-00431-6
25. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. 2015;350:g7594. doi:10.1136/bmj.g7594
26. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. 2019;366(6464):447-453. doi:10.1126/science.aax2342
27. Elgendi M, Nasir MU, Tang Q, et al. The effectiveness of image augmentation in deep learning networks for detecting COVID-19: a geometric transformation perspective. 2021;8:629134. doi:10.3389/fmed.2021.629134
28. Annas GJ. HIPAA regulations - a new era of medical-record privacy? 2003;348(15):1486-1490. doi:10.1056/NEJMlim035027
29. McMahan HB, Moore E, Ramage D, Hampson S, y Arcas BA. Communication-efficient learning of deep networks from decentralized data. Preprint posted online on February 17, 2016. doi:10.48550/arXiv.1602.05629
30. Vepakomma P, Gupta O, Swedish T, Raskar R. Split learning for health: distributed deep learning without sharing raw patient data. Preprint posted online on December 3, 2018. doi:10.48550/arXiv.1812.00564
31. Gawali M, Arvind CS, Suryavanshi S, et al. Comparison of privacy-preserving distributed deep learning methods in healthcare. In: MIUA 2021: Medical Image Understanding and Analysis; July 12-14, 2021; Oxford, UK. p. 457-471. doi:10.1007/978-3-030-80432-9_34
32. Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. 2020;3:119. doi:10.1038/s41746-020-00323-1
33. Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. 2021;4(1):93. doi:10.1038/s41746-021-00464-x
34. Widmer G, Kubat M. Learning in the presence of concept drift and hidden contexts. 1996;23(1):69-101. doi:10.1007/bf00116900
35. Dharani G, Nair NG, Satpathy P, Christopher J. Covariate shift: a review and analysis on classifiers. In: 2019 Global Conference for Advancement in Technology (GCAT); October 18-20, 2019; Bangalore, India. p. 1-6. doi:10.1109/gcat47503.2019.8978471
36. Biswas A, Mukherjee S. Ensuring fairness under prior probability shifts. In: AIES '21: 2021 AAAI/ACM Conference on AI, Ethics, and Society; May 19-21, 2021; virtual event, USA. p. 414-424. doi:10.1145/3461702.3462596
37. Žliobaitė I. Learning under concept drift: an overview. Preprint posted online on October 22, 2010. doi:10.48550/arXiv.1010.4784
38. European Society of Radiology (ESR). What the radiologist should know about artificial intelligence - an ESR white paper. 2019;10(1):44. doi:10.1186/s13244-019-0738-2
39. Verdicchio M, Perin A. When doctors and AI interact: on human responsibility for artificial risks. 2022;35(1):11. doi:10.1007/s13347-022-00506-6
40. Pakdemirli E. Artificial intelligence in radiology: friend or foe? Where are we now and where are we heading? 2019;8(2):2058460119830222. doi:10.1177/2058460119830222
41. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. 2016;316(22):2353-2354. doi:10.1001/jama.2016.17438
42. Sorantin E, Grasser MG, Hemmelmayr A, et al. The augmented radiologist: artificial intelligence in the practice of radiology. 2022;52(11):2074-2086. doi:10.1007/s00247-021-05177-7
43. Sheikh A, Anderson M, Albala S, et al. Health information technology and digital innovation for national learning health and care systems. 2021;3(6):e383-e396. doi:10.1016/S2589-7500(21)00005-4
44. Norden JG, Shah NR. What AI in health care can learn from the long road to autonomous vehicles. 2022;3(7).