

Citation suggestion: Francesco Cappelletti & Francesco Goretti (2023). Ongoing Healthcare Revolution: Building Trust in AI-Driven Medical Applications. Future Europe, 3(1), 93–99.

Abstract

Artificial Intelligence (AI) is rapidly growing in prominence in digital Europe, with applications ranging from facial recognition software, smart homes, and self-driving cars to content streaming, text prediction, and digital assistants. One of the most promising fields for AI is unarguably medicine, where machine learning can be used to improve disease detection, medical imaging, and the precision of treatments. However, despite the potential benefits of AI, there is still a trust deficit among citizens, with concerns about how we will interact with AI in the future, how safe its use is, and what other AI applications could benefit society. It is critical to avoid sacrificing technological advancement, better care, or the right to privacy. As AI has enormous potential to improve people's lives, particularly in the field of medicine, a more informed and productive public debate about the benefits and risks associated with AI is needed. This paper therefore seeks to provide insight into the overall scope of AI and its future applications, and into the current and future implications of using AI systems and techniques such as machine learning and deep learning.

Introduction 

Artificial intelligence (AI) systems are pervasive in modern life, powering a wide range of tasks such as spam detection, face recognition, text prediction, streaming of content, and web browsing. These systems employ the machine learning (ML) paradigm, which has resulted in faster, more efficient processes, lower costs, and reduced human effort. 

The success of these algorithms across many fields is due to faster and more efficient processes that reduce costs and human effort. AI-based systems have seen significant year-on-year improvements in accuracy, processing capability, and speed, with task-specific algorithms developed to improve the functionality of platforms or services.

Unfortunately, the interpretability of systems (i.e., the ability to interpret the decisions of ML systems) can be sacrificed in exchange for other features or to improve functionality. While users may not need to understand how an algorithm works in everyday tasks, this is not the case when ML is used in more sensitive domains, such as transportation, legal applications, and healthcare. In these cases, privacy is jeopardised, and the misuse of AI technology can have disastrous consequences. Users may therefore perceive such systems as opaque, potentially undermining trust in AI. It is critical to understand the potential risks associated with AI-based systems and to ensure that interpretability is not sacrificed in favour of other features. Prioritising transparency and accountability can foster greater trust in AI and its applications, ensuring that the benefits of AI are fully realised while mitigating the risks.

Artificial Intelligence, Machine Learning, and Deep Learning 


AI encompasses a wide range of algorithms that allow computers to mimic human intelligence. It includes everything from basic if-then rules and decision trees to ML and deep learning. Machine learning (ML) is a subset of AI that includes computer algorithms trained to classify, structure, or predict data without being explicitly programmed to do so. ML is classified into supervised and unsupervised learning. Deep learning (DL) is a subset of ML comprising algorithms that enable software to train itself by exposing multi-layered neural networks to massive amounts of data. The main difference between DL and ML is that DL algorithms eliminate the need for human intervention even during the feature extraction phase, which is carried out automatically and requires, for example, only the training instances (signals, images, vectors, etc.). There are artificial neural networks specialised for various tasks, such as convolutional neural networks for image recognition, object or image classification, and face recognition (IBM, 2020). These programmes are commonly used in computer vision. In most cases, DL-based systems outperform traditional ML algorithms, especially when dealing with large amounts of data.
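To make the ML/DL distinction more concrete, the following minimal sketch (not drawn from the paper; layer sizes, input resolution, and the two-class output are illustrative assumptions) defines a small convolutional neural network in PyTorch of the kind used for image classification, where feature extraction is learned from the training images themselves rather than hand-engineered:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Illustrative two-class image classifier; sizes are arbitrary assumptions."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Feature extraction is learned from the data, not hand-crafted as in classical ML.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of four 64x64 single-channel images (random placeholders for training instances).
images = torch.randn(4, 1, 64, 64)
logits = TinyCNN()(images)
print(logits.shape)  # torch.Size([4, 2])
```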

Supporting human decision-making: medical applications 

The current era is one of unprecedented dedication to healthcare, with strong political support for significant reforms, such as the digital and data-related transformation of healthcare. The European data and AI landscape is rapidly changing, as evidenced by the publication of numerous transformation policies and legislative proposals aimed at data governance, access, and sharing (as in the case of the European Health Data Space, EHDS), antitrust and competition law, and digital transformation technologies such as AI, digital twins, and quantum computing. These developments will have a long-term impact on the European healthcare landscape, representing a once-in-a-lifetime opportunity to improve the quality, accessibility, and affordability of healthcare in Europe.

This transformation will be heavily reliant on the development and deployment of cutting-edge technologies such as AI, digital twins, and quantum computing, which will revolutionise healthcare delivery and research. It is critical to ensure that these new technologies are developed and deployed responsibly, with a focus on transparency, accountability, and patient privacy. By embracing the opportunities presented by these technologies while also managing the potential risks, we can build a more efficient, effective, and equitable healthcare system. 

The implementation and advancement of new medical technologies and procedures over the years have allowed average life expectancy to rise significantly. AI is widely used in medicine to improve healthcare and therapeutic capabilities. During the Covid-19 pandemic, for example, algorithms were designed and implemented in AI-supported applications for diagnosis and prevention, and the use of AI for trend analysis and for creating models and projections of the pandemic showed how efficiently the technology could be deployed at scale (Wang et al., 2021). The use of computer-based knowledge can also reduce or prevent incidents caused by human error, which remains a risk in the medical field (Helo & Moulton, 2017). In this context, the introduction of computers and robotics has already brought significant benefits, such as robotic-assisted surgery, a field that has seen significant development in recent decades for microsurgery and nanotechnologies, wearable robotics, and rehabilitation (Dupont et al., 2021). This has resulted in numerous advancements and opportunities, including the deployment of fully automated robots capable of performing surgery autonomously (Saeidi et al., 2022).

The main application of AI in healthcare is Clinical Decision Support Systems (CDSSs): algorithms that assist physicians in various tasks, such as highlighting specific areas in biomedical images, among other practical applications. The overarching goal of decision support systems aimed at assisting clinicians with computer-based clinical knowledge is to improve the quality of treatments or procedures by inserting additional data into an existing model, allowing for a more precise, advanced, or tailored analysis of a medical case. The data to be analysed may vary and can include the life situations and cultural backgrounds of patients during the decision process (Heyen & Salloch, 2021). Not only can these tools improve medical practice, but in some cases AI applications can even match the diagnostic performance of clinicians. In the field of medical imaging and radiology, for example, it has been shown that DL is particularly efficient for imaging-based diagnosis, where machines demonstrate a high degree of accuracy while reducing the burden of manually analysing dozens of images (Rodriguez-Ruiz et al., 2019).
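As a hypothetical illustration of the 'support, not decide' role of a CDSS, the sketch below assumes that some trained imaging model has already produced a probability per study; the system then merely prioritises cases for human review, leaving the final decision to the clinician. The identifiers, scores, and review threshold are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    probability: float  # assumed output of a trained imaging model, in [0, 1]

def triage(studies: list[Study], review_threshold: float = 0.3) -> list[Study]:
    """Return studies the system suggests prioritising for human review."""
    return sorted(
        (s for s in studies if s.probability >= review_threshold),
        key=lambda s: s.probability,
        reverse=True,
    )

worklist = [Study("A-001", 0.82), Study("A-002", 0.07), Study("A-003", 0.41)]
for study in triage(worklist):
    # The system only flags and ranks; the radiologist keeps the final decision.
    print(f"Flag {study.patient_id} for radiologist review (score {study.probability:.2f})")
```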

AI research has also proven effective in the design of new drugs, resulting in a significant reduction in the cost and time required to bring new medicines to market (Das et al., 2020). Furthermore, Choudhury and Asan (2020) emphasised how relying on properly deployed AI-enabled CDSSs can improve patient safety by improving drug management and increasing clinical error detection.

AI in medicines concerns the use of AI/ML systems and algorithms at various stages of the medicine development lifecycle. This could include:

  • the manufacturing phase (for example, to support production efficiencies);
  • the preclinical phase (for example, in the prediction of molecule properties);
  • the clinical trial phase (for example, to support the design of clinical trials, such as patient population identification or analysis of trial data) for regulatory purposes;
  • the post-marketing phase (for example, to support safety signal detection).
Figure 1: AI/ML is being used in the development of medicines across the entire lifecycle 

The International Coalition of Medicines Regulatory Authorities (ICMRA) has released a report with recommendations to assist regulators in addressing the challenges that the use of AI poses for global medicine regulation (ICMRA, 2021).  

The ICMRA report identifies the emerging applications of AI in the development of medicines (see section 1.3.1, 'AI in medicines development and use', of that report), including the following:

  • Target profile identification and validation: using AI to associate genotypes with a disease, predicting chemical interactions and therefore ‘drugability’ of targets (such as for COVID-19).  
  • Compound screening and lead identification: Compound design to achieve desirable properties and its synthesis reaction plans.  
  • Preclinical development: Biomarker identification and response biosignatures.  
  • Clinical development: Digital endpoints, determination of the cellular microenvironment and response through cellular phenotyping and analysis of digital pathology, and even clinical data in clinical trials to provide decision support systems to investigators.  
  • Regulatory application: Regulatory intelligence and dossier preparation – extracting data and pre-filling forms.  
  • Post-marketing requirements: AI to extract and process adverse event reports (Schmider et al., 2019).

Scary algorithms? 

AI implementation is expected to increase in digital health systems over the next few years. However, to ensure the safe and effective use of AI in medicine, standards and a common framework must be established. AI has the potential to help professionals with their tasks by ensuring consistency over time and producing faster and more precise results. However, many everyday users of AI systems have no way of knowing how safe an algorithm is when it is applied to a new set of applications. To address this issue, a trustworthy set of best practices and rules must be established to make AI-based systems safe and reliable in terms of data, privacy, performance, security, and safety. Depending on how it is implemented, an AI-based system can pose varying degrees of risk.

The European Union presented an AI Strategy in 2018, outlining potential risks and policies to be implemented for specific AI governance. The General Data Protection Regulation (EU 2016/679) has addressed the privacy-related risks of AI and CDSSs to some extent. More recently, the European Commission developed a risk-based approach, with categories ranging from 'minimal' to 'unacceptable' risk, to determine whether a given use of AI poses a risk for users (European Commission, 2022). It is critical to prioritise the safe and ethical deployment of AI in healthcare to maximise its benefits while minimising potential risks. By establishing clear standards and guidelines, we can ensure that AI is implemented in a safe, effective, and trustworthy manner for both healthcare professionals and patients. Member States are implementing their own AI strategies (Larsson et al., 2020), relying on the White Paper on AI published in 2020 by the European Commission (COM(2020) 65 final) and the recommendations of the AI High-Level Expert Group (European Commission, 2019).

Notably, these risks are dispersed across various fields depending on the application. In the fields of justice and autonomous vehicles, for example, safety and bias issues can lead to unbalanced classification or injuries, respectively (Lo Piano, 2020). The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, which predicts the risk of violent recidivism (Brennan et al., 2008), has been criticised for alleged bias based on the race of defendants. Given the growing number of technology applications, the need for a solid framework for the implementation and control of AI systems is critical.
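As a purely illustrative sketch of how such alleged bias is commonly audited (with fabricated numbers, not the actual COMPAS data), one routine check compares the false-positive rates of a risk classifier across demographic groups:

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true negatives that the classifier wrongly flags as high risk."""
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

# Toy ground-truth outcomes and model predictions for two demographic groups.
y_true_a = np.array([0, 0, 1, 0, 1, 0, 0, 0])
y_pred_a = np.array([1, 0, 1, 1, 1, 0, 1, 0])
y_true_b = np.array([0, 0, 1, 0, 1, 0, 0, 0])
y_pred_b = np.array([0, 0, 1, 0, 1, 0, 0, 1])

print("FPR group A:", false_positive_rate(y_true_a, y_pred_a))  # 0.50
print("FPR group B:", false_positive_rate(y_true_b, y_pred_b))  # ~0.17
# A large gap between groups is the kind of disparity the COMPAS critiques pointed to.
```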

When it comes to AI in healthcare, implementing CDSSs can help to reduce the risks associated with human error. However, ensuring the quality of a system's implementation is critical, because CDSSs influence healthcare practitioners' decision-making and can thus potentially harm individual or public health. Furthermore, the sensitivity of the data used can, in certain contexts, put human rights at risk (Sikma et al., 2020). In this regard, the literature in this field emphasises the need for much more research to develop a standardised framework and evaluation measures to ensure patient safety (Choudhury & Asan, 2020).

Other concerns about the use of AI in medicine revolve around ethical considerations. Because ML algorithms are data-dependent, they cannot base their generalisation properties on features other than those provided during the training phase. For example, if an algorithm is trained to make a decision based on a person's age and gender, it will not accept any deviation unless the training phase is repeated with new data. This raises concerns about the processing of personal data, and when AI systems handle such data these concerns become more pressing. In some cases, personal data is part of the features from which the machine must learn and is required to train or feed the algorithm, and thus cannot be removed without reducing efficiency. The implications and risks may therefore vary greatly depending on the application under consideration, although technical solutions to ensure the reliability of an AI system, such as encryption of sensitive data or reshaping studies to consider larger groups of instances rather than single subjects, already exist.
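The data dependence described above can be illustrated with a minimal sketch (toy data, not from the paper): a model trained on age and gender expects exactly those features at prediction time, so removing or adding a feature means repeating the training phase with new data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: columns are [age, gender (0/1)]; labels are illustrative outcomes.
X_train = np.array([[34, 0], [51, 1], [68, 0], [45, 1], [72, 1], [29, 0]])
y_train = np.array([0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

print(model.predict([[60, 1]]))   # works: the same two features as in training
# model.predict([[60]])           # fails: the feature set is fixed at training time,
#                                 # so dropping gender would require retraining the model
```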

Ethical implications for thinking machines 

AI systems provide numerous benefits to our society; however, some potential users may remain sceptical. Elderly people, for example, may find it unsettling that their doctor is basing their therapy on computer-generated suggestions. Teenagers may be concerned if their friends’ recommended playlists contain the same songs, and households with voice-controlled speakers may be concerned if social media ads offer discounts for a travel destination they discussed at dinner the night before. These fears may stem from a naive interpretation of technology, which leads users to believe that machines have control over their emotions and freedom.  

However, it is important to remember that AI is intended to assist humans in decision-making processes and to make tasks more efficient, accurate, and reliable. AI systems are tools that can improve our lives by enhancing our capabilities; however, it is critical to ensure that their use is transparent, ethical, and responsible. By addressing the potential risks and concerns associated with AI and educating the public on its benefits and limitations, we can help promote greater trust and confidence in these systems. As AI continues to evolve and become more integrated into our daily lives, it is critical to prioritise transparency, accountability, and privacy to ensure that these technologies are used in a way that benefits society as a whole. 

Beyond sci-fi, technicians and people involved in the development of AI technologies are well aware of the risks associated with the use of ML. The most common occur during the design phase, resulting in biased data, overfitted or underfitted training models, and unbalanced datasets. These types of 'errors' are more likely when the procedure for gathering data and controlling its distribution and homogeneity is lacking. The main issue concerning risks to individuals and their rights is that algorithms can be manipulated. This occurs when certain outputs or features are 'favoured', resulting in a boosted weight in the decision pattern. To comply with a human-centred approach to AI, such a system should adhere to rigorous design patterns and strategies and be protected from malicious interference. It is not a matter of being afraid of technology itself, but rather of how it is used. Programmers and developers must have standards to follow.
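Two of the routine design-phase checks implied above, inspecting class balance in the training data and comparing training versus validation performance to spot an overfitted model, can be sketched as follows (the dataset is synthetic and deliberately unbalanced; all figures are illustrative assumptions):

```python
from collections import Counter

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (rng.random(500) < 0.1).astype(int)  # deliberately unbalanced: roughly 10% positives

print("Class distribution:", Counter(y))  # a first warning sign for unbalanced datasets

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # deep tree, prone to overfit

train_acc = model.score(X_tr, y_tr)
val_acc = model.score(X_val, y_val)
print(f"train={train_acc:.2f} validation={val_acc:.2f}")
# A large gap between the two scores suggests an overfitted training model.
```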

People and potential users must be aware of the pros and cons of how algorithms can help them take advantage of the technology while also being aware of its potential drawbacks. This should motivate a critical, yet open-minded, examination of the outcomes and services provided by these systems. For example, video-on-demand platforms provide users with ‘personalised’ recommendations. Knowing that popularity plays a crucial role in classifying the platform’s content as ‘potentially interesting’, a user can choose to avoid the suggested video in favour of a more detailed and precise search.  

Notably, CDSSs are, as the name suggests, support systems rather than doctors 'made of circuits'. Demonstrating the efficacy of a therapy path after the use of a support system may encourage patients to be open to this mixed approach and gradually build trust in it. However, all of these aspects must be accompanied by a solid and transparent privacy policy to ensure that the data that forms the basis of ML is kept safe, encrypted, and extremely difficult to access.
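As a hedged sketch of the kind of data protection alluded to here, the example below uses the third-party Python 'cryptography' package to encrypt a patient record at rest, so that the data underpinning an ML system is unreadable without the key. The record contents and key handling are illustrative assumptions; in practice keys would be managed by a dedicated key-management service:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # illustrative only: real keys live in a key-management service
cipher = Fernet(key)

record = b'{"patient_id": "A-001", "age": 62, "diagnosis_code": "C50.9"}'
token = cipher.encrypt(record)     # the stored/transmitted form is unreadable without the key

assert cipher.decrypt(token) == record
print("Encrypted record:", token[:32], "...")
```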

Many ML-based systems are already in use and have proved effective, while still carrying the critical aspects mentioned above. CDSSs have been used in both Europe and the US since 2006 (European Commission, 2018; Thomas, 2022), and EU institutions have regulated software as medical devices (European Commission, 2007). Since 2017, this category has included software that does not directly affect the body, such as AI and other systems (European Commission, 2017). Specific regulations have been developed for the classification of medical software, and to be legally allowed onto the European clinical market, software applications must comply with specific technical standards (Harvey, 2017).

Despite the EU legal framework's endorsement of such software and the growing interest in AI-driven systems and their benefits, public trust in AI applications in the health environment remains low (Kerasidou et al., 2022). Interpretability, ethics, responsibility, and other factors must be considered in legislation on smart systems. Furthermore, it is critical to bring such technologies closer to people and to make them more aware of the impact, usefulness, and safety measures taken during the design and development of these systems in high-risk fields (Lockey et al., 2021).

European institutions have recently taken significant steps toward releasing a unified plan for AI, including new rules and regulations (AIFA, 2021; EMA, 2021; WHO, 2021). The rules imposed will correspond to the risk category assigned to a system. Applications that fall into the 'unacceptable risk' category will be rejected entirely. Applications classified as 'high-risk' must adhere to stricter guidelines to be allowed on the market; examples of such measures include risk assessment and mitigation systems, as well as forward-looking requirements such as 'clear and adequate information to the user' or 'appropriate human oversight measures'. Applications posing limited or minimal risk face a lighter set of rules and conditions.

Conclusions and recommendations 

While our societies continue to grapple with the global issues brought on by the COVID-19 pandemic, telemedicine, home AI-based systems, and home monitoring can provide many solutions, help overcome certain challenges more easily, and relieve the burden on hospitals during times of crisis. Doctors can communicate with patients and monitor their health using software and web solutions. Introducing AI in this field could also help relieve the burden on family doctors and ease the challenges associated with hospitalisation during a health crisis. However, for people to accept AI at home, there is a critical need to build public trust in such technologies. Every step toward increasing European citizens' openness to AI and assisting them in understanding what it means for our future is a step toward the realisation of an ethical and safe integration of technology into our daily lives.

AI applications will have a growing impact on our societies in the near future, but there is a need to build trust in these systems by implementing specific and widely accepted standards, as well as strict (yet flexible, future-proof, and not overly prescriptive) legislation. Given that the use of AI in medicine development is still in its early stages, we believe it would be premature to develop detailed guidance; however, this could be developed in the future once the use of AI in this context has become more established. This entails, first and foremost, promoting EU-wide harmonisation while avoiding overregulation, to assist system developers and industries in meeting their goals of adequately implementing these systems. Additional regulatory requirements for medicinal products should not be added simply because AI/ML approaches were used in development. Rather, requirements and regulatory oversight are warranted where a tangible link between the algorithm and regulatory decision-making can be established, with an impact on the benefit–risk balance of the medicine.

The best compromise is to define standards, protocols, and documentation on the general operation of the software, its performance, limitations, and risks, and fundamental information about the algorithms used. Furthermore, the transparency and interpretability of the system and of the data sets used are critical in building trust in AI, and because data is at the heart of the technology, broad data governance is a critical aspect that should be constantly updated and assessed. Overall, current EU data regulations should consider the need to incorporate AI into the legal framework, particularly when updating existing regulations (such as future revisions of the GDPR). European regulators should engage in early dialogue with other global regulators to ensure, where possible, that any new guidance is as harmonised as possible, avoiding unnecessary complexity and divergent requirements.
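One possible, purely illustrative form of such documentation is a minimal machine-readable 'model card' recording a system's purpose, performance, limitations, and data-governance details; every field name and value below is an assumption made for the sake of the example, not a prescribed standard:

```python
import json

model_card = {
    "intended_use": "Decision support for prioritising mammography studies for review",
    "algorithm": "Convolutional neural network (binary classifier)",
    "training_data": {
        "source": "Pseudonymised institutional imaging archive",
        "size": 25_000,
        "known_limitations": ["single-vendor scanners", "limited age range 40-74"],
    },
    "performance": {"sensitivity": 0.91, "specificity": 0.88, "evaluation": "held-out test set"},
    "risk_class": "high-risk (EU AI Act proposal)",
    "human_oversight": "All flagged cases reviewed by a radiologist",
    "last_assessed": "2023-01-15",
}

# Documentation of this kind can be versioned, audited, and shared with regulators and users.
print(json.dumps(model_card, indent=2))
```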

Once this is accomplished, the user, after being adequately educated about these general concepts (translated into understandable language), will be able to better understand the operation of an AI-based system. However, given the proliferation of AI applications, many of these are invisible to users. In the near future, citizens may have little choice about whether or not to use an AI system. As a result, awareness and a clear set of practices and procedures to ensure individual rights are becoming increasingly important.

While absolute safety is impossible to achieve, the technical criteria underlying the systems must (and can) become increasingly strict in managing the risks associated with the use of a given system. In the most critical sectors, such as healthcare, the clarity behind algorithms, the technologies applied, and the way data is used should be assessed by experts and control personnel, both in the design phase and during the implementation.  

Finally, even if a technology is considered neutral, its (mis)use can result in violations of fundamental rights. Given the amount of information processed and the speed of 'thinking machines', AI can amplify these risks. However, no trade-off should ever be made between technological advancement, better care, and the right to privacy. Instead, AI-based systems should be implemented and technically designed in the EU on the basis of specific conditions as well as clear, harmonised guidelines and certifications. In this way, the designers of these 'scary' algorithms can make ethical use of them while increasing the efficiency of medical practice. As the saying goes, 'health comes before anything'.


References 

Brennan, T., Dieterich, W. & Ehret, B. (2008). 'Evaluating the predictive validity of the COMPAS risk and needs assessment system'. Criminal Justice and Behavior, 36(1), 21–40.

Choudhury, A. & Asan, O. (2020). 'Role of artificial intelligence in patient safety outcomes: systematic literature review'. JMIR Medical Informatics, 8(7), e18599.

Dupont, P.E., Nelson, B.J., Goldfarb, M. et al. (2021). ‘A decade retrospective of medical robotics research from 2010 to 2020’. Science Robotics, 6(60), eabi8017.  

European Commission (2007). Directive 2007/47/EC of the european parliament and the council amending council directive 90/385/EEC on the approximation of the laws of the Member States relating to active implantable medical devices, Council Directive 93/42/EEC concerning medical devices and Directive 98/8/EC concerning the placing of biocidal products on the market, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32007L0047&from=EN.  

European Commission (2017). Regulation (EU) 2017/745 of the European Parliament and the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0745&from=EN.  

European Commission (2018). 'PredictND: Clinical decision support system for dementia'. Press release, project story, https://digital-strategy.ec.europa.eu/en/news/predictnd-clinical-decision-support-system-dementia.

European Commission (2019). 'Ethics guidelines for trustworthy artificial intelligence'. High-Level Expert Group on Artificial Intelligence, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

European Commission (2022). Regulatory framework proposal on artificial intelligence, https://digital-strategy.ec.europa.eu/en/node/9745/printable/pdf.  

Harvey, H. (2017). ‘How to get clinical AI tech approved by regulators’. Towards Data Science, November 7. 

Helo, S. & Moulton, C.E. (2017). 'Complications: acknowledging, managing, and coping with human error'. Translational Andrology and Urology, 6(4), 773–782.

Heyen, N.B. & Salloch, S. (2021). 'The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory'. BMC Medical Ethics, 22(1), 112.

IBM (2020). ‘Convolutional neural networks’. Cloud Education, 20 October https://www.ibm.com/cloud/learn/convolutional-neural-networks.  

ICMRA (2021). Informal Innovation Network. Horizon Scanning Assessment Report – Artificial Intelligence. 6 August, https://www.icmra.info/drupal/sites/default/files/2021-08/horizon_scanning_report_artificial_intelligence.pdf.  

Kerasidou, C.X., Kerasidou, A., Buscher, M. et al. (2022). ‘Before and beyond trust: reliance in medical AI’. Journal of Medical Ethics, 48(11), 852–856.  

Larsson, S., Bogusz, C.I. & Andersson Schwarz, J. (2020). Human-Centred AI in the EU (Brussels: European Liberal Forum), https://liberalforum.eu/wp-content/uploads/2021/07/AI-in-the-EU_final.pdf.

Lo Piano, S. (2020). ‘Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward’. Humanities and Social Sciences Communication, 7(1), 9.  

Lockey, S., Gillespie, N., Holm, D. et al. (2021). ‘A Review of trust in artificial intelligence: challenges, vulnerabilities and future directions’, Proceedings of the Annual Hawaii International Conference on System Sciences, Proceedings of the 54th Hawaii International Conference on System Sciences, Conference Paper https://www.researchgate.net/publication/349157208_A_Review_of_Trust_in_Artificial_Intelligence_Challenges_Vulnerabilities_and_Future_Directions.  

Rodriguez-Ruiz, A., Lång, K., Gubern-Merida, A. et al. (2019). ‘Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists’. Journal of the National Cancer Institute, 111(9), 916–922. 

Saeidi, H., Opfermann, J.D., Kam, M. et al. (2022). ‘Autonomous robotic laparoscopic surgery for intestinal anastomosis’. Science Robotics, 7(62), eabj2908.   

Sikma, T., Edelenbosch, R. & Verhoef, P. (2020). 'The use of AI in healthcare: A focus on clinical decision support systems'. Recipes.

Thomas, M. (2022). '16 machine learning in healthcare examples: These companies embrace machine learning technology in healthcare'. Built In, 11 July, https://builtin.com/artificial-intelligence/machine-learning-healthcare.

Wang, L., Zhang, Y., Wang, D. et al. (2021). ‘Artificial intelligence for COVID-19: A systematic review’. Frontiers in Medicine, 8, 704256. 

AIFA (2021). Guide to the submission of a request for authorisation of a Clinical Trial involving the use of Artificial Intelligence (AI) or Machine Learning (ML) systems, https://www.aifa.gov.it/documents/20142/871583/Guide_CT_AI_ML_v_1.0_date_24.05.2021_EN.pdf

EMA (2021). Guideline on computerised systems and electronic data in clinical trials, https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/draft-guideline-computerised-systems-electronic-data-clinical-trials_en.pdf.  

WHO (2021). Ethics and governance of artificial intelligence for health, https://www.who.int/publications/i/item/9789240029200. 
