Note: This is an updated version of a blog post on robotic healthcare that I published on November 29, 2018.
>>> Hello, World!
You may remember from pre-COVID times the use of the genome-editing technique CRISPR-Cas9 by He Jiankui, a scientist at the Southern University of Science and Technology in China, to alter the DNA of embryos from seven couples, leading to the birth of genetically modified twin girls. This research continues to be called “monstrous”, “crazy”, and “a grave abuse of human rights” by the scientific community worldwide, and the institutions involved have stated that they were unaware the research was being conducted under their auspices.
This research does, however, have one positive aspect: it highlights the inadequate regulation of some novel technologies, which needs to be urgently addressed for the benefit of humanity and the advancement of science and technology.
We are now also seeing the rise of Artificial Intelligence (AI), which is being implemented in many fields, especially biological research and healthcare. As with CRISPR-Cas9, the developers and users of this technology need to step back from the technological upheaval and examine their innovations through an ethical lens, so they can identify, address, and prepare for potential harms and ethical conflicts.
This post covers some major ethical issues around AI use in healthcare that need to be taken into consideration alongside the benefits of AI discussed in my previous AI in healthcare post.
AI algorithms can discriminate against minorities
The fuel that enables AI algorithms to function, especially machine learning and deep learning, is large data sets, which are taken as input, processed, and used to deliver conclusions based solely on those data. For example, a company could use AI to recommend the best candidate to hire by feeding the algorithm data about previously successful candidates and letting it draw its own conclusions.
Applying these algorithms to decisions about people is tricky, however, because the data needs to reflect our racial and gender diversity. When it does not, unrepresentative data becomes one source of bias in the algorithm’s recommendations, alongside human biases inherent in the data itself and the intentional embedding of bias by a prejudiced developer.
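To make this concrete, here is a minimal sketch of how this can happen, using Python, scikit-learn, and entirely synthetic, hypothetical data: a model is trained on data in which one group is scarce, learns the dominant group’s patterns, and consequently errs far more often on the underrepresented group.

```python
# Minimal sketch with synthetic, hypothetical data: group "B" is scarce in the
# training set, so the classifier learns group "A"'s pattern and fails on "B".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic population; the label depends on the features
    # differently in each group (a different "shift").
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
X_a, y_a = make_group(5000, shift=1.0)
X_b, y_b = make_group(100, shift=-1.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Evaluate each group separately on fresh samples of equal size.
X_a_test, y_a_test = make_group(1000, shift=1.0)
X_b_test, y_b_test = make_group(1000, shift=-1.0)
print("accuracy for group A:", accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy for group B:", accuracy_score(y_b_test, model.predict(X_b_test)))
```

Nothing in the model itself is prejudiced; the disparity comes entirely from what the training data does and does not represent.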
Already in non-medical fields, AI has been shown to reflect biases in its training data. For example, AI algorithms designed to help American judges with sentencing by predicting an offender’s likelihood of re-offending have shown alarming bias against African-Americans.
Healthcare delivery itself varies by race, and an algorithm designed to make a healthcare decision will be biased if few (or no) genetic studies have been conducted in certain populations. An example of this is the attempt to use data from the paper published by Crystel M. Gijsberts and colleagues in the scientific journal PLOS ONE to predict cardiovascular disease risk in non-white populations, which has led to biased results with both overestimation and underestimation of risk.
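One simple way to surface the kind of over- and underestimation described above is to audit a risk model group by group, comparing its mean predicted risk with the observed event rate in each population (sometimes called calibration-in-the-large). Below is a small sketch of that audit; the numbers are made up purely for illustration and are not taken from the cited study.

```python
# Hedged sketch (hypothetical numbers, not from the cited study): compare a
# model's mean predicted risk with the observed event rate within each group.
import numpy as np

def calibration_gap(predicted_risk, observed_events):
    """Positive = risk overestimated for this group; negative = underestimated."""
    return float(np.mean(predicted_risk) - np.mean(observed_events))

# Toy per-group data: predicted risk vs. whether an event actually occurred (0/1).
groups = {
    "group_A": (np.array([0.20, 0.35, 0.15, 0.40]), np.array([0, 1, 0, 0])),
    "group_B": (np.array([0.10, 0.12, 0.08, 0.15]), np.array([0, 1, 1, 0])),
}
for name, (risk, events) in groups.items():
    print(f"{name}: calibration gap = {calibration_gap(risk, events):+.2f}")
```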
Not everyone may benefit equally from AI in healthcare, because AI can perform poorly where data is scarce. This could affect people with rare medical conditions and others underrepresented in clinical trials and research data, such as Black and Latino populations.
As the House of Lords Select Committee on AI cautions, the datasets used to train AI systems often do a poor job of representing the wider population, which can lead to systems that make unjust decisions reflecting societal prejudice.
Another, more recent example is the investigation by The Markup, which found that mortgage-approval algorithms disadvantage minority applicants: Black applicants were 80% more likely, and Latino applicants 40% more likely, to be denied a home loan than comparable white applicants.
AI algorithms can be malicious
Beyond biased data, there is the ethical issue that developers of AI may have negative or malicious intentions when building the software. After all, if everybody had good intentions, the world would certainly be a better place.
Take, for example, the high-profile cases of Uber and Volkswagen. Uber’s machine learning tool Greyball predicted which ride hailers might be undercover law-enforcement officers, allowing the company to evade local regulators. Volkswagen, for its part, developed an algorithm that made its vehicles reduce their nitrogen oxide emissions only while undergoing emission tests.
Private AI companies working with healthcare institutions might create algorithms better suited to the financial interests of the institutions than to the financial and care interests of the patient. This is a particular concern in the USA, where there is a continuous tension between improving health and generating profit, and where the makers of the algorithms are unlikely to be the ones delivering bedside care. In addition, AI could be used for cyber-attacks, theft, and revealing information about a person’s health without their knowledge.
AI algorithms can lie
On a more recent note, ChatGPT, the large language model developed by OpenAI (with backing from Microsoft), is optimized for dialogue. Although it has proven extremely useful, many users have complained that ChatGPT is prone to producing biased or incorrect answers, particularly inaccurate references to the scientific literature.
A recent comment by Eva A. M. van Dis and colleagues in the scientific journal Nature warned that researchers using ChatGPT “risk being misled by false or biased information” that could then be incorporated into their thinking and subsequent papers. The authors also note that this technology produces text without citing the original sources or authors involved.
Importantly, the authors show how ChatGPT can lie about scientific results. They asked ChatGPT to provide a concise summary of a systematic review the authors had previously published in the scientific journal JAMA Psychiatry. In return, ChatGPT “fabricated a convincing response that contained several factual errors, misrepresentations, and wrong data”. Of note, ChatGPT exaggerated the effectiveness of cognitive behavioral therapy (CBT).
These potential negatives need to be acknowledged and addressed in the implementation of AI in any field, especially healthcare.