Bias in AI: It’s a matter of time

14 July 2022

Artificial Intelligence (AI) has been a popular buzzword in healthcare over the last few years. In a piece for Digital Health, David Newey, deputy CIO at The Royal Marsden NHS Foundation Trust, explores bias in AI and what needs to be done.

On the night of my birthday celebrations in December 2021, I received an email, followed by several texts, alerting me to the Log4Shell cyber vulnerability in Log4j. This innocuous piece of code was originally developed in January 2001 as part of the Apache Logging Services project and was in use in technology stacks worldwide. Written 21 years ago without forethought to its future, this code was to prove a cyber security headache for at least four months as hackers looked for ways to exploit it.

This example demonstrated that code has consequences, and in particular that historic code has consequences. Even now we can look to other systems, such as IBM z/OS mainframes, which to this day run COBOL and FORTRAN dating back to the 1950s, now happily virtualised and still working for the US Department of Defence.

But as well as its impact on legacy code, time also affects societal attitudes and changing demographics. For marginalised communities, the experience continues to change and evolve, sometimes for the better, sometimes for the worse; but nonetheless it changes. Contemporary examples demonstrating a shift in attitudes include the redactions or warnings placed on literature such as Charlotte Brontë's Jane Eyre or Charles Dickens's Great Expectations, or the tearing down of statues such as that of Edward Colston in Bristol. These illustrate how much society has changed since 1958, when the oldest piece of working software code, the US Department of Defence's MOCAS (Mechanization of Contract Administration Services), was written.

Consider now the development of AI and machine learning, and their utility in the field of healthcare. That utility is already hotly contested, with proponents seeing AI as a way to revolutionise medicine, such as in the rapid detection of abnormalities from CT images or digital pathology slides. AI is increasingly being seen as a way to shore up services that already have an acute shortage of trained staff, fighting against a backlog tsunami driven by Covid-19.

Opponents say that AI is another over-hyped technology which, along with virtual reality, blockchain, NFTs and cryptocurrency, is destined to end up on the heap of technological white elephants. They would point to IBM's recent move to sell off Watson Health as an example of a blue-chip company deciding that it just isn't worth it. Yet the reality is that AI is here to stay and, just like the desktop PC, the internet and the mobile device before it, is rapidly moving up the adoption curve, growing more powerful in line with Moore's law.

The introduction of bias

Simply put, AI relies on two key components: the development of an algorithm, and the use of training data to develop a propensity model to predict outcomes. It is here that various factors can introduce bias and hard-bake societal injustice into an algorithm.
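
To make those two components concrete, the minimal sketch below fits a simple propensity model on entirely synthetic data using scikit-learn; the features, outcome and figures are hypothetical, and the point is simply that whatever patterns (and biases) sit in the training data are what the model learns to reproduce.

```python
# Minimal sketch (synthetic, hypothetical data): the "algorithm" plus the
# "training data" together produce a propensity model for new cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient" records: two clinical features and a binary outcome.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The algorithm component: here, a simple logistic regression.
model = LogisticRegression().fit(X_train, y_train)

# The propensity component: predicted probability of the outcome for new cases.
propensities = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"Example propensity scores: {propensities[:3].round(2)}")
```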

Bias can be introduced into the development of AI in a number of ways:

  • Implicit Bias – Unconscious prejudice against a person or group that is not easily noticed by the person who holds it.
  • Sampling Bias – A statistical problem where the sample data is skewed towards specific sections of the population (illustrated in the sketch after this list).
  • Temporal Bias – A machine-learning model can eventually become obsolete because future events were not factored into it.
  • Over-fitting to training data – When AI models accurately predict values from the training dataset but cannot accurately predict new data, and are therefore unrepresentative of the general population.
  • Edge Cases & Outliers – Outliers are data points that fall outside the data's normal distribution. Edge cases are errors (missing or incorrect data) or noise (additional, irrelevant data that can negatively affect the machine-learning process).
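
As a rough illustration of the sampling-bias point above, a check of this kind can be run before any model is trained: compare how often each group appears in the training data with its share of the population the model will serve. The group names, population figures and threshold below are all hypothetical.

```python
# Illustrative sampling-bias check (hypothetical groups, figures and threshold):
# compare each group's share of the training data with its share of the
# population the model will actually be used on.
from collections import Counter

population_share = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}       # assumed reference figures
training_groups = ["18-40"] * 600 + ["41-65"] * 350 + ["65+"] * 50   # synthetic training labels

counts = Counter(training_groups)
total = len(training_groups)

for group, expected in population_share.items():
    observed = counts[group] / total
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: training {observed:.0%} vs population {expected:.0%} -> {flag}")
```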

For the purposes of this article, however, it is temporal bias that we seek to consider.

Just like societal values, an AI algorithm is affected by temporal factors derived from:

  • The demographic make-up of society, which affects both the team that develops the algorithm and the training data used
  • The scientific knowledge and research from which the algorithm is derived
  • The systems that are either modern enough, or have sufficient investment, to be able to incorporate AI algorithms

As a result, and despite best efforts to eliminate bias, code written in 2022 could legitimately be out of date within five years or less, depending on the changes that occur in those factors over time. For example, if an AI algorithm had been developed in 1981 to provide clinician decision support for HIV patients, how applicable would its advice be in 2022? Would it be aware of the use of antiretroviral therapies or the change in communities affected?
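
One way to make that ageing visible is to keep scoring a deployed model against current outcomes and to flag when it drifts from the performance measured at go-live. The sketch below is purely illustrative; the quarterly accuracy figures, baseline and tolerance are hypothetical.

```python
# Illustrative temporal-bias check (hypothetical figures): track a deployed
# model's accuracy over successive quarters and flag when it drifts well below
# the level measured at initial validation.
baseline_accuracy = 0.91   # accuracy measured when the model went live
drift_tolerance = 0.05     # assumed drop that should trigger a formal review

quarterly_accuracy = {     # hypothetical post-deployment monitoring results
    "2022-Q3": 0.90,
    "2022-Q4": 0.89,
    "2023-Q1": 0.84,
    "2023-Q2": 0.79,
}

for quarter, accuracy in quarterly_accuracy.items():
    drifted = (baseline_accuracy - accuracy) > drift_tolerance
    status = "REVIEW: possible temporal drift" if drifted else "within tolerance"
    print(f"{quarter}: accuracy {accuracy:.2f} -> {status}")
```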

So how can this be addressed?

There is precedent for addressing these types of concern. The Medicines and Healthcare products Regulatory Agency (MHRA), for example, provides ongoing pharmacovigilance for newly licensed drugs, involving:

  • monitoring the use of medicines in everyday practice to identify previously unrecognised adverse effects or changes in the patterns of adverse effects
  • assessing the risks and benefits of medicines to determine what action, if any, is necessary to improve their safe use
  • providing information to healthcare professionals and patients to optimise safe and effective use of medicines
  • monitoring the impact of any action taken

Feedback about drugs is captured through an adverse reaction reporting system (Yellow Card Scheme) as well as ongoing research studies, published literature and morbidity and mortality databases.

Government intervention

Already, General Data Protection Regulation (GDPR) legislation includes provisions requiring organisations to provide transparency around the use of AI-based decisions, and gives individuals the ability to opt out of being subject to an AI-based decision that has legal or similarly significant effects.

In December 2021, the government published a roadmap to develop a regulatory framework to address the use of AI in real-world applications, including the need for the MHRA to expand its remit to incorporate AI. Prior to its merger with NHS England and Improvement, NHSX had begun to consider this issue; but until legislation is passed, there are still plenty of historical and contemporary AI algorithms 'out in the wild'.

Local changes for best practice

Much like the best practice derived from the Information Technology Infrastructure Library (ITIL) for the creation of application and data registers, it should now be incumbent on CIOs to urgently look at their own digital ecosystems and put governance measures in place to track, and regularly review, the AI algorithms in use within their organisations.

An AI oversight committee should be formed that regularly approves the use of AI algorithms against set criteria addressing applicability and bias, as well as setting 'licensed use' dates for software, after which it should either be re-licensed or taken out of service. By being proactive at this stage, organisations can get ahead of the curve and be in the best possible place to comply with future legislation and regulation.
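
As a sketch of what such a register entry could look like in practice, the example below records an approval date, a 'licensed use' expiry date and bias-review notes, and flags when re-licensing is due. The field names, dates and example system are hypothetical, not a prescribed schema.

```python
# Illustrative AI register entry with a 'licensed use' expiry date, after which
# the algorithm must be re-licensed or withdrawn. All names, dates and fields
# are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIRegisterEntry:
    name: str
    clinical_use: str
    approved_by: str            # e.g. the AI oversight committee
    approval_date: date
    licensed_until: date        # re-license or withdraw after this date
    bias_review_notes: str

    def needs_review(self, today: Optional[date] = None) -> bool:
        """True once the licensed-use period has expired."""
        return (today or date.today()) >= self.licensed_until

entry = AIRegisterEntry(
    name="CT abnormality triage model",
    clinical_use="Prioritisation of chest CT reporting",
    approved_by="AI Oversight Committee",
    approval_date=date(2022, 7, 1),
    licensed_until=date(2024, 7, 1),
    bias_review_notes="Training data reviewed for sampling and temporal bias.",
)

if entry.needs_review():
    print(f"{entry.name} is past its licensed-use date: re-license or withdraw.")
```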

Time to change

In summary, we are at a pivotal point: AI algorithms developed now could affect the way in which future generations are treated, potentially hard-baking in inequality and societal injustice. It is a given that the use of artificial intelligence in healthcare will only increase; however, IT professionals need to be conscious of the effects of the passage of time on the development and use of AI algorithms in every area that affects individuals and communities. Although it is only a matter of time before government regulation takes effect, CIOs should already be putting steps in place at organisational level to both prevent and mitigate the negative effects of bias that may be introduced into such algorithms, either at inception or through the passage of time.

After all, time waits for no one.

1 Comment

  • I completely agree with David Newey – now is the time to act, and proactivity will set organisations up for better regulatory compliance. He makes some excellent suggestions about necessary steps to implement AI governance—particularly having formal structures in place to oversee and approve the use of AI systems. Similarly, his comments on the need for regular review identify a core requirement for a successful AI governance approach.

    Based on my discussions with law-makers and regulators, and developments I’m seeing in the field more generally, I believe that those developing and using AI systems in the healthcare context will need to go further still. Right now, leaders in this space should be engaging with the need to lay foundations for good governance for AI, including structures and processes for ethically aligned development, procurement, and contracting; impact assessment and risk analysis and mitigation; response mechanisms for dealing with issues; as well as documenting decisions relating to the deployment of AI to not only be compliant, but to be able to demonstrate compliance.

    Chris Eastham, partner in the Technology & Data team at the law firm Fieldfisher LLP.
