Ten digital priorities for NHS patient safety

20 April 2021

A number of patient safety and digital experts have given their views on immediate digital priorities that could make a significant difference in the NHS.

Professor Mike Bewick, England’s former national deputy medical director and board member of C2-Ai:

1. Strengthen digital literacy

Eric Topol’s 2019 review of the future workforce laid the foundations around this urgent requirement which still needs to be advanced. Digital literacy is fundamental to enabling effective and safe use of technology in the NHS – both for professionals and patients.

People are usually digitally literate in their own lives, but they are often less literate in the IT of a disjointed health and social care system. Some people will become fluent quickly; others will need more support.

This is about equipping people to use technologies with confidence, to interpret what tools tell them, and to know when to act.

Part of this is about determining who needs to be trained. For example, as remote monitoring technology expands to cover more people in their homes, it can’t always be doctors and nurses who pick up on signals from data. Other people need to be trained to pick up low-grade signals so that they can understand how to prevent issues or escalate matters quickly if things start to go wrong.

Covid has changed how we use technology for patient contact too. We are still at the beginning of using digital systems as a primary means of contact. People are becoming more familiar with that, but many more people need support. In self-care scenarios, for example with apps that allow people to manage their own COPD [chronic obstructive pulmonary disease], patients might need that added level of digital literacy or tuition to ensure apps are used well.

2. Automation

We have to reduce human error. Research on premature deaths and adverse outcomes shows that the human factor is often the most significant. For years that was used to punish individuals, whereas it is a systemic issue when a system allows a person to practise without suitable technology, commensurate skills, or the time to do their job. To help, we need more automated processes. We need systems that measure the biological variability that impacts on care. In ICU, automatic alarms sound when something goes wrong; they don’t always sound on routine elderly care wards. We need to do more to pick up errors early on, and to intervene.

When it comes to AI, healthcare is not that advanced yet. Data can provide warnings – as it does for share prices – but what we don’t use it for particularly well is automating a treatment pathway. If someone is diabetic with an implanted insulin device, could AI alter their drug dosage automatically?

3. Investment

We spend less than 1% of the health service budget on R&D, yet we are going through a revolution in technology that could have a major impact on care. Organisations need to look at what technologies are available and how they can be evaluated. At the moment, many healthcare organisations don’t do that very well. A company producing cars or microchips would be experimenting all the time to find which technologies offer the most value, or are the most efficient or the safest. There are many good examples of technology making a difference in health and care, but we don’t have a systemised way of managing this.

Helen Hughes and Clive Flashman – Patient Safety Learning:

4. Better access to centrally sourced and patient-generated data

Care provision is often based on when a patient was last seen by a clinician. Patient-generated data from wearables or apps could make available data more timely and contextual to the patient. This could make a big difference to:

  1. Creating dynamic patient risk profiles in near real time. These could suggest optimum timings and approaches for interventions, probability of re-admission, or the need for additional support. As devices continue to get smarter and AI becomes more accurate, they could, for example, read someone’s heart rate variability to determine future risk of a mental health crisis.
  2. Highlighting trends in unsafe care so that they can be targeted quickly to avert more significant harm.
  3. Understanding the impact of actions taken so that lessons learned can be continually refined and shared. Closed-loop learning is not yet well used in healthcare.
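The heart rate variability idea in point 1 can be sketched as a minimal, illustrative computation. RMSSD is a standard time-domain HRV measure, but the 20 ms threshold and any link from low HRV to a future mental health crisis are purely hypothetical assumptions for illustration, not clinical guidance:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences -- a standard
    time-domain heart rate variability (HRV) measure."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def risk_flag(rr_intervals_ms, threshold_ms=20.0):
    """Hypothetical rule: flag for clinical review when RMSSD falls
    below a threshold. The threshold and its link to any specific
    risk are illustrative assumptions, not validated values."""
    return rmssd(rr_intervals_ms) < threshold_ms

# Example: RR intervals (ms) as a wearable might report them
steady = [800, 802, 799, 801, 800, 803]   # low variability -> flagged
varied = [800, 850, 760, 890, 740, 880]   # high variability -> not flagged
print(risk_flag(steady), risk_flag(varied))
```

In practice such a signal would only be one input to a risk profile, and the point about near-real-time data is that the flag can be recomputed continuously rather than waiting for the next clinical contact.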

5. Adopt patient safety standards and embed these into new technologies, especially AI

Solutions designed with patient safety standards at their core could be intrinsically safer. This requires including patient safety in the design stage of digital solutions, considering how the product will actually be used and ensuring that it is as safe as possible.

We are currently working with several healthcare organisations to finalise new patient safety standards.

AI is only as good as the algorithms used to create it, and it is essential that those are also designed with end-user safety as a priority. The parameters used to ‘educate’ AI, or the rules made available to machine learning platforms, should always include patient safety considerations.

6. Build safety more strongly into the user experience

We need to look at technology design, intended uses, and how it is actually used. If we put new technology into an existing environment with individuals resistant to change, people might create workarounds, or ignore it completely. Redesigning the environment is key to successful adoption and safe implementation.

Surveys of poorly managed technology implementations have shown they can become a safety risk. If a digital solution is not properly installed, configured and tested with users, then problems related to human factors may inhibit safety from the beginning.

Sometimes technological solutions such as electronic patient records can be highly complicated and designed for organisations rather than the end users – eg, focusing on reimbursement coding rather than capturing clinical observations intuitively. Delivering safe and effective user experience requires co-design and co-production by developers, clinicians and patients.

7. Patient safety maturity index

In the same way that providers measure digital maturity, they ought to be able to use a patient safety maturity index. This could be linked to an accreditation system based on patient safety standards. Digital products could have a stated minimum patient safety threshold that must be achieved before they are procured by healthcare organisations. Users should be encouraged to provide feedback on any safety issues experienced, and ideas to improve the safety of products used.
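The procurement gate described above could work mechanically as in this minimal sketch. The 0–100 scale, the threshold of 70, and the product names are invented for illustration, since no such national index yet exists:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    safety_score: int  # hypothetical patient safety maturity score, 0-100

MINIMUM_SAFETY_THRESHOLD = 70  # assumed procurement gate, for illustration

def procurable(product: Product) -> bool:
    """A product may be procured only if it meets the minimum
    patient safety maturity threshold."""
    return product.safety_score >= MINIMUM_SAFETY_THRESHOLD

shortlist = [Product("EPR A", 82), Product("EPR B", 55)]
approved = [p.name for p in shortlist if procurable(p)]
print(approved)  # prints ['EPR A']
```

The user-feedback loop mentioned above would then feed back into the score itself, so a product’s procurability could change as safety issues are reported in live use.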

Richard Jones, chief strategy officer, C2-Ai:

8. Faster deployment of key technologies

The pandemic has shown how hard it can be for solutions identified as ‘no brainers’ to be deployed in a supported and systematic way in the NHS. This is particularly true when hospitals are under such pressure. For example, NICE states that Covid-19 patients should be assessed for acute kidney injury (AKI). Technology exists today that could help massively: it has been shown to halve AKI cases in hospitals and free up to 10% of ICU capacity. Yet individual trusts often have to make local procurement decisions – multiplying the effort required when bandwidth is very limited.

9. Count what counts, not what is easy to count

We need to move to significant and meaningful metrics on the fundamental objectives of hospitals: saving lives, making people better, and doing so cost-effectively. Current mortality metrics, built purely on statistics, give only a partial view of what is going on and fail to reflect the complexity of the work done. Services dealing with the sickest patients can be wrongly penalised.

A 2016 survey of heart surgeons showed one third avoided the most difficult operations for fear of the impact on their mortality statistics. The negative impact of this can be stark. Sir Bruce Keogh, former national medical director, stressed that statistical mortality should be used as a “smoke alarm” – a precursor to more meaningful investigation. Instead, this has become the primary focus.

Meaningful measurement of complications and avoidable harm is almost completely absent. Many hospitals rely on legacy reporting and a few easy-to-count metrics such as falls and pressure ulcers, but these miss the bigger picture. Systems can now be used to measure and help comprehensively reduce avoidable mortality and complications to a high level of accuracy.

Tackling avoidable morbidity, as well as mortality, in hospitals is important. Across our referential dataset (nearly 200 million episodes of care from 46 countries), we see that patient-level, risk-adjusted morbidity is poor. A typical hospital’s monitoring and reporting today can see only 10% of the issues that could be identified and acted on. The cost of this should not be underestimated. According to Healthcare UK, instituting even basic systems of this kind could result in 70,000 fewer deaths in UK hospitals each year and £1bn saved for the NHS to redistribute and deliver more care.

10. Scanning multiple hospitals can double the effect of new systems

Analytical improvement tools in hospitals can save anywhere between £2 million and £15 million in costs and can help to save hundreds of lives. But our international data shows many problems are systemic and have knock-on effects outside the hospital setting. If the NHS were to scan across hospitals in an observatory function – across ICSs [integrated care systems], regions or nationally – this would enable identification of best practice, highlight problem areas and help to prevent the issues behind some of the scandals hospitals have experienced in recent years.



2 Comments

  • We don’t do nearly as much health economics as we should. If we priced up our systems in QALYs then we’d have a common and appropriate currency to compare our investments against each other and against the non-IT stuff like drugs and clinical staff.
    I assume that the patient safety maturity index would have some kind of risk score about what potential harm the product could do. It’s also worth considering the risk it mitigates. Every time a patient record tells a clinician something new about the patient, and they act on it, a patient safety incident is avoided. The value of the system is then a matter of combining the average degree of harm avoided with the proportion of record views that are new info. That would be a much more concrete metric than the traditional, “Hands up, everyone who thinks it might be useful.”
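    The commenter’s proposed metric can be made concrete with a small sketch; all function names and numbers here are illustrative assumptions, not figures from the article or the comment:

```python
def incidents_avoided(total_views, new_info_rate, act_rate):
    """Expected safety incidents avoided: record views that surfaced
    new information about the patient and were then acted on."""
    return total_views * new_info_rate * act_rate

def system_value(total_views, new_info_rate, act_rate, avg_harm_cost):
    """Expected value = incidents avoided x average harm cost avoided."""
    return incidents_avoided(total_views, new_info_rate, act_rate) * avg_harm_cost

# Illustrative numbers: 100k views/year, 5% contain new info,
# half of those are acted on, £300 average harm cost avoided
print(system_value(100_000, 0.05, 0.5, 300.0))
```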

  • Technology is the answer to everything again. Has ANYONE thought of examining the processes which provide the requirements by which to assess the correct technology to employ? If you bolt technology onto inadequate processes, you compound its shortcomings. Even Bill Gates has spotted this, and he wouldn’t know a large IT system from a grand piano – it was he who gave a ‘yes’ to go ahead with NPfIT!
