Davey Winder: Does AI have a part to play in securing NHS data?
- 30 July 2018
Unless you have been living in a cave for the past month, you will know that the NHS has recently had its 70th birthday. However, you could be forgiven for not knowing that artificial intelligence (AI) is celebrating the same anniversary.
It was in July 1948 that Alan Turing’s Intelligent Machinery report was delivered to the National Physical Laboratory. It started with the words: “’You cannot make a machine to think for you.’ This is a commonplace that is usually accepted without question. It will be the purpose of this paper to question it.”
Fast forward to now, and AI is the current buzzword when looking at the future of the NHS through technology-tinted spectacles. At the end of June, Matt Hancock – then the secretary of state for digital, culture, media and sport, now the health and social care secretary – opened the new London Office for Rapid Cybersecurity Advancement (LORCA).
In his speech, he talked up the importance of AI to the NHS. “All of the great advances in the human condition have been led by improvements in knowledge and collective intelligence,” Hancock stated.
He went on to speak about how it is going to be used to “transform the prevention, early diagnosis and treatment of a number of diseases”.
While confirming the opening of NHS data sets will be critical to train the intelligence through machine learning (ML), Hancock also said “making sure that the data is both secure and held in a way that enhances privacy” is a priority.
Rise of the machines
From the health focus of IBM’s Watson AI platform, through to Google-owned DeepMind projects such as the Streams app developed with the Royal Free London NHS Foundation Trust, AI promises much when it comes to analysing big data. Not least that it can do so far more quickly, from far greater pools of data and with far greater contextual accuracy than humans could ever hope to match.
Most of this isn’t disputed – unlike sharing of patient data to feed the machine beast. But I’m thinking less about the transparency of sharing such data with intelligent machines, and more about how such machines might help secure it. Not least as NHS Digital already seems to be on the case when it comes to de-identification of patient data, having signed a contract with Privitar for such software.
This should enable consistent, privacy-compliant data streams to be linked across care settings and geographical boundaries. It should also aid the progress of AI within the NHS since the bigger the data sets, the better the inferences that can be drawn using machine learning technologies.
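To illustrate how de-identification can still allow linkage, here is a minimal sketch of consistent pseudonymisation using a keyed hash. This is an illustrative technique only, not a description of Privitar's actual software; the key and patient identifier are invented.

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Map a patient identifier to a consistent pseudonym.

    The same identifier always yields the same token, so records can
    be linked across data sets without exposing the identifier itself.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical key, held securely by the data controller
key = b"example-secret-held-by-data-controller"

token_a = pseudonymise("943 476 5919", key)  # invented NHS number
token_b = pseudonymise("943 476 5919", key)
assert token_a == token_b  # consistent, so linkable across care settings
```

Because the hash is keyed, an attacker who obtains the pseudonymised records cannot simply hash every possible NHS number to reverse the mapping without also obtaining the key.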
Is it AI… or is it ML?
Ah yes, ML. Without wishing to get too tied up in technical semantics, pretty much all the artificial intelligence that gets spoken about in relation to the NHS isn’t actually ‘real’ AI at all; it’s ML.
Machine learning algorithms are most commonly used in security applications to understand what a normal flow of data, access request or network activity looks like. They are not completely autonomous: they require a degree of teaching to know what they are getting right or wrong, but over a relatively short period of time they can contextualise the data presented and infer intent to a high degree of accuracy.
In other words, these systems are better placed than human eyes alone to identify unusual and potentially dangerous scenarios. A human analyst is still required to follow up on that incident reporting, but because there are far fewer false alarms, incident response resources can be applied much more efficiently. The intelligent machine becomes another analyst sitting in the security operations centre – albeit a very highly experienced one – offering guidance to the physical team.
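The idea of learning what 'normal' looks like and flagging deviations can be sketched very simply. The example below uses a basic statistical baseline rather than any particular vendor's algorithm, and the activity figures are invented for illustration.

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn what 'normal' looks like from historical activity counts."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical hourly patient-record access counts for one user account
history = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]
baseline = fit_baseline(history)

print(is_anomalous(14, baseline))   # typical activity -> False
print(is_anomalous(500, baseline))  # sudden bulk access -> True
```

Real security products model far richer features than a single count, but the principle is the same: the system raises an alert only when behaviour departs sharply from the learned baseline, which is why the human analysts downstream see fewer false alarms.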
A possible security resource
Many security solutions are already implementing ML capabilities. The NHS just needs to catch up and allow ‘AI’ to start having a positive impact on its security posture. This is starting to happen. Just recently, for example, NHS Digital announced it has partnered with IBM to provide enhanced data security services to the Cyber Security Operations Centre (CSOC). This will provide access to IBM X-Force Threat Intelligence, a dynamic database of threat indicators.
IBM is a world leader in AI research, and the Watson technology (which famously won $1 million playing the Jeopardy quiz show in the US) has been developing health applications for some years now.
But the platform can also be a security resource, combining cognitive capabilities with that threat intelligence platform to reveal hidden threats. This promises to speed up the threat response process considerably and can scale to large organisations such as the NHS without blinking. Again, it wouldn’t replace human analysts, but it would augment (through intelligent automation) their ability to act on potential threats faster and with fewer false positives.
AI isn’t the solution to the problems of data security in the NHS, but it must be part of the solution in years to come. Human error plays a part in the clear majority of attacks that actually succeed, and while this can be mitigated in part by better education and training, it cannot be eradicated. Not that I’m suggesting AI can eradicate attacks either, but the intelligent automation of auditing and access control processes could certainly cut the human vulnerability count, and therefore the number of breaches… dramatically.