Trust in AI among healthcare staff and patients ‘will improve with regulation’
- 11 July 2022
Trust in artificial intelligence (AI) amongst healthcare staff and patients will improve with increased regulation, a thought leadership report suggests.
The ‘AI in Healthcare – Transforming the UK’s Health System’ report, from law firm DAC Beachcroft, examines the legal challenges facing the use of AI in healthcare delivery.
With AI technology in the NHS rapidly evolving, the report cautions that the key to its continued success is patient buy-in. People want to be reassured that AI technology can protect their rights and manage confidential data. In addition, they want to know that the use of AI will not mean compassion and humanity are lacking in their medical care.
The report also highlights the Medicines and Healthcare products Regulatory Agency (MHRA)’s autumn 2021 consultation on the future regulation of medical devices in the United Kingdom. This set out proposals for changing the current regulatory regime, including for Software as a Medical Device (SaMD) and AI as a Medical Device (AIaMD). The new regime is now due to come into effect on 1 July 2023.
Additionally, last August the multi-agency advisory service (MAAS) was established to provide a one-stop shop for support, information and guidance on the regulation and evaluation of AI technologies used in healthcare.
The issue of data and trust is one that the report says needs to be considered carefully.
“Anyone operating in the AI space in healthcare should think beyond the minimum steps required in order to comply with data protection law and instead ask, ‘how can I make this understandable to patients in a way that will enable them to place their confidence in it?’,” the report states.
Commanding compliance
The report also recommends that compliance be considered alongside and throughout the development of AI technology, saying: “This will ensure that fundamental considerations such as the form and necessity of data, the roles of healthcare and technology providers, and the way AI fits in with clinical services enable the adoption of technology.”
It recommends a “robust and well-planned start-to-end development process” which it believes will ensure that no considerations or critical steps are missed out.
DAC Beachcroft partner, Darryn Hale, said: “There is no doubt that AI technology is going to play an increasing role in the future of the delivery of healthcare, but key to its adoption will be clarity on its regulation alongside trust from both clinicians and patients in the products.
“We are very pleased to have brought together key voices who will have a part in shaping the future of AI in healthcare for this report, with our contributors’ thoughts and insights combining to provide a holistic overview of the key issues and the potential of deploying AI in the delivery of healthcare.”
The report brought together clinicians, regulators, developers, investors and lawyers to discuss AI tech’s potential.
You can read more about the use of AI in healthcare in a column from Andrew Davies, digital health lead at the Association of British HealthTech Industries (ABHI).
5 Comments
The future will be dominated by technology, as digital healthcare requires support, all of which can be delivered through IT. And since AI has progressed and begun to supplant humans as a resource, it is now more reliable and straightforward to handle everything.
Aside from AI, if you look at the healthcare industry and how it operates, you will see that healthcare app development is fast gaining market share.
I don’t see evidence that IT is supplanting people in medicine.
And gaining market share (which market?) isn’t, in my opinion, de facto evidence of healthcare app value. 2.87m apps (2021) don’t necessarily equate to a specific gain in productivity or other value – both of which can be subjective.
I fail to see the direct connection between Regulation & Trust in this, or any other, case.
I may be more trusting if I know:
– How the AI is working (explainability)
– What safeguards there are
– What the AI is actually doing
– Where AI is being used
– Why AI is better at the task in hand
– And whether the AI is making a decision OR informing a decision
Many things are regulated. We have a speed limit on roads; I don’t trust that everyone will abide by it, and experience tells me that they don’t.
The future is in support of technology, as it demands support in digital healthcare and everything that can work through IT. And as AI has come into existence and started taking the place of humans as a resource, it is now more reliable and easier to take care of everything.
Apart from AI, if you look at the healthcare industry and how it operates, [blockchain as a service](https://appinventiv.com/blog/choose-blockchain-as-service-provider/) is something that is rapidly gaining ground in the industry.
Is AI “taking the place of humans”? Where it’s arguably most prevalent, in radiology, I’m not aware of a reduction in radiologists.