Time to light the FHIR and get to grips with standards
- 2 March 2020
In a recent blog post, NHSX has hinted that buying technology which is compliant with standards could be a way to obtain interoperability. Ewan Davis explores whether the standards that we have so far have been developed enough to achieve this.
There is a lot of talk about buying standards-compliant technology as a way to achieve interoperability, like this piece from NHSX. Now, I’m all in favour of the enforcement of appropriate standards, but sadly I’m not aware of any standards that are sufficiently developed to achieve this.
The consensus view, with which I concur, is that the leading standard to support interoperability between heterogeneous systems is HL7 FHIR.
The problem is that it’s not currently possible to specify FHIR compliance in any meaningful way, as the necessary FHIR Profiles against which compliance would be measured do not yet exist.
What I would like my NHS clients to be able to put into their contracts is this: “The Vendor agrees to implement those FHIR Profiles currently published by INTEROPen CIC that fall within the scope of their system. The Vendor further agrees to implement any changes to these Profiles, or new Profiles within the scope of their system, within six months of such changes or new Profiles being published by INTEROPen.”
For this approach to work, we need to have an initial set of FHIR Profiles and an organisation trusted by Vendors, the Professions and the NHS to only publish Profiles that are fit for purpose and not unreasonably onerous for Vendors to implement.
Furthermore, for maximum interoperability we also have to ensure that work on FHIR aligns with other standards activity, particularly in relation to SNOMED CT, openEHR and IHE.
This is all entirely possible, but to achieve it we require three things:
1. A better understanding of standards…
Firstly, policy makers need a better understanding of the key standards – SNOMED CT, openEHR, IHE and HL7 FHIR – and how they fit together to support interoperability and beyond.
HL7 FHIR is the right choice for the exchange of data between heterogeneous systems. FHIR can bring some quick wins but won’t give us the data fluidity we need to fully exploit digital technologies. For this, we need to move towards shared semantics and open platform architectures, incorporating the open standards and frameworks openEHR and IHE-XDS.
HL7 FHIR is a new standard that’s changing fast. It is currently at version 4, but most live implementations are based on Draft Standard for Trial Use v2 (DSTU2) or Standard for Trial Use v3 (STU3). FHIR version 5 is due to appear at the end of the year, and we know there will be breaking changes between versions 4 and 5.
FHIR defines a set of base “Resources” representing a framework for chunks of content (like Medication, Observation or List – there are 145 in FHIR 4.1) from which specific “Profiles” can be created. To achieve interoperability, there needs to be a common set of Profiles, covering the data items to be shared, that is agreed and enforced across the community in which interoperability is sought. No such set of Profiles yet exists for the UK.
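To make the Resource/Profile distinction concrete, here is a minimal sketch of a FHIR Observation instance, written as a Python dict in the JSON shape the specification defines. The SNOMED CT code and values are illustrative, not taken from any published UK Profile.

```python
# A minimal FHIR Observation for a body temperature, as plain JSON-shaped
# Python. A Profile would constrain this further: which codings are allowed,
# which elements are mandatory, what extensions may appear.
import json

body_temperature = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "276885007",  # SNOMED CT concept for core body temperature
            "display": "Core body temperature",
        }]
    },
    "subject": {"reference": "Patient/example"},  # illustrative reference
    "valueQuantity": {
        "value": 37.2,
        "unit": "degrees Celsius",
        "system": "http://unitsofmeasure.org",
        "code": "Cel",  # UCUM code for degrees Celsius
    },
}

print(json.dumps(body_temperature, indent=2))
```

Nothing in the base Resource says the code must be SNOMED CT, or that a valueQuantity must be present at all – that is exactly the gap a Profile fills.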
It is important that any modelling work done to generate the required FHIR Profiles is done in a way that also supports this longer-term objective. These are not conflicting approaches. Just as it is possible to tackle climate change both by building new carbon-neutral energy sources *and* by using fossil fuels more efficiently, we can move towards shared semantics while improving the interoperability of existing systems.
2. A trusted standards body…
Secondly, we need a trusted standards body. Such a body needs to represent the interests of all stakeholders and to draw expertise from the Vendors, the Professions and the NHS frontline; NHS E/D/X needs to commission the modelling work required and then leave the body to get on with it.
We already have such a body in INTEROPen, but it’s not currently working as we need it to, primarily due to lack of support from NHS E/D/X, the vested interests of a minority of Vendors and some in the NHS, and the lack of funding to enable the Professional Record Standards Body (PRSB) to provide appropriate input.
If we want Vendors to commit to implementing UK FHIR Profiles, then they need to have confidence that the Profiles developed are needed, fit for purpose and not excessively onerous to implement. We will only achieve this if they are equal partners in their development. The Vendor community has both the clinical informatics and practical implementation skills to make this happen and, for the most part, both wants and needs interoperability to work. Vendor input needs to be supplemented and balanced by input from the front line of the NHS (CCIOs and CIOs), who know what is needed, professional clinical informaticians (such as those in the Faculty of Clinical Informatics) and the PRSB, who should ensure quality and safety.
The role of the centre (NHS E/D/X) should be limited to funding the work and ensuring the resulting standards are enforced in procurement. The involvement of the centre in the detailed specification of requirements – and, even worse, in the detailed work – has been unhelpful in the past, and the current approach of trying to create the UK Core is not the way forward.
3. Enforcement of standards adoption…
Thirdly, we need to enforce the standards and have an appropriate mechanism for establishing compliance. This means ensuring the appropriate terms and conditions are included in contracts and renewals, and providing a lightweight mechanism for Vendors to demonstrate compliance.
In the past, NHS compliance regimes have been onerous, slow and expensive, and have tended to exclude start-ups and new entrants. There is much to learn from the IHE connectathons and the hackathons being run by INTEROPen. We need an approach to compliance that is effective but simple.
and finally…
Cracking the interoperability challenge is not about the technology or technical standards. We have these! Rather it is about modelling the semantics of clinical discourse. It is a big task, but we know how to do it and have the tools and methodologies to support this work. We need to sweep the politics, empires and vested interests aside and get on with it.
Comments
Nationally there are tens of thousands of data models, but each provider trust is working with hundreds of different data models. That is nonsense, and in part is due to health consultants doing their own thing. Bonkers. Y(our) NHS needs more people who understand the tech, not those who just lead the tech – i.e. more DOers please.
Hi Ewan, you write that there will be breaking changes between Version (STU) 4 and Version 5 of FHIR.
I am interested in your sources for this statement. I ask this because the FHIR team announced with STU 4 that it will remain backwards compatible (thus non-breaking) from this point.
What’s the context behind this question?
Yes, FHIR has had breaking changes, but their effect is nothing like we’ve seen in previous systems.
It’s more difficult to manage changes to the underlying data model, such as a change of codes or making a data item mandatory.
This affects every standard, so the change cost is primarily at a business level, not a technical one. Historically, especially with older XML-based standards, this would cause a high technical cost, but FHIR isn’t as opinionated, so it is more tolerant of change and has conversion support.
Anyway, this is mostly not an issue. A key thing here is that breaking changes *don’t actually break anything*.
Nothing that works in your software breaks when a new version of FHIR comes out. I hear this worry all the time.
All it means is that a newer and better version now exists. You don’t have to use it and nothing you have stops working – ever.
Does your hybrid car stop working when a fully electric model is released? No.
Yours is no worse than before; it is still better than petrol, but it is just no longer the best version there is.
Nothing breaks. You don’t go backwards even if some others go forwards more.
Everyone you talk to with FHIR keeps working too.
Of course, if other people in your exchange community upgrade and make (breaking) changes and turn off old interfaces then you too would have to upgrade.
But, as Kevin says, that happens for lots of business reasons anyway, unrelated to FHIR.
So “breaking changes” in FHIR is mostly a minor worry.
Once you have coded a FHIR interface it has no dependencies on future versions of FHIR and should just keep working.
Now, it’s true that if you heard a new car model was coming out tomorrow, you probably wouldn’t buy one today, and would be annoyed if you only found out later. So take a look at R5, sure – it’s due early 2021, we are drafting it now.
But your purchase won’t suddenly become worse. Any less value is purely relative and psychological. Unlike used cars, FHIR interfaces don’t have a resale market value 🙂
And even with all that, the basic fact, that Bert correctly states, is that FHIR R4 now has significant normative content that simply *won’t ever break*, so there is even less need to worry.
Btw, it specifically isn’t “STU 4”, it’s just the “R4” standard, because FHIR is no longer a “Standard for Trial Use”.
Let me try a metaphor to explain interoperability by messaging versus interoperability using archetypes (openEHR, EN 13606).
Any baker can use his choice of ingredients, his choice of bakery tools and his choice of ovens, and bake a product using one set of rules. Let’s say apple cake.
In healthcare, the baker must use the ingredients supplied by the vendor, the kitchen tools prescribed by the vendor and the oven prescribed by the vendor. And if he is lucky he can bake the same apple cake – or not, depending on the choices made by the vendor.
In other words my first baker has all the freedom to deal with local requirements.
The second baker suffers from vendor lock-in.
The first baker needs only to exchange the recipe (archetype) and another baker can bake the same apple cake.
The second baker must do all kinds of additional tricks (messages as a transformation mechanism) to bake the same apple cake.
Hi Gerard, this analogy fails because the issue is not like baking cakes where you can switch recipe one day and carry on. In most cases, to further strain the analogy, the cake (a software system) was designed many years ago, and the internal recipe cannot easily be changed. Your internal data model is usually already fixed – and maybe even optimised for your own environment. So we usually need to map at the outside edges of our systems to get practical interoperability.
Hi Gérard, openEHR is flexible: changing the archetype means changing the (virtual) data model.
But this is theoretical, because an application is more than a (virtual) data model. An application is also GUIs, semantically rich APIs, mappings to messaging and, not least, users.
Users are trained to understand the semantics of the user interfaces. Most users are not trained software specialists but nurses and clinicians, for whom software is just a tool, and they do not want to think much about how to use it.
So, once an openEHR system is running, it will not change much, but it will extend.
And therein lies a great power, and why I think openEHR is great.
But the argument of flexibility in existing virtual data models is largely illusory.
Ewan et al, may I ask a daft question? What is the problem to which standards are the solution in the NHS? Please don’t say interoperability, as that has 10 different meanings.
Terry – I think most people would agree that it is good for different pieces of medical software to be able to exchange data, rather than, for instance, re-keying it.
Then, given that there are lots of different types of software, it seems to make sense to standardise how they talk to each other, so that people can document and adopt one “language” rather than interface every pair of systems differently.
It’s more than that Rik. It’s about making sure relevant information is available across the multidisciplinary multi organisational care pathways that we need to deliver high quality, efficient and compassionate care.
It’s also about making data available so we can get the best out of big data and AI, to support analytics and research to help understand health needs, target care, improve treatment and measure outcomes.
well yes of course. And all the things you mention layer on top of all the things I mention 🙂 We want standards at each level.
Most systems use similar data models but very different APIs.
On top of that we have many similar workflows.
So we have many projects spending time and money doing things that have already been done before. We should standardise the simple things like referrals, appointment bookings and document management where we can (and do only as much as we need to).
If the basics are standardised we can replace systems more easily, automate more workflows, reduce development costs, improve testing, etc.
Kevin, I think the problem is that systems don’t have similar data models; rather, they have different data models for similar concepts. This makes mapping a > b or a > FHIR > b non-trivial and requires care to ensure clinical safety. Beyond a few high-value items, where interoperability brings some quick wins, this approach is not scalable to the whole scope of health and care.
It is true that systems do not always have comparable data models. Having such models is, of course, useful but will not solve the problem.
Whether it is liked or not, the “a > FHIR > b” approach is the one that is more often than not used. As an organisation that receives data from a large number of systems, it is true that variability exists.
Even from the same suppliers, data varies, either as the authoring systems change or as data is consolidated through acquisition.
Furthermore, we see that data entered into the same system but by different users can vary. This occurs on software deployed within single organisations or across multiple organisations. The variability is both in the richness of data and the semantics (even with SNOMED). So the user interface has a large part to play even more so given the degrees of configuration that current systems allow.
So, if the models exist (and of course you mean openEHR) then getting them used by the masses is a huge (potentially futile) task as it will challenge a significant number of significant systems that are in use today. But even then to get them reliably implemented in a manner that ensures they are used in the same way will add to the overall challenge.
Personally, I think initiatives of the type undertaken by PRSB are very useful. They have to mature and the establishment of a feedback loop to form a basis of improvement is essential. This has to be coupled with SNOMED CT with more guidance on how and where to use it. If not, the data quality issues may just move to the domain of SNOMED.
The big-bang, everyone-uses-the-same-models-nationally approach has, it seems to me, failed to be practical over the last few decades of trying. We can keep trying.
But the less efficient, bottom up approach has made a surprisingly large amount of progress in a very short time. If everyone gets onto a common tech platform as a step one, many exchanges will be possible while waiting for the grand plan to happen. And when the grand plan comes to fruition, it can sit on top of the inefficiently developed but pragmatic layer that has been deployed. (“Oh, so you have decided that we shall use those particular codes via our existing FHIR interface – fine”)
Sometimes a series of inefficient steps is what works, when one giant leap would be better but never actually happens 🙂
Let us at least attack this from both ends.
With you on this, Rik. I think we are talking about incremental convergence. 100 different models would be better than the thousands we have now.
I’m keen we share more and don’t duplicate effort or reinvent wheels.
Ewan
I think we need to separate “the strict use of one single information model” approach from “using own models based on a common reference model”.
Maybe we could have many models, suited to specific use cases, but at least based on the same modelling principles, patterns and common data structures. I think we are lacking in that area, and the main issue is that the people designing these pieces of software design what they think is correct based on their own experience, maybe tied to legacy data, and don’t check against common patterns. Of course, if the government is the one putting those patterns forward, the industry will follow, but the government is not doing that. Yes, I think openEHR can fill that gap in terms of being a good reference model that contains a lot of common data patterns used in many places in healthcare information management.
Bottom line is, I think we need to move from focusing on data exchange to focusing on data management, which includes exchange but not as the main focus. And data management also includes data definition, which many systems fail to do formally.
Hi Pablo, or as Ewan said originally in the article, these “models based on a common reference model” can be FHIR profiles, with the advantage that they are directly compatible with the lower level exchange mechanism itself (also FHIR).
Hi,
It’s a gross simplification, but in essence FHIR is a minimum dataset approach designed for messaging between heterogeneous systems, while openEHR provides a maximal dataset to support all use cases.
openEHR has a two-level modelling approach. The openEHR archetype is a maximal dataset, created by an open global clinical community using online tools. The aim is to include in the archetype all of the data items needed to meet all the use cases that all those who can be bothered to participate can think of. This means, for example, that even a simple bit of content like body temperature ends up with 12 data points and 13 items of metadata.
At the second level of modelling (the openEHR template), archetypes are combined and constrained to match a use case, which for most use cases of body temperature will be a single data point.
If we build openEHR and FHIR with an underpinning data model (in practice this means using openEHR models for both openEHR and FHIR) then we can create alignment between a FHIR Resource or Profile and an openEHR template.
My contention is that we should use openEHR’s long-established modelling approach, with its supporting governance, community, tooling and a substantial body of existing work, to create the FHIR Profiles we need. Take, for example, the FHIR Observation Resource, which covers a large number of different clinical observations, each of which has to be modelled before FHIR can be used to exchange it. There are already a large number of detailed openEHR observation archetypes, and we have work from Diego Boscá at the Universitat Politècnica de València that demonstrates we can automatically generate FHIR Observation Profiles from openEHR.
By using openEHR for the underpinning modelling we both get the FHIR Profiles we need more quickly and assure their alignment with key standards.
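A rough sketch in Python of the two-level idea described above – the archetype as a maximal set of data points, the template as a use-case constraint. The field names are invented for illustration; this is not the published openEHR body temperature archetype or real openEHR tooling.

```python
# Illustrative only: an "archetype" as a maximal catalogue of data points,
# and a "template" as the constrained subset one use case actually needs.
ARCHETYPE_BODY_TEMPERATURE = {
    "temperature": {"type": "Quantity", "units": ["Cel", "[degF]"]},
    "body_exposure": {"type": "Coded"},
    "location_of_measurement": {"type": "Coded"},
    "environmental_conditions": {"type": "Cluster"},
    # ...the real archetype carries many more data points plus metadata
}

def make_template(archetype: dict, keep: set) -> dict:
    """Constrain an archetype to the data points a single use case needs."""
    return {name: spec for name, spec in archetype.items() if name in keep}

# For most uses of body temperature the template keeps a single data point.
ward_observation_template = make_template(ARCHETYPE_BODY_TEMPERATURE, {"temperature"})
print(ward_observation_template)
```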
RK:
“So, if the models exist (and of course you mean openEHR) then getting them used by the masses is a huge (potentially futile) task as it will challenge a significant number of significant systems that are in use today. But even then to get them reliably implemented in a manner that ensures they are used in the same way will add to the overall challenge.”
Well, FHIR is trying to do that – proposing one model for the whole sector – but a) it doesn’t separate clinical semantics from message semantics, b) it doesn’t have proper clinician involvement to build the clinical resources and c) it builds many resources that are speculative and don’t correspond at all to existing systems.
The openEHR models are at least high-quality, clinically built definitions that can be re-used across multiple implementation technologies. The usual misunderstanding is that the openEHR archetypes are fixed models of content; but they’re not, they are data point/group definitions. Recombine as much as you like in templates to get data sets matching existing systems’ data. In FHIR you have informal IGs to do that – it has not yet been worked out.
Defining clinical semantics in a message standard is the worst of all worlds – it imposes a third model on sender and receiver systems that today have models of their content, forcing unnecessary data translation. It’s an out of date idea.
What we should be doing is agreeing clinical data point/group models across the sector, outside of things like FHIR, CDA etc, and then providing a mechanism to do the data-set (recombining) step inside those implementation technologies. We’ve been doing just that in openEHR, integrating numerous types of data sources for over a decade. If FHIR adopted that approach, then it really would be useful, because it does have some technically useful features, and solves terminology access in a reasonable way.
Yes Ewan, at a high level, especially clinical, the models aren’t similar. In some areas they are, e.g. Patient, Encounter, Practitioner, Organisation. Document metadata and service/referral requests could, with a little push, fall into this category.
I believe we have enough commonality here to generate significant interop and workflow automation.
This is ignoring clinical content; I believe getting to an openEHR or FHIR model is desirable, but we are a long way from that. Most interop is stuck at document level, so even a (coded) forms-based approach would be an advance.
I’m not sure how we solve this. I would prefer a bottom-up evolution driven by clinical safety concerns, accepting that detailed modelling gets resistance (especially at implementation).
Decouple this from where we have commonality (and some standardisation), allowing that to connect up the NHS.
Standards as they are today in e-health are one of the main problems, not the solution. See any of the first dozen posts here: https://wolandscat.net .
The summary of why e-health standards are of poor quality is that they are attempts at a formal architecture built by committees instead of by design methodology, and HL7 in particular also avoids using orthodox technical methods, resulting in often complex and difficult workarounds.
This is mainly a failure of process at e-health SDOs – trying to engineer things by democracy. (Normally, ‘standardisation’ is done by obtaining working technology from companies, universities etc, and then trying to agree a common one or else just picking a winner. Either way, all the science has been done. In e-health, SDOs try to do the science and engineering with same process, and it doesn’t work).
Good piece this. I suspect the biggest barrier, not talked about much, is legacy, both in technology and design. Many health IT systems are archaic in both and will not, for example, perform adequately or maintain good audit or control of what goes on over their interfaces. Writes into databases will often require the application front end to do them, simply because nobody can work out where the data should go.
Thanks Ade. The issue of auditing what goes over interfaces is easily dealt with by the use of an API gateway, such as tyk.io. Indeed, the use of such a gateway is a good idea anyway, as it will protect the underlying APIs from many threats, such as DDoS attacks and malformed API calls, and can manage API use with quotas, throttling down or blocking excessive use.
I have no beneficial interest in tyk.io; I just think it’s a great open source product, with good commercial support.
Ewan
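For illustration only, here is a toy sketch of the two gateway behaviours Ewan mentions – audit logging and quota throttling – written as a generic WSGI wrapper in Python. It shows the idea; it is not Tyk’s actual API or configuration format.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-gateway")

class QuotaGateway:
    """Wraps any WSGI app, auditing every call and throttling heavy callers."""

    def __init__(self, app, max_calls: int = 100, window_seconds: int = 60):
        self.app = app
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = {}  # caller address -> list of recent call timestamps

    def __call__(self, environ, start_response):
        caller = environ.get("REMOTE_ADDR", "unknown")
        now = time.time()
        recent = [t for t in self.calls.get(caller, []) if now - t < self.window]
        if len(recent) >= self.max_calls:
            log.warning("throttled %s on %s", caller, environ.get("PATH_INFO"))
            start_response("429 Too Many Requests", [("Content-Type", "text/plain")])
            return [b"quota exceeded"]
        recent.append(now)
        self.calls[caller] = recent
        # audit record for every call that reaches the underlying API
        log.info("%s %s from %s", environ.get("REQUEST_METHOD"),
                 environ.get("PATH_INFO"), caller)
        return self.app(environ, start_response)
```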
Ewan, I don’t know this product, but I don’t think it does what I was referring to. We use Intersystems Ensemble for integration. It has all necessary tools for auditing messages, backing up queues, the lot. I could always go and look at what went across, when etc, if I wanted to get to that data.
The issues I am referring to are things any application will need to know for itself, if it is to make use of data. Firstly, it needs to understand the context and provenance of anything inbound. It needs to be able to sort that in a way that can behave logically for its database. An example in my head [ie not a known use case] is where a programmer many moons ago wrote a set of input screens for a database and built dependencies on timings and data items for a write commit. They may map in terms of type, but not map in terms of completeness. There would be no way the application could perform a write from an interface, or any API could make that write, if it made the existing data model inconsistent.
The audit requirement I am thinking of is partly linked to what goes on within XDS, where a piece of software is aware of all accesses to the data, be it within or external to the application. Typically, if an application is writing to itself, it knows what it has done, even if in some rather haphazard logs by today’s standards. It might not be able to assemble those writes together with writes from an interface if the author of the interface does not know where the original programmer was posting things. Reads are the same, as you’d like to be able to use one audit to see all accesses to information. If an API is calling the data, how does it know where to write back? Ideally, in supported products, the developers are fully aware, and all code is documented. My point is that I think we are sometimes talking about APIs for things that were built before this was a concept.
On performance, I think that many applications were optimized purely for their purpose in the initial design. Some of the things we might now want to do with data do not fit the original model. For instance, we may now want to pull lists from systems that were built primarily for single-record accesses. Over-simplified example, but if the database route to this involves many calls, it will never perform.
I just think there’s a lot of legacy design and technology based on some very old thinking still in use. It will be a while at current rates of change before all of this takes us to where we want to be. Therefore, some standards etc cannot be implemented until these things are addressed.
Hi Rik, the purpose of FHIR profiles is to define refined interchange models based on the FHIR resources, this is still the definition of an interchange format, not a management guide for the underlying information model.
Hi Pablo
FHIR profiles are for adding a higher level modelling onto FHIR, primarily for interchange but also potentially for system internal use. There are existing large scale systems that use FHIR for their internal data stores.
But why should common models only live inside your system? That is unnecessary for the issue at hand (interoperability). Yes, it has some advantages, but it means everyone has to hugely change their system internals – totally impractical for most.
Sure, if you can redesign every system out there to use one model inside, then interoperability is no longer an issue 🙂 (or then again all the different flavours of it may still give you a headache 😉 )
Hi Rik,
FHIR was designed for data interchange; profiles can constrain and extend the base resources, all designed for interchange. If systems are actually changed inside because of FHIR implementation, then developers are not getting it right, or HL7 marketing is messing up its messages to the industry. The whole HL7 philosophy from v2 was “do not interfere with system implementation”. So something is clearly wrong with FHIR, or with FHIR implementers, in terms of system architecture.
Maybe that rings a bell – remember RIMBAA? Which IMO totally missed the point of not interfering with internal system design and was trying to use v3 for a purpose that was outside its scope as a standard.
The key thing about interoperability is that it starts with the information definition, not with the message structure definition. The information is managed by the systems, so definitions should be managed inside the systems. Exchange is one technical aspect of interoperability; the other, more important, aspect is semantics. There are many levels of semantics; not everything can be defined by a code (which has been the HL7 way of defining semantics for decades). There is model semantics, there is terminology semantics, and there is business-rule semantics (like for CDS, or for doing calculations or checking data consistency). On another level you have the interchange semantics, mainly technical formats and communication protocols.
Going back: one thing is a common information model; another thing, which was my point, is to have a common reference model that is not strict per se but contains certain patterns that can be reused in system design, like the well-known design patterns for OOP by the Gang of Four. Those basic common patterns and principles could be used inside systems. Yes, our developers and architects need to adapt, since reinventing the wheel on each system is actually blocking interoperability.
But if you think the internal information models don’t affect interoperability, that’s a huge problem. Of course, it depends on what you understand by “interoperability”. The definition I use is based on “the exchange and effective use of information…”, with a big emphasis on “effective use”. But the cost of making two systems interoperate should also be minimal: you can refactor both systems to harmonise all the data on them, putting in months of work and a couple of million dollars, so that they end up interoperable even though initially they were not. So it is really a matter of cost. Implementing standards at many levels gives the “sensation” that, after all, the system could be easily integrated with other external systems in a plug-and-play manner, which is far from reality.
Another thing to consider: if people are currently changing the internals of their systems because of FHIR, then *every system is actually being redesigned*. This might be a side effect of really not knowing the scope of FHIR, or of just listening to HL7 marketing without understanding the core concepts (and the marketing is strong, sometimes misleading and technically inaccurate, because training and consultancy is a business…). The scary thing is: this is happening right now.
Hi Pablo
So I replied to this in the wrong place, messing up the threads, but I will repeat some of it here, and add a little, so this question is answered:
No, don’t worry, *not* changing their system internals to use FHIR.
But some that are building new systems, are using FHIR internally, because it makes a decent data model.
It’s not everyone, but it’s an option. Why would it be worse than any other data model?
HL7 has evolved since V2 and it’s no longer taboo to say that you might use HL7 inside.
FHIR is now more about modelling than about messaging. Message models don’t make good information models, but FHIR is less about comms and more about system representation (fitting with REST and facades, of course). So the models are more suitable for working internally – purely optional, of course.
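As a concrete taste of that REST style, a minimal sketch of reading a resource from a FHIR server over plain HTTP. The base URL here points at the public HAPI R4 test server, and the resource id is a placeholder; any conformant R4 endpoint should behave the same way.

```python
import requests

FHIR_BASE = "http://hapi.fhir.org/baseR4"  # public test server; any R4 base works

def read_resource(resource_type: str, resource_id: str) -> dict:
    """Fetch one FHIR resource as JSON, i.e. GET [base]/[type]/[id]."""
    response = requests.get(
        f"{FHIR_BASE}/{resource_type}/{resource_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# "example" is a placeholder id; substitute one that exists on your server.
patient = read_resource("Patient", "example")
print(patient["resourceType"], patient.get("id"))
```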
Rik S:
But some that are building new systems, are using FHIR internally, because it makes a decent data model.
Having studied it in detail for a year, I would argue that it makes a terrible data model for anything like an EHR or CDR. I could just pick off some random things, e.g. FHIR Observation is query-oriented, not commit-oriented (and is more or less useless for storing any realistic device data); its model of context is completely oriented to representing data pulled from opaque systems; versioning; auditing; and so on.
A more organised discussion of some of the issues is here: https://wolandscat.net/2019/05/24/fhir-versus-the-ehr/
Hi Neil, it’s true that lab report data can get very complex.
There is no magic way to remove complexity or variability. But I wonder if your issues are truly unique.
HL7 has been doing lab messaging for 35 years, and I bet almost every use case has been tackled somewhere. You are using HL7 V2, by the sounds of it, which is still the workhorse of almost every lab in the world but does struggle somewhat with very complex structures. That’s partly why FHIR (and even V3) have a much more modern tree-style structure.
I would be interested in putting you in touch with the HL7 groups that deal specifically with lab results if you want to look into this further and see what others are doing in this space. It may be a solved problem. And if not, it would be a great contribution.
A typical issue though is that people want to extend a 20 year old live message system with new data, without actually changing the format or the software on the end. That is a hard one to fix, at least cleanly.
Rik
Hi Rik,
I can’t claim to be any sort of expert where HL7 is concerned but I would be interested in identifying if any work has been done in the H&I area.
Hi Neil if you want to find me on LinkedIn I will try to do some research for you.
My company provides a software solution to Histocompatibility & Immunogenetics labs, and it would be extremely beneficial to us if results could be provided via HL7. Unfortunately, the complexity of the results is such that it would be almost impossible to construct messages that handle the results in a meaningful way given current HL7 segment definitions.
Allied to the current HL7 difficulties would be finding consensus amongst H&I professionals as to how such results could be defined, as every lab we work with reports results differently.
“NHSX has hinted that buying technology which is compliant with standards could be a way to obtain interoperability”
I can’t believe that this is in a contemporary news item!
We were banging this drum in the 2000s in the RCR PACSGroup [Neelam Dugar being a leading light here] and in the RCR’s IG committee. There was also lots of discussion about extending it across the board – not just for radiology [though we were then better endowed with standards than other sectors].
How come it has taken so long for the powers that be to even think about doing something along these lines?
William – radiologist and medical informatician, now retired for a number of years
It has taken all this time because saying it (whoever says it) doesn’t actually make it so. Standards do nothing by themselves in our complex information space – they are only the very minimal start of a long, tortuous and never-ending process of ‘standardisation’. As you can see from the other comments, wrangling standardisation from the local flavours, which are needed, vs. national level efforts, is really hard.
There really is no such thing as ‘baked-in interoperability’ – that’s ‘cakeism’ to coin a term.
If you want ‘baked-in standardisation’, you have to adopt ‘baked-in information models’ as per openEHR, and even that requires wrangling, governance and standardisation.
Standards are constructs you apply to an architecture, i.e. a technical flow diagram with standards as the components. I could show you a sample if these comments could accept diagrams.
Grabbing a bunch of standards is like grabbing a bag of tools with no real idea of what you are going to build with them. Has anyone actually defined what ‘interoperability’ means in an NHS context? I have defined 5 ways the word could be interpreted, and they are all different. An example of a working interoperability scenario (were it to be achieved) would put me and others out of our misery.
A standard, like technology, is a TOOL and not a SOLUTION. An example is OWASP, part of the Code of Conduct for Data-driven Technologies in Health.
‘The OWASP Application Security Verification Standard 4.0, the code of conduct, is a formidable document of 68 pages. Is it feasible for all developers inside and outside the NHS to be proficient in these rules? Is it possible to check if they have read and understand them?’ [my comment]
Unless the scores of people, external and internal, working with the NHS fully understand and can implement these standards, the outcome will be a ‘dog’s breakfast’.
To William: “Buying standards stuff will help interoperability.” Yes, and my buying a pair of football boots will get me onto the Barcelona team without any effort.
Much to agree with in both Ewan’s original piece and in Rik’s responses. FHIR is the best exchange format we have had so far and can be made to work at local level without extensive profiling, just by applying local guidance, e.g. on which extensions to use and which codes to use. The problem, though, arises when you try to scale this up from local point-to-point app use into something like an LHCRE, or indeed nationally. At that point you are going to start getting into the same sort of muddle as we have with HL7v2 messaging.

The idea of INTEROPen Care-Connect curation was to identify the high-value profiles that could be established nationally and adapted locally, if necessary. I am proud of the work that was done and still do not understand why NHS-D withdrew support. I knew we could do much better: 1) in having a parallel community curation effort, supported by but not bound to NHS commissioning diktat, and 2) in making use of openEHR tooling and methodology to support the tricky aspects of getting detailed computable content requirements before deciding if, and which, FHIR resources and profiles best fit.

Doing detailed content wrangling is hard – it involves listening to industry, clinicians, management and developers and gradually trying to eke out consensus and compromise. That’s what we try to do in openEHR. You cannot dodge that bullet if you are going to deliver fit-for-purpose artefacts or implementation guidance at the kind of scale that is needed to transform health and social care. I worry that there is a belief that this can be done quickly and easily, just by encouraging the use of ‘standards’, commissioning very high-level ‘wish-lists’ like the PRSB ‘Core Standards’ (good work, but nowhere near fit to be called a standard), or asking a tech team to do it in-house and present the results for the nation. INTEROPen curation was IMO the right approach, even if it could have been done better, by making more of openEHR tooling/methodology and community curation.
Alongside curation, support was also withdrawn from maintenance and delivery of the profiles.
This was taken on by INTEROPen (unfunded, primarily by HL7 UK members and suppliers).
Feedback from implementers was that the examples, support, reference implementation, etc. in the wider Care Connect effort were invaluable.
Hi Pablo, no, not changing their system internals to use FHIR.
They are building new systems using FHIR internally, because it makes a decent data model.
Why panic? It’s not everyone, but it’s an option. Why would it be worse than any other data model?
HL7 has evolved since V2 and it’s no longer taboo to say that you might use it inside. Sorry about that.
Hi Rik, there are two things:
1. The FHIR model was not designed for internal information management, so using it that way is like when RIMBAA tried to use the RIM to build systems – a total failure. This is forcing a square to fit a circle. No panic at all; it just seems people are ignoring past experience, or are too busy to research a little before building new systems.
In technology you can use anything in any way, but the fact that you CAN doesn’t mean you SHOULD. This is not taboo; it is well known that the HL7 approach is “external”, not “internal”. The question is: why is this approach better than other approaches? Where is the comparison? Where is the research? I don’t see any scientific argument in favour of that approach; I just see “let’s jump into this big thing”, like the old saying “it’s IBM, nothing could go wrong”.
2. There are approaches for working internally – methodologies, tools and standards – and the good thing is: those approaches are compatible with the use of FHIR or any messaging standard. I know FHIR goes beyond messaging – it also has semantics for operations and some other features – but it was still designed for external use and not for internal information management purposes. This is a fact; I’m not making it up. The thing is, this topic is not being discussed enough, and bad decisions are being made every second about how to use FHIR in our systems, just because “we need to use FHIR”.
Hi Pablo
>it is well known that the HL7 approach is “external”, not “internal”.
No, sorry. It was strictly that way, but things change.
>1. the FHIR model was not designed for internal information management
Happy to hear your assertion, but I am disputing it. It has been clear since the early days of FHIR that it is an information model not a message model and could be used internally.
Step 1 – interoperate by exposing an idealised representation of your internal model (not a message model)
Step 2 – realise that now you have that model, it’s a good candidate for using internally (if you have the luxury of being able to choose your model)
This was realised by the FHIR designers and well known in the community 5 years ago or more.
>2. but it was still designed for external use … This is a fact, I’m not making it up
I simply disagree that this is a fact 🙂 The open source presentations about FHIR architecture have been documenting this for years.
But intent is irrelevant anyway. You would have to explain why they are not good models for the purpose, rather than saying that it was not originally meant to be that way so it mustn’t be so.
No one is saying “we need to use FHIR”. People are using FHIR and saying, “hey, this actually works”. And “I even used it for the internals of my new system, and that works too”.
Ask the Ukrainians 🙂
Hi Rik,
Of course you can use FHIR at a project level without a UK set of profiles, but this does not provide system wide interoperability and results in lots of duplicated effort with multiple projects defining the same bit of clinical content badly in incompatible ways.
It’s always tempting for developers to just get on and do it, and I’m not against that when there is no UK standard Profile available (as will often, even usually, be the case), but we need to acknowledge that while this approach may meet project goals, it does not help with system-wide interoperability and is highly sub-optimal at the whole-system level.
As for not needing Profiles at the project level: while this may be technically true, you still need to define how you are going to use a particular FHIR Resource, whether you do this through formal Profiling or some ad hoc hack. Most FHIR Resources, e.g. Observation, List, Task, are too generic to enable interoperability, and even well-defined Resources, e.g. Medication, Allergy, require some “profiling”, formal or ad hoc, for a Resource to be useful.
The task of modelling clinical content, so that every project does not need to repeat the work, is a large one, but the expertise to do this exists in the vendor/developer and professional communities, as do the methodologies, governance and tooling. In INTEROPen and PRSB we have even created the sort of bodies we need to take this forward, but they have been hobbled by a lack of resources and attempts at micro-management by NHS D/E.
If we want to see interoperability progress at pace we need (to steal from a previous project) “Do Once and Share”
Note: as some may not know, the terms FHIR Resource and Profile have particular meanings in the FHIR standard. If you are setting policy around, or working with, FHIR and don’t understand the difference, you need to. See https://digital.nhs.uk/services/fhir-apis/fhir-profiles-and-fhir-apis
Ewan
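For readers following that link, here is a hedged sketch of what a Profile actually is on the wire: a StructureDefinition whose differential constrains the base Observation Resource. The canonical URL and the particular constraints below are invented for illustration; they are not a published INTEROPen or UK Core profile.

```python
# Sketch of a FHIR profile: a StructureDefinition derived by constraint from
# the base Observation. The differential lists only the elements we tighten.
temperature_profile = {
    "resourceType": "StructureDefinition",
    "url": "https://example.org/fhir/StructureDefinition/TemperatureObservation",  # hypothetical
    "name": "TemperatureObservation",
    "status": "draft",
    "kind": "resource",
    "abstract": False,
    "type": "Observation",
    "baseDefinition": "http://hl7.org/fhir/StructureDefinition/Observation",
    "derivation": "constraint",
    "differential": {
        "element": [
            {   # require at least one coding on Observation.code
                "id": "Observation.code.coding",
                "path": "Observation.code.coding",
                "min": 1,
            },
            {   # insist the value is a Quantity rather than leaving value[x] open
                "id": "Observation.valueQuantity",
                "path": "Observation.valueQuantity",
                "min": 1,
            },
        ]
    },
}
```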
Hi Ewan, I can see what you are saying and it is good to think big.
Where I disagree is that I believe we don’t need national agreement on everything to exchange data locally and usefully.
Adopting FHIR to exchange what you already have is good. Nationally agreed data recording standards are good.
Those are orthogonal, conceptually and in practice – assuming you are agile. Is it bad to share our “imperfect” data before the perfect model comes along?
But FHIR profiles are *another* orthogonal axis and this is really my main point. Maybe it is subtle and technical, but I am trying to counter some common “profile inertia”. People think “FHIR is new, interop requirements are new to us, and on top of that we can’t even start til we have mastered FHIR profiles. Oh no!”.
Experience shows that this is not true. Ask Kevin. Great FHIR interop projects exist and they don’t use profiles. Let’s not conflate design and agreement with FHIR profiles. This is a misunderstanding or a misrepresentation.
You need good requirements, at the right level. That’s all. I am not saying, come on plucky developers, let’s just hack. But there really is no need to have these as FHIR profiles. None. And the awkward issue is that this is making FHIR profiles the problem (“FHIR is not ready!”), when in fact it is just a lack of agreement.
It’s like blaming PDF for the lack of good documentation. (“PDF is not ready!” 😉
FHIR is ready to do whatever we want. If we can’t decide exactly what we want, nationally, then that is an issue, an age old issue. But FHIR profiles are not the blocking factor. This fails to see that profiles are a tool to represent requirements, not the requirements themselves.
FHIR works, with requirements. And FHIR plus profiles can work better. But that doesn’t now mean that FHIR minus profiles suddenly doesn’t work.
By the way it really is perfectly possible to work and get great results with the un-profiled FHIR Observation (as an example). It really is easy to use. You just need to add your local requirements (not profiles), the same way you must apply requirements to any software technology. If you don’t want to use Observation.device, then don’t. No profile required.
That won’t solve national level interop today, but you will learn how to exchange data better and faster. With REST and facades and mapping to and from JSON etc etc you have plenty to get stuck into, all of which is a good platform to profile later.
And you will be ready to record the second Korotkoff sound whenever everyone decides that is what we all should be doing 🙂
Rik
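A minimal sketch of what “requirements, not profiles” can look like in code: ordinary validation that enforces local rules against an Observation dict. The rules here are invented examples of local policy, not any published standard.

```python
def check_local_rules(observation: dict) -> list:
    """Return the local-rule violations found in a FHIR Observation dict."""
    problems = []
    if observation.get("resourceType") != "Observation":
        problems.append("not an Observation")
    codings = observation.get("code", {}).get("coding", [])
    if not any(c.get("system") == "http://snomed.info/sct" for c in codings):
        problems.append("local rule: code must carry a SNOMED CT coding")
    if "subject" not in observation:
        problems.append("local rule: subject is required")
    if "device" in observation:
        problems.append("local rule: we don't use Observation.device")
    return problems

# An empty list means the instance satisfies our local requirements.
print(check_local_rules({"resourceType": "Observation",
                         "subject": {"reference": "Patient/123"},
                         "code": {"coding": [{"system": "http://snomed.info/sct",
                                              "code": "276885007"}]}}))
```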
They may be generic, but the absence of profiles doesn’t prevent implementations from being correct. For example, correctly coded NEWS2 Observations were on the NHS reference implementation servers before profiles and curation were completed.
Not saying we need to abandon the top-down approach, but we may need to look at how we achieve our aims. We can’t keep taking waterfall, siloed approaches to projects if we want tech to enhance care.
We need to get agile and open?
I wear many different hats at the moment but as a supplier, building a FHIR based system, I have to follow practicalities on the ground.
The primary source of rules (profiles in FHIR terminology) is HL7v2 and the NHS Data Dictionary.
So for structured or coded information we expect to receive 80%+ via HL7v2; for export this may be only 50%, with the bulk of the remainder being digital paper (we could add structured data to this, but it’s unlikely to be processed).
So, oddly, although I’m working with a FHIR system, I’m not going to exchange data with other systems using FHIR. We don’t actually have many mainstream UK FHIR interop standards, except for transfer of care with its document standards.
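A toy sketch of the “HL7v2 in, FHIR out” work Kevin describes: splitting a PID segment on its | and ^ separators and mapping a couple of fields onto a FHIR Patient dict. Real v2 parsing (repetitions, escape sequences, Z-segments) is far messier; the segment and identifier below are made up.

```python
# Illustrative only: map two fields of an HL7v2 PID segment to FHIR Patient.
def pid_to_patient(pid_segment: str) -> dict:
    fields = pid_segment.split("|")          # PID|1|...|ID^^^Assigner|...
    identifier = fields[3].split("^")[0]     # PID-3: patient identifier
    family, given = (fields[5].split("^") + [""])[:2]  # PID-5: patient name
    return {
        "resourceType": "Patient",
        "identifier": [{"value": identifier}],
        "name": [{"family": family, "given": [given]}],
    }

# A made-up example segment, not real patient data.
print(pid_to_patient("PID|1||9434765919^^^NHS||SMITH^JOHN"))
```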
Whilst HL7v2 is, of course, an important source of information, let’s not forget that the lack of consistency of HL7v2 deployments is well known – within an enterprise, certainly; outside, it takes more work. As for the Data Dictionary, this is helpful (in England) but again has drawbacks, in that it is inflexible and has largely been developed to support secondary-use datasets, not point-of-care interoperability.
Sure. But I seem to recall quite a lot of conversation about ‘how’ to put together the various coded Observations that did involve the INTEROPen curation team. It’s about visibility and process.
Who says the codes are correct?
Do they line up with existing systems and process?
Do they line up with other ‘community’ efforts that might have picked different SNOMED CT codes – there is a debate raging in Norway about which SNOMED concept hierarchies to use for this sort of purpose.
One of the critical events in the curation story was when a whole bunch of Profiles, developed in good faith with child health, arrived at our door and we were asked to sign them off as ‘curated’, even though they had been developed in isolation. Correct? Aligned?
This is going to get even harder when we get into social care and care planning, where there are almost no pre-existing norms.
So do we wait for all these intractable issues to be solved nationally?
Or do we foster local use of FHIR, where people develop what works for them in their community?
Maybe there is not one single right answer. Perhaps once data is flowing, we can decide what works best and possibly map between different successful approaches (and add or change some interfacing codes later, no big deal).
We can send useful data before these debates are concluded. Exchanging names, identifiers, dates and some text with social care would be a start, long before we decide exactly which SNOMED value set is the winner.
I actually agree, Rik, and argued that such work should be better informed and supported by the national effort, even if it was not part of a specific NHS England commission. The problem is not when people are doing small bits of work as you describe; it’s when much bigger chunks of work with extensive profiling collide with national efforts.
Hi Ewan
FHIR profiles are useful and nice to have. But they are not absolutely mandatory. There are successful FHIR projects that don’t use them (and happened before FHIR profiles even existed).
Profiles are a FHIR specific expression of (some of) the business rules and requirements of a certain domain or functional area.
We have been building software that implements documented requirements forever though, and we have no need to stop doing that while we wait for a particular formalised view of those requirements to be made for us.
If you have requirements now, you can implement FHIR now, and can use any method you choose to show conformance – to your requirements. FHIR profiles are great, but they did not invent conformance (or requirements).
A standardised UK profile might mandate NHS/CHI number use, SNOMED CT coding and add a couple of nationally useful extensions.
That’s a good start, but actually only a few fairly trivial and well known constraints.
You can go further, but much beyond that they get use-case specific and less generally applicable.
You still have to actually implement FHIR and all your dozens of other business rules and constraints that a real project needs (you may even want to make your own profiles for those – but you don’t have to).
But the amount that the national profile might buy you is not that significant compared to the bulk of the project, imho.
So I suggest that people consider whether they want to wait for a perfect 10% or 20% to be defined for them (a small subset of their project’s needs), or instead just get on with implementing it all as soon as your timeline and requirements allow.
Yes it’s nice to have some of your work cut out for you – and it will be great if the community pools its experience into making these standard profiles to help others – but don’t let that stop you innovating in the meantime.
Rik
Excellent, no-nonsense summary of the challenge, with the answers thrown in for free.