Tuesday, December 30, 2014

MatchstickTV and FirefoxOS

Earlier this year, I spent some time developing for FirefoxOS, a mobile OS based on the Linux kernel and Mozilla's browser engine. To be honest, I have a Revolution phone from Geeksphone that can flash between Android and FirefoxOS, and it has been Android since day one. It works well for what it needs to do as an Android device. However, the idea of an instant developer community for FirefoxOS is compelling - it just hasn't appeared, because the OS hasn't yet become a large enough target. And, like ChromeOS before it, it may never happen.

MatchstickTV is a recent Kickstarter project that takes FirefoxOS and uses it to run an HDMI dongle very similar to Chromecast, but based on open source apps, an open source development platform, and an open source OS. It's a little cheaper as well, but I suspect that is not an important selling feature. On the other hand... because there is nothing in particular to license here, this sort of thing could become a great conference giveaway over time, just like USB sticks used to be.

Honestly, I think a tablet using FirefoxOS with a bunch of onboard educational applications - similar to the original OLPC program, perhaps - would be a really good thing. There is a niche for browser-based mobile, and Mozilla is doing a lot of smart, good things to capture it.

Monday, December 29, 2014

Atomized Integration, IBM Worklight and AngularJS

Over the past year, I have worked fairly extensively with IBM Worklight, Big Blue's enterprise mobility package. In the coming year, I plan to find more things to do with Bluemix, IBM's cloud platform; for now, some thoughts on Worklight.


In general, my guidance has been to use open source mobility frameworks (PhoneGap for cross-platform packaging, Bootstrap for Responsive Web Design, Angular for templating, and some form of OAuth2 for security), at least until the vendor solutions from IBM, Oracle, et al. reach a higher level of maturity; these open frameworks are stepping stones.

If you look at the Gartner quadrants for enterprise mobility and cloud published over the previous year, you will see IBM maturing in the MADP space and Oracle maturing in the cloud space... but maturity in both areas is necessary for enterprise mobility to fire on all cylinders.

Worklight does three things really well:
  1. Simple adaptation on the server-side, using Rhino-based JavaScript adapters.
  2. Integrating with existing WebSphere and SAM infrastructure.
  3. Increasing productivity through modularization and emulation.
Probably the biggest win here is 3. I started out 2014 working with Firefox OS, Saxon-CE and AngularJS, so I was already committed to using JavaScript and XSL as much as possible, and Worklight adapters played into this approach nicely. However, after *hating* the slowness of native Android development using the Android toolkit, what I appreciated most about Worklight was being able to use an emulator that ran as smoothly as the Firefox OS simulator (which is really just a browser plugin). On the server side, we are becoming more accustomed to DevOps tools like JRebel; on the client side, we should have similar expectations - that is, don't use the Android emulator if you can avoid it. It sucks.

I have mentioned previously how much I like Worklight's Rhino-based adapters. They are intentionally lightweight, eschewing any sort of SOA reusability: a Worklight adapter does one thing, and it does it well. This is initially quite pleasant, then very frustrating, and then liberating, as you sort out how much integration you need to do in your client applications. My experience has been that a well designed piece of XSL can convert an XML data source into standards-compliant JSON, and then a client-side library or service can take it from there.

For instance, consider that I have an XML data source containing a number of patient records. Let's say it is NIEM compliant XML. I could build a client application that can consume NIEM compliant JSON, and then all I would need to do in Worklight is create a very simple boilerplate adapter that transforms the XML into JSON. This is assuming that my server-side data source doesn't already support JSON-flavoured NIEM, which would be even simpler. In other words, if my intent is to take a NIEM compliant data source and build a NIEM compliant mobile application, this is quite straightforward. Server-side Worklight adaptation transforms XML into JSON; client-side Angular data-binding injects JSON into the HTML-based presentation layer, and presto, you have an application.
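
As a sketch of how small that boilerplate can be, a Worklight HTTP adapter procedure can fetch the XML and hand it to an XSL transform before it ever reaches the client; the procedure name, back-end path, and XSL file name below are illustrative, not taken from a real project:

  // Illustrative Worklight HTTP adapter procedure: fetch NIEM XML from the
  // back end and apply an XSL transform so the client receives JSON.
  // The path and the XSL file name are examples only.
  function getPatients() {
      var input = {
          method : 'get',
          returnedContentType : 'xml',
          path : 'records/patients.xml',
          transformation : {
              type : 'xslFile',
              xslFile : 'niem-to-json.xsl'
          }
      };
      // WL.Server.invokeHttp performs the back-end call and returns the
      // (transformed) result to the calling mobile client.
      return WL.Server.invokeHttp(input);
  }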

Granted, the development process is not quite that easy. Consider now that we have a number of data sources, some of which are NIEM compliant, some of which are HL7 compliant, some of which are based on direct SQL access, and some of which are ad hoc.

When you look at the various Worklight adaptation examples, you might get the idea that RSS is treated preferentially, which is untrue; however, thinking of these adapters as syndication is still a useful approach.

Throughout the past year, I have been working with HL7 FHIR, a draft standard from HL7 that, among other things, introduces a JSON-based pattern for aggregation and composition that is essentially Atom syndication in JSON instead of XML. It turns out that if all of my Worklight adapters create Atom-compliant JSON on the server-side, then I can use a JavaScript Atom library in the client, and it really doesn't matter what format my data sources are using. By the time they reach my client application, they are all Atom-based.
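
To make that concrete, here is roughly the shape of Atom-flavoured JSON I have in mind; the field names come from the Atom feed/entry vocabulary, while the identifiers and the embedded patient content are purely illustrative (FHIR's own JSON bundle serialization differs in detail between drafts):

  // An illustrative Atom-flavoured JSON bundle; identifiers and content are made up.
  var bundle = {
    title: "Patient search results",
    id: "urn:uuid:6e2fa82d-example-feed-id",
    updated: "2014-12-29T10:00:00Z",
    entry: [
      {
        title: "Patient example",
        id: "http://example.org/fhir/Patient/1",
        updated: "2014-12-28T09:30:00Z",
        content: { resourceType: "Patient", name: [ { family: ["Example"] } ] }
      }
    ]
  };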

The client-side service that I have written - using Angular for modularization - is responsible for merging multiple Atom streams. Once I have a single Atom stream, data-binding takes place, so that information can be presented. In practice, this can be frustrating because Atom is intended for serialization of information, but an Atom bundle can also contain relative links between entries. This is fundamental to the way HL7 FHIR works, but not NIEM, so I have ended up creating synthetic and essentially schemaless resources as necessary. Ideally, all information could be mapped into Atom-syndicated FHIR resources. Maybe that's a good project for this year.
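
A minimal sketch of that merging service, assuming each adapter returns a bundle shaped like the example above (the module, service, and function names here are invented for illustration):

  // Illustrative Angular service that merges several Atom-style bundles
  // into a single list of entries, newest first.
  angular.module('app.services', [])
    .factory('atomMerge', function () {
      return {
        merge: function (bundles) {
          var entries = [];
          bundles.forEach(function (bundle) {
            entries = entries.concat(bundle.entry || []);
          });
          // Sort by the Atom 'updated' timestamp, most recent first.
          entries.sort(function (a, b) {
            return new Date(b.updated) - new Date(a.updated);
          });
          return entries;
        }
      };
    });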

Adaptation frameworks always run into a problem based around the decision to go lightweight or go modular. I like that Worklight has gone lightweight, but I am frustrated that I can't reuse just a little bit more code between adapters. In particular, I would really like to use a single set of XSL transforms to support multiple adapters. Perhaps there is a way to do this, but for now, I am still forcing myself to prune my adaptation code as much as possible to keep it easy to maintain. If I find myself using the full set of DocBook or DITA transforms in an adapter, it's probably time to rethink my approach.

On the whole, I have enjoyed working with Worklight adapters immensely; I don't think this would be the case if I were not also using Angular or some other JavaScript framework to support development of client-side services. I haven't particularly used the built-in Worklight support for Dojo or jQuery, but I'd go so far as to say that without some sort of hybrid framework support, you will lose much of the productivity that Worklight gives you. After a year, I have reached an understanding that I would not enjoy using a framework like Angular without a platform like Worklight, and I would not enjoy using a platform like Worklight without a framework like Angular.

Unless, of course, the platform was also a framework, which is what approaches like Meteor promise. 

  

Saturday, December 27, 2014

Some Canadian Context for HL7 FHIR


I work in Healthcare Messaging in Canada; specifically, I work in messaging in British Columbia, where we work primarily with Point of Service applications and Clinical Information Systems that generate and consume messages in the HL7 v2 pipe and caret notation, with a foundation of registries and repositories that use a version of HL7 v3 Messaging XML. More or less, this follows Canada Health Infoway's iEHR blueprint; however, under Infoway's original blueprint, we would have HL7 v3 at the Point of Service as well as in the foundation.

HL7 v2 is still used extensively in other Canadian jurisdictions; some use v2 almost exclusively. In Canada, then, we have a mix of v2 and v3, with some CDA. The United States, on the other hand, never embraced v3, creating a desperate need for a better messaging layer. FHIR will accomplish things in the U.S. that v3 could not, and that leaves Canada in a challenging position - continuing with further investment in HL7 v3 makes little sense. Like CDA before it, FHIR can be used to augment these projects; there are enough similarities between FHIR XML and v3 Messaging to make this plausible.

Ongoing CDA projects in Canada are bound to continue as such, and they will be worth paying attention to as CDA projects in the States start shifting to HL7 FHIR as an implementation standard. The recent message from Infoway is to use the appropriate standard for the work at hand, and I expect this message to percolate on both sides of the border; but what does it really imply? How do you decide? HL7 FHIR is going to be compelling for new business cases that would previously have required a document standard like CDA, as well as for low risk, local, and greenfield projects.

Worth noting are the four ways that FHIR can be used. As previously discussed, FHIR supports both Messaging and Document use cases; perhaps more importantly, it also supports both REST and Service use cases. In addition, FHIR is in many ways custom built for the security and transport requirements of mobile use cases, and contains resource definitions that will enable social use cases like circle of care and information provenance. For existing health information systems and applications, as well as new ones, FHIR creates new ways to expose, access, and share information, providing not only tools but also challenges.

Tuesday, December 23, 2014

Yosemite Project and other Chimera

In Greek mythology, the chimera was a monstrous fire-breathing hybrid nightmare composed of the parts of more than one animal: a lion with the head of a goat rising out of its back, and a tail ending in a snake's head. A nasty piece of business, eventually dispatched by Bellerophon with some assistance from Pegasus.


Chimera was also the subject of a presentation by Jeni Tennison, OBE, of the Open Data Institute and W3C TAG, at XMLPrague 2012, entitled "Collisions, Chimera and Consonance in Web Content." In this presentation, she makes a compelling argument that, on the web today, we are dealing with four different formats: HTML, XML, JSON, and RDF.

In many ways, these formats complement one another. Sometimes, they clash, creating impedance and dissonance, and sometimes they merge, forming weird and wonderful hybrids. Tennison's presentation is really quite remarkable, and well worth watching as each of these formats evolves.

As I have previously mentioned, another set of presentations, from Dataversity and SemanticWeb.com, is also worth watching and paying attention to. These deal with the Yosemite Project, ongoing work that intends to position RDF as a Universal Healthcare Exchange Language. This work is important in part because it directly addresses how to migrate and transform between formats once you have established a common representation using RDF. In many ways, this is a mythical undertaking, but also a very promising one.

For instance, with the work underway on Project Argonaut and HL7 FHIR, you are looking at a standard for healthcare that comes in two flavours, XML and JSON; however, like its predecessor HL7 CDA, FHIR also relies on a human-readable portion, which in this case means HTML5. Add to that the work underway with Yosemite (go watch the presentations!), and you have an ecosystem that supports appropriate use of HTML, XML, JSON, and RDF - the subject of Dr. Tennison's XMLPrague presentation - now in the context of healthcare. This is really what John Halamka has referred to as the "HTTP and HTML for healthcare".

If you broaden your horizons just a little, you will see some of the work also being carried out by Health & Human Services and the NIEM Health Domain as a counterpart to the work of HL7 International. NIEM is primarily an XML-based standard, but over the last couple of years its underlying tooling has been expanding into UML-, JSON-, and HTML-based representations. With the support of some underlying ontology work, perhaps in concert with Yosemite, NIEM too could be used to create linked health data. These are all very exciting, very important things happening very quickly, and it is a great time to get involved with some of these projects and initiatives.

Monday, December 15, 2014

Project Yosemite, SMART on FHIR, and the Argonauts

The Argonaut Project is a collaboration between Health industry vendors like McKesson, Epic, Meditech and so forth, along with the Mayo Clinic and Beth Israel Deaconess Medical Center, to provide the necessary resources to complete the work of the upcoming HL7 FHIR DSTU (Draft Standard for Trial Use). As Grahame Grieve elaborates on OpenHealthNews, Argonaut is aimed at three particular pieces of work:
  1. Security
  2. CCDA to FHIR Mapping
  3. FHIR Implementation Testing
This work is intended for completion by May 2015. As described, the Security piece initially involves SMART on FHIR®, a platform developed by Harvard Medical School and Boston Children's Hospital that implements open standards for healthcare data, authorization, and UI integration. For authorization, SMART uses OAuth2, a profile for which will most likely become built into the FHIR standard.
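
For reference, the heart of the OAuth2 authorization-code flow, as a client sees it, looks something like the sketch below; the parameter names come from the OAuth2 specification (RFC 6749), while the URLs, client id, and the SMART-style scope value are placeholders:

  // Minimal sketch of an OAuth2 authorization-code request (RFC 6749 parameters).
  // The URLs, client id, and scope below are placeholders.
  function buildAuthorizeUrl() {
      var state = Math.random().toString(36).slice(2);   // simplistic anti-CSRF state, for illustration
      return 'https://auth.example.org/authorize' +
          '?response_type=code' +
          '&client_id=my-mobile-app' +
          '&redirect_uri=' + encodeURIComponent('https://app.example.org/callback') +
          '&scope=' + encodeURIComponent('patient/*.read') +   // SMART-style scope, illustrative
          '&state=' + state;
  }
  // The app redirects the user to this URL; after sign-in, the authorization server
  // redirects back with a code, which the app exchanges for an access token and then
  // presents as a Bearer token on its FHIR requests.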

Josh Mandel, the lead architect behind SMART on FHIR®, also spoke recently as part of a series of five presentations on Project Yosemite, held by SemanticWeb.org and DataVersity. Project Yosemite began a year or so ago with the Yosemite Manifesto, which establishes RDF (the Resource Description Framework that underlies the Semantic Web and Linked Data) as the best candidate for a universal healthcare exchange language. Project Yosemite follows two paths, "Standards" and "Translation", based on the premise that standards adoption is of primary importance, but that there will always be a need to translate between standards, and even between versions of the same standard.

The idea here is that once you build ontological mappings of various healthcare standards into RDF representations, then Semantic mapping tools like SPINMap and TopQuadrant's TopBraid can be used to construct robust migration/translation layers. This is the first step in producing a distributed network of Linked Health providers, similar to the work currently taking place with Linked Data. At this point, the presentation recordings from DataVersity are not yet all available, but they are definitely worth watching.

HL7 FHIR provides a potential successor to several HL7 standards currently in use internationally. Migration is a critical success factor here, and Project Yosemite presents a different way to approach migration. Perhaps coincidentally, RDF and FHIR are both resource-based approaches; RSS is a syndication format that emerged from work with RDF, and FHIR uses a similar syndication format, Atom, to aggregate and compose health resources, like Patient and Observation.

Project Yosemite benefits FHIR and Project Argonaut; Argonaut accelerates the first phase of the ONC Data Access Framework (DAF) project; Project Yosemite is also involved with ICD-11. That is a lot of convergence, and the next six months will really show how much. It's a great time to get involved.

Wednesday, December 10, 2014

HL7 FHIR and Argonaut in Canada

I am Canadian, so for me, Argonauts play football, and by football, I don't mean soccer. The Argonaut Project is also the subject of a recent announcement at last week's HL7 Policy Conference in Washington, in response to the latest JASON Report. There appears to be a mythological theme emerging in Health IT, and I'm looking forward to an opportunity at some point to scream "release the KRAKEN!!!" or something similar. But not yet.

The Argonaut Project has the backing of a number of American EHR vendors, including Epic, Cerner, Meditech, McKesson, and athenahealth, with additional support from Partners HealthCare, Intermountain Healthcare, Beth Israel Deaconess, and the Mayo Clinic. The project extends the involvement these organizations already have with HL7 International, and promises to deliver implementation guides related to an emerging HL7 standard, HL7 FHIR, by May 2015.



This is a diverse group of collaborators and an aggressive timeline, but what does this mean for Health IT projects here in Canada?

Migration and Transformation

Whereas HL7 v2 uses "pipe and caret" notation, and HL7 v3 supports any wire format as long as it is XML, HL7 FHIR comes in two flavours, XML and JSON (which makes it particularly useful for mobile use cases). By design, FHIR is intended to provide a migration path for v2, v3, and CDA. This really reminds me of the intentions behind the development of XML in particular, as a sort of lingua franca for the web, and in that sense, XML has been very successful. As mentioned, for mobile and social use cases, a JSON-based standard for health information will be hugely beneficial as well.
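
To give a sense of the JSON flavour, here is a minimal Patient resource sketched from the DSTU-era drafts; element shapes have shifted between drafts, so treat it as illustrative rather than normative:

  {
    "resourceType": "Patient",
    "name": [ { "family": ["Chalmers"], "given": ["Peter"] } ],
    "birthDate": "1974-12-25"
  }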

In Canada, we have built a foundation of healthcare registries and repositories based on HL7 v3 Messaging, although the applications that are in place in Hospitals and other Health Information sources typically come from U.S. vendors including many of those mentioned above, which requires a transformation layer from v2 to v3 and back again. I'd like to imagine a world where both the foundation and the Hospital information systems can communicate using the same standard, or through an integration layer that uses a common standard. Argonaut is at the very least a step in that direction.

Documents and Messages

Here in Canada, we have built our information access layer for health around Messaging; in the U.S., Document-centric health prevails. Canadian projects may involve the HL7 Clinical Document Architecture (CDA), but these are more limited in scope than the foundational work which has been carried out involving HL7 v3 Messaging. Recent guidance from Canada Health Infoway is to use the most appropriate standard for the job at hand. In many cases, that will be v3 Messaging, simply because the work is already underway.

FHIR is quite clever in that it is based around Healthcare resources (Patients, Providers, Observations and so forth), a more granular approach than either CDA or v3 Messaging, and this is how FHIR supports both Message- and Document-based flow of information. This is crucial if your requirements are a hybrid, or if you are currently supporting one approach, but are aware that you will need to support the other. Simply put, FHIR dispels the holy war between Health Messaging and Health Documents. ("Unleash the KRAKEN!!!")

Example: Questionnaires


It goes something like this: you are tasked with creating a set of health questionnaires for a Canadian healthcare organization. Most likely, you will create PDF documents, but you might consider using CDA for a moment, because CDA provides an architecture for Clinical Documents. But that moment would pass. Now, consider this: the FHIR community has already held several connectathons involving questionnaires, and one of its members, David Hay, has already written a series of articles about extending the Questionnaire resource based on his experience.

So that's useful.

In particular, IHE (Integrating the Healthcare Enterprise) is currently developing multiple profiles using FHIR as a basis for mobile access (MHD, PDQm, RESTful PIX). With Canada Health Infoway as the home of IHE in Canada, I am hoping that we can find uses for these profiles here as well. These profiles are under development, but if the consortium behind the Argonaut Project really wants to make a difference, it can throw its support behind IHE as well.

References

HL7 International Press Release
HealthLeaders Media - Argonaut Project is a Sprint toward EHR Interoperability
OnHealthCareTechnology - JASON: The Great American Experiment
HealthcareITNews - Epic, Cerner, others join HL7 project
John Halamka - Life as a Healthcare CIO - Kindling FHIR

Thursday, August 14, 2014

Hard not to agree with this observation by Alex Howard about the newly branded U.S. Digital Service.

Given the anger, doubt and frustration prevalent in the public discourse around government IT, the only way public trust in the federal government's ability to use technology well for something other than surveillance and warfare will be through the deployment of beautiful, modern Web services that work. Jen Pahlka has explicitly connected government's technical competency to trust in this young century.
"If government is to regain the trust and faith of the public, we have to make services that work for users the norm, not the exception," she told to Government Technology, after leaving the White House. Mayors, governors and presidents are experiencing the truth of her statement around the country, from small towns to 1600 Pennsylvania Avenue.
The challenge here is to move beyond secure, mission-critical systems that work in insulated environments but fail to provide high value, and to focus on measurable outcomes, quick(er) wins, and higher value services for citizens. This is the holy grail of digitization.

AngularJS and Durandal

When I read on the Angular blog that Rob Eisenberg is working on Angular in addition to continuing revisions on the Durandal templating library, I was understandably excited. I have really enjoyed working with Angular, not just as an SPA framework but as a prescriptive, modular, and mature JavaScript framework; like many people, though, I have found custom directives frustrating. jsFiddle and similar tools provide a good way to develop and test a new directive in isolation, but still. I've experienced this on other projects, using Adobe Flex, for instance: on a project team, you have a number of developers, one of whom supports custom web components, and that works okay, but the other developers don't really understand how the components work under the hood.

I am hoping that the next evolution of AngularJS (3.0) will align much more closely with Durandal, the Web Components API, and Polymer. There is really no reason why custom Web Components cannot become standard practice for the web. And that is nothing like Angular custom directives, which are confusing, I think, because whereas in most cases Angular balances flexibility and prescription nicely, custom directives are incredibly flexible - transclude? allow the directive through attribution or class, or just elements? and so forth. Custom directives are just way too flexible; they need to be a simple API for doing one thing well, not a combinatorics problem.
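
To illustrate the combinatorics, here is a bare-bones custom directive with the main options spelled out; the directive name and template are invented, and each setting (restrict, transclude, isolate scope, link) changes how the directive can be used and how it interacts with its surroundings:

  // Illustrative Angular 1.x custom directive showing the main configuration axes.
  angular.module('app', [])
    .directive('patientCard', function () {
      return {
        restrict: 'EA',           // element or attribute (add 'C' to allow class, too)
        transclude: true,         // whether enclosed markup is pulled into the template
        scope: { patient: '=' },  // isolate scope with two-way binding to an outer model
        template: '<div><h4>{{patient.name}}</h4><div ng-transclude></div></div>',
        link: function (scope, element, attrs) {
          // DOM-level behaviour goes here when the template alone is not enough.
        }
      };
    });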

On the other hand, what Angular provides that Durandal does not is exactly that balance of prescription and flexibility. Angular tells you how to do things like module structure and model-view-whatever, and as a development project lead I appreciate that, because it makes establishing best practices and running code reviews manageable. That is why I am expecting great things from Angular, especially if the next version also results in an update to the angular-ui.bootstrap project, providing a ready-to-use library of web components.

Tuesday, July 22, 2014

Single Page Applications and AngularJS


For many years, the phrase Single Page Application (SPA) was synonymous with TiddlyWiki, a JavaScript-based wiki notable for running independently, without an application server, and for some very well written code. Aside from TiddlyWiki, SPA was an approach, not a thing.

Mature JavaScript frameworks like Backbone, Angular and Ember have changed this, embodying the notion that there is no sweet spot between pure server push and pure client: you either load an application page by page, or you load a single page and construct the application from client-side templates, routing and model-binding. jQuery can support an SPA approach but doesn't enforce it, and Adobe Flex enforces an SPA approach but requires Flash to do so.

Of course, Angular is more than just an SPA framework. Amongst the features Angular provides:

Dependency Injection - a core value of the Angular framework. DI clearly defines what a class consumes through its constructor, rather than hiding those requirements within the class, which makes Angular JavaScript more readable, easier to maintain, and easier to test, since it is clear how the connections between services are made within your application code base. The result is fewer lines of more maintainable code.
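
A small sketch of what this looks like in practice; the module, service, and endpoint names are invented for illustration:

  // The controller declares what it needs; Angular's injector supplies it.
  angular.module('app', [])
    .factory('patientService', ['$http', function ($http) {
      return {
        get: function (id) {
          return $http.get('/api/patients/' + id).then(function (res) { return res.data; });
        }
      };
    }])
    .controller('PatientCtrl', ['$scope', 'patientService', function ($scope, patientService) {
      patientService.get('123').then(function (patient) {
        $scope.patient = patient;   // easy to test: inject a fake patientService instead
      });
    }]);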
 
Templating - Angular templating consists of partial HTML views that contain Angular-specific elements and attributes (known as directives). Angular combines the template with information from the model and controller to create the dynamic view that a user sees in the browser. The result is more natural templates, based on attribution of well-formed HTML.

Two-way data-binding - allows you to work with JSON easily, particularly when that JSON is generated from standardized schematics. For example, an application might receive a JSON payload that is constrained by an XML Schema in the server-side API (the API supports both XML and JSON, and the XML complies with an industry standard). In that case, the Angular view could also be generated from the underlying XML Schema.
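
A hedged sketch of that pattern: the controller exposes a JSON payload (shaped here like a standards-derived record, purely for illustration), and the template binds to it directly, so edits in a form flow straight back into the model:

  // Controller side: the JSON payload becomes the model.
  angular.module('app', [])
    .controller('RecordCtrl', ['$scope', function ($scope) {
      // In practice this would come from the server-side API; the shape is illustrative.
      $scope.record = { givenName: 'Peter', familyName: 'Chalmers', birthDate: '1974-12-25' };
    }]);

  // Template side (HTML): <input ng-model="record.givenName"> and {{record.familyName}}
  // stay in sync with the model automatically, in both directions.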

Modular JavaScript - nothing special here; Angular allows you to separate the concerns of controllers, services, filters, and directives into separate modules. Encapsulation makes these components easier to maintain and easier to test, for instance by a team with multiple members.

Controllers and URL Routing - aside from Dependency Injection, Angular's MVVM pattern is the big win here, and routing is just something you need to get used to. JavaScript was originally the glue code of the web, and once you have your application sufficiently modularized, you will find that your Angular controllers retain this stickiness; as you build reusable services, your controllers remain lightweight. If you have any business or maintenance logic in your controllers, it is time to refactor and create services. Controllers and routing may not be reusable; services and views will be.
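
A minimal routing configuration using the ngRoute module; the paths, partials, and controller names are placeholders, and the routes stay thin because the real work lives in services:

  // Illustrative ngRoute configuration: each route pairs a partial view with a controller.
  angular.module('app', ['ngRoute'])
    .config(['$routeProvider', function ($routeProvider) {
      $routeProvider
        .when('/patients', { templateUrl: 'partials/patient-list.html', controller: 'PatientListCtrl' })
        .when('/patients/:id', { templateUrl: 'partials/patient-detail.html', controller: 'PatientDetailCtrl' })
        .otherwise({ redirectTo: '/patients' });
    }]);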
 

Multi-level data scoping - scope is confusing in JavaScript because of the way global scope and declaration hoisting work. Angular simplifies passing scope into a controller or service, and offers a $rootScope object that replaces the global namespace. Further, events can be associated with scope at various levels. Data binding, the event model, and service invocation all use the same scope mechanism.
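
For example, a child controller can publish an event up the scope hierarchy and any enclosing scope can listen, without either one touching the global namespace; the event and controller names below are illustrative:

  // $emit sends an event up the scope hierarchy, $broadcast sends it down,
  // and $rootScope stands in for what would otherwise be a global.
  angular.module('app', [])
    .controller('ChildCtrl', ['$scope', function ($scope) {
      $scope.select = function (patient) {
        $scope.$emit('patientSelected', patient);   // illustrative event name
      };
    }])
    .controller('ParentCtrl', ['$scope', function ($scope) {
      $scope.$on('patientSelected', function (event, patient) {
        $scope.current = patient;
      });
    }]);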

Responsive Design - Bootstrap is a Responsive Web Design library built in CSS and JavaScript. The JavaScript portion of Bootstrap has been ported to Angular directives as part of the AngularUI extension, which fits nicely within the Angular directive paradigm. Directives are fundamental to how Angular works. Using the Bootstrap directives removes some of the need to develop custom directives (custom behaviors and HTML tags). [http://angular-ui.github.io/bootstrap/]

Web Components - with the upcoming partial merging of the Angular framework with the Durandal presentation framework, Angular should move one step closer to supporting the Web Component API, which aligns with the intent behind Angular custom directives, and will bring these more in line with projects like Polymer. By using a common API, these UI libraries become more transportable. 

Monday, July 21, 2014

Back to Basics: Rhizome

I started the Rhizome reference implementation a year ago as a way of demonstrating how a combination of client-side services, constructed using Angular and Cordova, and server-side adaptation and integration, constructed using Worklight, could be used to build a mobile health app for the enterprise. The pieces are there, and I have come to the conclusion that the server-side integration, while important, should really just be built into the application server that hosts the server-side API. If the server-side API is built to an industry standard like NIEM or HL7, then the burden of integration is lightened, and maybe it can take place within a resource-based suite of client-side services.

The greatest illumination for me came when I stopped trying to build the server back end first, with a client app extending it, and instead focused on a client app with an HL7 FHIR standardized interface. Do I have to do a lot of adaptation on the server? That depends on the data source, but... in an ideal world, the data source has low impedance and is already FHIR JSON. In that case, an Angular app built around the core FHIR resources just works.
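
A sketch of what "just works" means here: an Angular service that reads FHIR resources over plain REST, where GET [base]/Patient/[id] is the standard FHIR read interaction and the base URL is a placeholder:

  // Illustrative FHIR read: the client speaks REST and JSON directly to the server-side API.
  angular.module('app', [])
    .factory('fhirClient', ['$http', function ($http) {
      var base = 'https://fhir.example.org/base';   // placeholder endpoint
      return {
        read: function (type, id) {
          return $http.get(base + '/' + type + '/' + id)
            .then(function (res) { return res.data; });
        }
      };
    }]);

  // Usage: fhirClient.read('Patient', '123').then(function (patient) { /* bind to scope */ });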

So I'm taking my reference implementation in a slightly different direction: less coupled to an enterprise mobility platform, more reliant on a strong client-side architecture that is resource-based and standardized for the health industry, leveraging profiles from organizations like IHE and HL7 where possible, and probably with a more specific focus on care plans and questionnaires, without losing sight of prescription medications.

I'm also going to try posting more frequently, for a variety of reasons, so please feel free to comment. I have really enjoyed working with AngularJS over the last year, and I know I'm not alone in this.

Saturday, July 19, 2014

Tracking the convergence of NIEM and HL7

Every few years, someone asks "could you implement HL7 with NIEM?" or vice versa. Well, with enough resources you can accomplish anything, but what I want to do here is consider how the two standards are converging, how they are divergent, and why. NIEM has had a Health Domain for several years, evolving under the auspices of HHS - you wouldn't know it from the LinkedIn group.

The two communities could really benefit from a shared understanding that, to save money on implementation and stakeholder engagement, they need tools that make it easy to visually review and alter exchange packages (IEPDs, FHIR conformance profiles) in order to reach absolute consensus, and then to generate terse and completely accurate validation packages and conformance suites, so as to increase ongoing information safety. We need to be able to put all of the important details on one page.


NIEM and HL7 are both messaging models based on an underlying information model, and whereas HL7 is moving away from design by constraint towards design by extension, NIEM has always relied upon an extension mechanism. The difference here comes down to the size of the NIEM problem space ("everything"), as opposed to HL7 ("healthcare"), for which you might be able to imagine a totalizing framework that encompasses all workflow in all contexts; however, for HL7 as well, a workable extension mechanism is proving to be essential to success, and this is a change from the paradigm established with HL7v3.

NIEM and HL7 are both moving towards support for multiple wire formats. In domestic U.S. markets, HL7 means either "pipe and caret" v2 or "quasi-XML-HTML" hybrid CDA, but internationally, HL7 is an XML standard which is outgrowing the business cases for XML, much like NIEM. For both of these standards to grow and implement future business cases, they will need to also embrace and support JSON, HTML, and RDF, and given time they will.

HL7 is moving away from a proprietary tooling set towards tooling that is readily accessible, like Excel, Java, and XML editors. NIEM already uses a similar toolset, and has several initiatives in play to support open tooling like the CAM Editor and UML tooling. One of the difficulties we have run into with HL7 v3 is sharing visual models, since these are captured in proprietary tooling, and it is here that the NIEM and HL7 communities would both benefit from demanding better tooling. Put simply, shouldn't these two standards support, and be supported by, a common toolset that extends beyond XMLSpy or Oxygen? Given time, I'm sure they will.

This is something I feel strongly about. At their core, NIEM and the HL7 RIM rely on XML Schemas, and yet XML Schemas are not sufficient to the task. In the HL7 world, as far as v3 Messaging and CDA are concerned, ISO Schematron fills this gap. For NIEM, OASIS CAM performs a similar task; but it is a disservice to both CAM and Schematron to treat them only as validation tools, when in fact they capture key pieces of the business. The same is true of UML - these should be the tools we use to visually communicate the business to the business.

Some of the tools will be open source, some of them will come from the product world. If the NIEM and HL7 communities articulate their needs, the tool vendors will follow. In short, HL7 and NIEM are both going to need to converge on a set of XML-based tooling that goes beyond XML Schemas and Visio diagrams. The CAM tooling provides some of this. The Excel-based Resource Profiling in FHIR provides some of this. UML tooling provides some of this.

To reduce the burden of approval for stakeholders, both messaging standards need to allow modelers, implementers, and business stakeholders to meet in a room and review the details of a proposed information exchange on a single page; this is where the high value lies. When that happens, information safety increases, because the XML Schemas and documentation produced after such a meeting will be simpler, more accurate representations of the business.
  

Thursday, July 10, 2014

Converting NIEM XML to HTML5

Currently with Open-XDX, you can persist and retrieve XML information based on a NIEM IEPD. You can also expose information as NIEM JSON through a transformation library, which is useful for building Web and Mobile Web applications using 4th generation client-side frameworks like Angular, Ember and Polymer, as well as older frameworks like jQuery and Dojo.

There are four main information formats used on the World Wide Web:
  1. HTML is ideal for documentation, tables, and open data, because it is easy to publish and forgiving. HTML is fundamental to REST as a way of exposing endpoint documentation.
  2. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.
  3. The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web.
  4. Extensible Markup Language (XML) is a simple, flexible text format derived from SGML (ISO 8879), designed to meet the challenges of large-scale electronic publishing; it also plays an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere.
In addition, several other XML formats are commonly used for document collation and syndication:
  1. DITA (Darwin Information Typing Architecture) and DocBook are used to assemble documentation out of markup. These will probably both be eventually supplanted by HTML5.
  2. Atom and RSS are XML-based syndication formats. JSON-based syndication formats have also been described, although these are less mature.
(This discussion sort of refers back to Jeni Tennison's XML Prague keynote on "chimera", in which she discusses the different formats, and the way they, for instance, handle links and URIs differently.)

NIEM currently supports XML-based and JSON-based business cases as a way of quickly and rigorously exposing data for exchange and migration. The NIEM JSON flavour also supports web and mobile web applications, using the frameworks mentioned above and their like. The quickest way to expose NIEM information, however, is using the HTML information format (most likely HTML5, which is more semantically rich than previous versions).

Basic rules for converting NIEM XML into NIEM HTML:
  1. Create one HTML element per XML element, with the exception of lists.
  2. For node elements, use div.
  3. For leaf elements, use span.
  4. Where makeRepeatable, use ol and li, containing either div or span elements as per above.
  5. For any element, class attribution represents the datatype (like "string" or "date").
  6. For any element, id attribution represents the XML element name, including the namespace prefix (like "ncPersonName").
Based on these rules, an XSLT transform can be generated from the OASIS CAM schema representation in the NIEM IEPD, possibly directly from the CAM tooling. This transform can then be applied to the XML exposed by Open-XDX, allowing the information to be quickly exposed as HTML for read-only use, and for add/update using forms (XForms? HTML5 forms? A hybrid using JavaScript?).
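
As a rough, hand-rolled illustration of those rules (rather than the generated XSLT), the browser-side JavaScript below walks an XML fragment and emits the div/span structure; the datatype lookup for rule 5 is stubbed out, since the instance alone does not carry it, and the makeRepeatable list handling from rule 4 is omitted for brevity:

  // Illustrative conversion of a NIEM XML fragment into the div/span structure above.
  // Rule 5 (class = datatype) needs schema knowledge, so lookupDatatype is a stub.
  function lookupDatatype(elementName) {
      return '';   // in practice, derived from the IEPD / CAM template
  }

  function toHtml(xmlElement, doc) {
      var children = Array.prototype.filter.call(xmlElement.childNodes, function (node) {
          return node.nodeType === 1;   // keep element nodes only
      });
      var tag = children.length ? 'div' : 'span';        // node element vs leaf element
      var html = doc.createElement(tag);
      html.id = xmlElement.nodeName.replace(':', '');    // e.g. nc:PersonName -> ncPersonName
      html.className = lookupDatatype(xmlElement.nodeName);
      if (children.length) {
          children.forEach(function (child) { html.appendChild(toHtml(child, doc)); });
      } else {
          html.textContent = xmlElement.textContent;
      }
      return html;
  }

  // Usage sketch:
  //   var xml = new DOMParser().parseFromString(niemXmlString, 'application/xml');
  //   document.body.appendChild(toHtml(xml.documentElement, document));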

In the same way that a full HTML page can be created from NIEM information, it should also be possible to generate partial or natural templates - in essence, just a fragment of HTML. This may be required to support platforms like Java-Spring-Thymeleaf, Oracle ADF, or Meteor, which all rely on some sort of direction through attribution. The simplest way to expose information is still to create the entire HTML page, instead of a partial. I note this here because whenever NIEM JSON is used, there will likely be a requirement to generate a template from the NIEM CAM as well.

Note that NIEM is not currently resource-based; there is no inbuilt facility to support REST by exposing resource identifiers. However, one of the requirements for REST is to expose documentation at the endpoint, and it should be possible to generate this documentation directly from the IEPD (I think Datypic generates something like this for NIEM Core). In this case, the IEPD may be sufficient.

Working with multiple standards for Health

I've been doing some reconsidering over the summer about the existing profiles and use of health standards in Canada - standards like HL7 v2, v3 and CDA. Primarily, v2 is used for ADT (Admission, Discharge and Transfer), but that's a huge chunk of workflow. The reason HL7 v2 is used so pervasively in Canada is that the clinical systems in use in hospitals (Cerner, Epic, Meditech, etc.) use v2. At the time the current blueprint for electronic health record adoption was planned, there was an expectation that the U.S. would undergo a migration to HL7 v3. That never happened, and it has left us with a foundation built around HL7 v3 Messaging and its core principles, with a thick transformation layer between this foundation and the clinical systems in hospitals.

Obviously, this creates a space of impedance mismatch where continuity of service is put at risk. As a way of mitigating this risk, v3 Messaging is augmented with a companion specification, CDA, the Clinical Document Architecture, which promises to support health documents like Continuity of Care, Health Questionnaires and Care Plans, as well as business cases that use CDA to handle data in migration. Again, in the U.S., HL7 CDA has been used as an alternative to v3 Messaging to support exchange of health information, and in Canada we may benefit from following that path; but if we do, we should be aware that this path is probably morphing as we speak into something called "C-CDA using HL7 FHIR XML".

As discussed here and elsewhere, FHIR is a successor to all three HL7 standards, providing support for JSON and REST that has not previously been available, as well as the ability to essentially re-implement CDA using a similar XML standard. FHIR has a lot of potential, in Canada and abroad, to enable mobile health applications, but in order to design and build these applications, we need to reconsider the iEHR architecture on which we are currently building.

To that end, I have a number of suggestions:
  1. Foster communication between systems using like standards: for instance, we have invested substantially in communicating clinical information between clinical systems in hospitals and the foundation layer of Labs, Pharmacies and Diagnostic Imaging; but can we find quick wins through improved intercommunication amongst the domains in the foundation, or between the enterprise systems that use v2 natively?
  2. Create an adaptation layer supporting lightweight secure access: this is where FHIR may play a part, used to expose high value information across the enterprise. The danger in providing an incomplete picture is that people will take it for a complete picture; because FHIR is rooted in extension, composition and aggregation, it may provide a way to build a fuller picture of longitudinal patient information.
  3. Registries like Provider, Client and Location should provide more comprehensive Identity Assurance; again, this really means removing continuity gaps within the services available to a patient, thus providing the history of interactions which is a necessary part of guaranteeing identity.
  4. Create an application layer that supports developing mobile and web applications that can connect directly to the resources exposed in step 2.
This last step is what I have been reconsidering. As a document standard, CDA has had some success as a technology for mobile health in the U.S., under the auspices of Meaningful Use; however, a mobile health application built to natively use HL7 FHIR JSON and XML, even during the early adoption phase, would still be a solid target for reuse, and it is this capability for reuse that I find compelling, more than using HL7 FHIR to build new registries, repositories, or even greenfield projects. Simply put, if you build a health app using HL7 FHIR as an interface specification, you may still need to perform server side adaptation, but more than likely you would anyway. The benefit from doing so is that you are constructing a stationary target, and that is invaluable. 

Wednesday, April 02, 2014

HIMSS14 and the Culture of Interoperability



An interesting follow-up piece to HIMSS14 from Deloitte in the WSJ, and a few thoughts on human and machine interoperability. The message:

Interoperability standards are often overemphasized in discussions of data sharing, and it is important to understand that standards for interoperability already exist and can be implemented. What is critical is shedding the light on building interoperability into vendor design of medical products versus just building standards. Enlisting provider buy-in is one way of supporting this goal.
 
We have the standards, now we need to apply them. For modern interoperability standards like HL7 CDA, for instance, the barrier to entry is incredibly low – you can wrap an existing PDF in an appropriate header, which is essentially boilerplate, and you have satisfied the lowest level of CDA compliance. What is important to understand here is that “clinical interoperability” does not have to imply “machine-readable” except in the broad sense of syntactically capable of exchange between two systems or components. The lowest level of interoperability is exchange of human-readable content within a structure that can later be extended to support machine-readable coding. 

From the HL7 Standards blog:

The primary characteristic of a CDA document is that it must be readable by humans. The CDA specification states that, “human readability guarantees that a receiver of a CDA document can algorithmically display the clinical content of the note on a standard Web browser.” This requirement means that a clinician or patient can take a CDA document and display it on any computer with a web browser without the need to load any additional application.

The real work in interoperability, as we know, is in rationalizing and aligning code-sets. That's a governance issue. Exposing human-readable content in a structured fashion is important, as described above. But is it possible to access a system's supported vocabulary and conformance profile using a standard Web browser? Maybe that would be useful as well. Incidentally, that's one of the ways FHIR goes beyond CDA. Clinical interoperability is about exchanging a specific type of information - for instance, exchanging clinical information about a patient that allows an exchange partner to leverage what we already know about them. One of the things we should be able to exchange is a conformance profile that defines how such an exchange can take place.

This is at the heart of an ongoing debate in Canada about the future of both CDA and SNOMED-CT. We have existing standards and terminology sets, so aren't these adequate to the task? What can we learn from this debate about what factors contribute to the success or failure of clinical interoperability projects? How can we reduce complexity, while increasing availability of information and metadata?