Tuesday, July 22, 2014

Single Page Applications and AngularJS


For many years, the phrase Single Page Application (SPA) was synonymous with TiddlyWiki, a JavaScript-based wiki notable for running independently without an application server, and for some very well written code. Aside from TiddlyWiki, SPA was an approach, not a thing.

Mature JavaScript frameworks like Backbone, Angular and Ember have changed this, embodying the notion that there is no sweet spot between pure server push and a pure client: you either load an application page by page, or you load a single page and construct the application from client-side templates, routing and model-binding. jQuery can support an SPA approach but doesn't enforce it, and Adobe Flex enforces an SPA approach but requires Flash to do so.

Of course, Angular is more than just an SPA framework. Amongst the features Angular provides:

Dependency Injection - a core value of the Angular framework. DI makes a component declare what it consumes through its constructor, rather than hiding those requirements inside the component. Because it is clear how connections between services are made within your application code base, Angular JavaScript becomes more readable, easier to maintain, and easier to test, usually in fewer lines of code.
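As a minimal sketch (the service and controller names here are hypothetical, as is the URL), a service and the controller that consumes it might look like this:

  // a hypothetical service registered with the injector
  angular.module('diDemo', [])
    .factory('reportService', ['$http', function ($http) {
      return {
        // fetch a report by id; the URL is an assumption for illustration
        get: function (id) {
          return $http.get('/api/reports/' + id);
        }
      };
    }])
    // the controller declares what it consumes in its constructor function;
    // Angular's injector supplies the instances at runtime, which also makes
    // it trivial to substitute a mock reportService in a unit test
    .controller('ReportCtrl', ['$scope', 'reportService', function ($scope, reportService) {
      reportService.get(42).then(function (response) {
        $scope.report = response.data;
      });
    }]);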
 
Templating - Angular templating consists of partial HTML views that contain Angular-specific elements and attributes (known as directives). Angular combines the template with information from the model and controller to create the dynamic view that a user sees in the browser. The result is more natural templates, based on attributed, well-formed HTML.
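As a small illustration (the directive and model names are invented for this sketch), a custom directive pairs a template of attributed, well-formed HTML with the model it renders:

  // a hypothetical element directive with an inline template
  angular.module('templateDemo', [])
    .directive('patientCard', function () {
      return {
        restrict: 'E',            // usable as a custom element: <patient-card>
        scope: { patient: '=' },  // bind a patient object in from the parent scope
        template:
          '<div class="patient-card">' +
          '  <h3>{{patient.name}}</h3>' +
          '  <span ng-repeat="med in patient.medications">{{med.label}} </span>' +
          '</div>'
      };
    });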

Two-way data-binding - allows you to work with JSON easily, particularly when that JSON is generated from standardized schemas. An example would be an application that receives a JSON payload constrained by an XML Schema in the server-side API (the API supports both XML and JSON, and the XML complies with an industry standard). In this case, the Angular view could also be generated from the underlying XML Schema.
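A minimal sketch of the binding itself (model names are invented; the HTML lives in the partial, shown here in comments):

  angular.module('bindingDemo', [])
    .controller('NameCtrl', ['$scope', function ($scope) {
      $scope.person = { givenName: 'Jane' };   // hypothetical model object
    }]);
  // In the corresponding partial:
  //   <input type="text" ng-model="person.givenName">
  //   <p>Hello, {{person.givenName}}</p>
  // Typing in the input updates person.givenName, and changing the model in
  // code updates the rendered text - no manual DOM wiring in either direction.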

Modular JavaScript - nothing special here: Angular allows you to separate the concerns of controllers, services, filters, and directives into separate modules. Encapsulation makes these components easier to maintain and easier to test, for instance by a team with multiple members.
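One possible way to cut an application into modules (the module names are illustrative, not prescriptive):

  // separate modules for separate concerns
  angular.module('app.services', []);
  angular.module('app.filters', []);
  angular.module('app.directives', []);
  angular.module('app.controllers', ['app.services']);

  // the top-level module simply aggregates the others
  angular.module('app', [
    'app.services',
    'app.filters',
    'app.directives',
    'app.controllers'
  ]);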

Controllers and URL Routing - aside from Dependency Injection, Angular's MVVM pattern is the big win here, and routing is just something you need to get used to. JavaScript began as the glue code of the web, and your Angular controllers retain this stickiness; but once your application is sufficiently modularized and you build reusable services, your controllers remain lightweight. If you have any business or maintenance logic in your controllers, it is time to refactor and create services. Controllers and routing may not be reusable; services and views will be.
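A sketch of what this looks like with ngRoute (the route paths, template names and patientService are all assumptions for illustration); note how thin the controller stays:

  angular.module('routingDemo', ['ngRoute'])
    // the reusable piece: a service that owns the data access
    .factory('patientService', ['$http', function ($http) {
      return {
        get: function (id) {
          return $http.get('/api/patients/' + id).then(function (r) { return r.data; });
        }
      };
    }])
    .config(['$routeProvider', function ($routeProvider) {
      $routeProvider
        .when('/patients/:id', { templateUrl: 'partials/patient-detail.html', controller: 'PatientDetailCtrl' })
        .otherwise({ redirectTo: '/patients' });
    }])
    // the controller is just glue between the route and the service
    .controller('PatientDetailCtrl', ['$scope', '$routeParams', 'patientService',
      function ($scope, $routeParams, patientService) {
        patientService.get($routeParams.id).then(function (patient) {
          $scope.patient = patient;
        });
      }]);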
 

Multi-level data scoping - scope is confusing in JavaScript because of the way global scope and declaration hoisting work. Angular simplifies passing scope into a controller or service, and offers a $rootScope object that replaces the global namespace. Further, events can be associated with scope at various levels. Data binding, the event model, and service invocation all use the same scope mechanism.
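A short sketch of scope levels and scope events (controller and event names are invented, and ChildCtrl is assumed to be nested under ParentCtrl in the markup):

  angular.module('scopeDemo', [])
    .controller('ParentCtrl', ['$scope', '$rootScope', function ($scope, $rootScope) {
      $scope.title = 'Visible to this controller and its children';
      $rootScope.appName = 'Rhizome';   // app-wide value instead of a global variable

      // events can be associated with scope at various levels
      $scope.$on('patient:selected', function (event, patient) {
        $scope.selected = patient;
      });
    }])
    .controller('ChildCtrl', ['$scope', function ($scope) {
      // $emit bubbles the event up the scope hierarchy to ParentCtrl
      $scope.notify = function (patient) {
        $scope.$emit('patient:selected', patient);
      };
    }]);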

Responsive Design - Bootstrap is a Responsive Web Design library built in CSS and JavaScript. The JavaScript portion of Bootstrap has been ported to Angular directives as part of the AngularUI extension, which fits nicely within the Angular directive paradigm. Directives are fundamental to how Angular works. Using the Bootstrap directives removes some of the need to develop custom directives (custom behaviors and HTML tags). [http://angular-ui.github.io/bootstrap/]
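As a quick sketch (directive names changed across library versions, so treat this as illustrative), pulling in the ported Bootstrap components is just a module dependency:

  angular.module('uiDemo', ['ui.bootstrap'])
    .controller('AlertCtrl', ['$scope', function ($scope) {
      $scope.alerts = [{ type: 'warning', msg: 'Example alert' }];
      $scope.closeAlert = function (index) {
        $scope.alerts.splice(index, 1);
      };
    }]);
  // In the partial, the Bootstrap alert is used as a directive rather than
  // hand-rolled markup plus jQuery, e.g.:
  //   <alert ng-repeat="a in alerts" type="{{a.type}}" close="closeAlert($index)">{{a.msg}}</alert>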

Web Components - with the upcoming partial merging of the Angular framework with the Durandal presentation framework, Angular should move one step closer to supporting the Web Component API, which aligns with the intent behind Angular custom directives, and will bring these more in line with projects like Polymer. By using a common API, these UI libraries become more transportable. 

Monday, July 21, 2014

Back to Basics: Rhizome

I started the Rhizome reference implementation a year ago as a way of demonstrating how a combination of client-side services, constructed using Angular and Cordova, and server-side adaptation and integration, constructed using Worklight, could be used to build a mobile health app for the enterprise. The pieces are there, and I have come to the conclusion that the server-side integration, while important, should really just be built into the application server which hosts the server-side API. If the server-side API is built to an industry standard like NIEM or HL7, then the burden of integration is lightened, and maybe it could take place within a resource-based suite of client-side services.

The greatest illumination for me came when I stopped trying to build the server back end with a client app extending it, and instead focused on a client app with an HL7 FHIR standardized interface. Do I have to do a lot of adaptation on the server? That depends on the data source. In an ideal world, the data source has low impedance and is already FHIR JSON; in that case, an Angular app built around the core FHIR resources just works.
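As a rough sketch of what "just works" means here (the base URL, resource id and media type negotiation are assumptions that vary by FHIR server and version), the client-side service is little more than a thin wrapper over the FHIR REST interface:

  angular.module('fhirDemo', [])
    .factory('fhirPatientService', ['$http', function ($http) {
      var baseUrl = 'https://example.org/fhir';   // hypothetical FHIR endpoint
      return {
        get: function (id) {
          return $http.get(baseUrl + '/Patient/' + id, {
            headers: { 'Accept': 'application/json+fhir' }   // FHIR JSON media type of the day
          }).then(function (response) {
            return response.data;   // the Patient resource, already a JavaScript object
          });
        }
      };
    }]);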

So I'm taking my reference implementation in a slightly different direction: less coupled to an enterprise mobility platform, and more reliant on a strong client-side architecture that is resource-based and standardized for the health industry, leveraging profiles from organizations like IHE and HL7 where possible. It will probably have a more specific focus on care plans and questionnaires, without losing sight of prescription medications.

I'm also going to try posting more frequently, for a variety of reasons, so please feel free to comment. I have really enjoyed working with AngularJS over the last year, and I know I'm not alone in this.

Saturday, July 19, 2014

Tracking the convergence of NIEM and HL7

Every few years, someone asks "could you implement HL7 with NIEM" or vice versa; well, with enough resources you can accomplish anything, but what I want to do here is consider how the two standards are converging, where they diverge, and why. NIEM has had a Health Domain for several years, evolving under the auspices of HHS. You wouldn't know it from the LinkedIn group.

The two communities could really benefit from a shared understanding: to save money on implementation and stakeholder engagement, they need tools that make it easy to visually review and alter exchange packages (IEPDs, FHIR conformance profiles) until absolute consensus is reached, and then generate terse, completely accurate validation packages and conformance suites so as to increase ongoing information safety. We need to be able to put all of the important details on one page.


NIEM and HL7 are both messaging models based on an underlying information model, and whereas HL7 is moving away from design by constraint towards design by extension, NIEM has always relied upon an extension mechanism. The difference comes down to the size of the problem space: NIEM covers "everything", whereas HL7 covers "healthcare", for which you might be able to imagine a totalizing framework that encompasses all workflow in all contexts. For HL7 too, however, a workable extension mechanism is proving essential to success, and that is a change from the paradigm established with HL7 v3.

NIEM and HL7 are both moving towards support for multiple wire formats. In domestic U.S. markets, HL7 means either "pipe and caret" v2 or "quasi-XML-HTML" hybrid CDA, but internationally, HL7 is an XML standard which is outgrowing the business cases for XML, much like NIEM. For both of these standards to grow and implement future business cases, they will need to also embrace and support JSON, HTML, and RDF, and given time they will.

HL7 is moving away from a proprietary tooling set towards tooling which is readily accessible, like Excel, Java, and XML editors. NIEM already uses a similar toolset, and has several initiatives in play to support open tooling like the CAM Editor and UML tools. One of the difficulties we have run into with HL7 v3 is sharing visual models, since these are captured in proprietary tooling, and it is here that the NIEM and HL7 communities would both benefit from demanding better tools. Put simply, shouldn't these two standards support and be supported by a common toolset which extends beyond XMLSpy or Oxygen? Given time, I'm sure they will.

This is something I feel strongly about. At their core, NIEM and the HL7 RIM rely on XML Schemas, and yet XML Schemas are not sufficient to the task. In the HL7 world, as far as v3 Messaging and CDA are concerned, ISO Schematron fills this gap. For NIEM, OASIS CAM performs a similar task; but there is a disservice to both CAM and Schematron in treating them only as validation tools, when in fact they capture key pieces of the business. The same is true of UML - these should be the tools we use to visually communicate the business to the business.

Some of the tools will be open source, some of them will come from the product world. If the NIEM and HL7 communities articulate their needs, the tool vendors will follow. In short, HL7 and NIEM are both going to need to converge on a set of XML-based tooling that goes beyond XML Schemas and Visio diagrams. The CAM tooling provides some of this. The Excel-based Resource Profiling in FHIR provides some of this. UML tooling provides some of this.

To reduce the burden of approval for stakeholders, both messaging standards need to allow modelers, implementers, and business stakeholders to meet in a room and review the details of a proposed information exchange on a single page. When that happens, information safety increases, because the XML Schemas and documentation produced after the meeting will be simpler, more accurate representations of the business.
  

Thursday, July 10, 2014

Converting NIEM XML to HTML5

Currently with Open-XDX, you can persist and retrieve XML information based on a NIEM IEPD. You can also expose information using NIEM JSON through a transformation library, which is useful for building Web and Mobile Web applications using 4th generation client-side frameworks like Angular, Ember and Polymer, as well as older frameworks like jQuery and Dojo.

There are four main information formats used on the World Wide Web:
  1. HTML is ideal for documentation, tables, and open data, because it is easy to publish and forgiving. HTML is fundamental to REST as a way of exposing endpoint documentation.
  2. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.
  3. The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web.
  4. Extensible Markup Language (XML) is a simple, flexible text format derived from SGML (ISO 8879), designed to meet the challenges of large-scale electronic publishing, and plays an increasingly important role in the exchange of a wide variety of data throughout the Web.
In addition, several other XML formats are commonly used for document collation and syndication:
  1. DITA (Darwin Information Typing Architecture) and DocBook are used to assemble documentation out of markup. These will probably both be eventually supplanted by HTML5.
  2. Atom and RSS are XML-based syndication formats. JSON-based syndication formats have also been described, although these are less mature.
(This discussion sort of refers back to Jeni Tennison's XML Prague keynote on "chimera", in which she discusses the different formats, and the way they, for instance, handle links and URIs differently.)

NIEM currently supports XML-based and JSON-based business cases as a way of quickly and rigorously exposing data for exchange and migration. In addition, the NIEM JSON flavor also supports web and mobile web applications, using the client-side frameworks mentioned above and their like. The quickest way to expose NIEM information, however, is using the HTML information format (most likely HTML5, which is more semantically rich than previous versions).

Basic rules for converting NIEM XML into NIEM HTML:
  1. Create one HTML element per XML element, with the exception of lists.
  2. For node (container) elements, use div.
  3. For leaf elements, use span.
  4. Where an element is makeRepeatable, use ol and li, containing either div or span elements as above.
  5. For any element, the class attribute represents the datatype (like "string" or "date").
  6. For any element, the id attribute represents the XML element name, including the namespace prefix (like "ncPersonName").
Based on these rules, an XSLT transform can be generated from the OASIS CAM representation of the NIEM IEPD, potentially directly from the CAM tooling. This transform can then be applied to the XML exposed by Open-XDX, allowing the information to be quickly exposed as HTML for read-only use, and for add/update using forms (XForms? HTML5 forms? A hybrid using JavaScript?).
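To make the mapping concrete, here is a rough JavaScript sketch of the rules above (in practice this would be the generated XSLT; lookupDatatype is a stub standing in for the datatype information that would come from the IEPD/CAM schema, and the ol/li handling for repeatable elements is omitted for brevity):

  function lookupDatatype(xmlNode) {
    // stub: the real datatype ("string", "date", ...) comes from the schema
    return 'string';
  }

  function niemToHtml(xmlNode, doc) {
    var children = Array.prototype.filter.call(xmlNode.childNodes, function (n) {
      return n.nodeType === 1;   // element nodes only
    });
    var isLeaf = children.length === 0;
    var el = doc.createElement(isLeaf ? 'span' : 'div');   // rules 2 and 3
    el.id = xmlNode.nodeName.replace(':', '');             // rule 6: "nc:PersonName" -> "ncPersonName"
    el.className = lookupDatatype(xmlNode);                // rule 5
    if (isLeaf) {
      el.textContent = xmlNode.textContent;                // leaf value becomes readable text
    } else {
      children.forEach(function (child) {
        el.appendChild(niemToHtml(child, doc));            // rule 1: one HTML element per XML element
      });
    }
    return el;
  }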

In the same way that a full HTML page can be created from NIEM information, it should also be possible to generate a partial or natural template - in essence, just a fragment of HTML. This may be required to support platforms like Java-Spring-Thymeleaf, Oracle ADF, or Meteor, which all rely on some sort of direction through attribution. The simplest way to expose information is still to create the entire HTML page rather than a partial. I note this here because wherever NIEM JSON is used, there will likely also be a requirement to generate a template from the NIEM CAM.

Note that NIEM is not currently resource-based; there is no built-in facility to support REST by exposing resource identifiers. However, one of the requirements for REST is to expose documentation at the endpoint, and it should be possible to generate this documentation directly from the IEPD (I think Datypic generates something like this for the NIEM Core). In this case, the IEPD may be sufficient.

Working with multiple standards for Health

I've been doing some reconsidering over the summer about the existing profiles and use of health standards in Canada, standards like HL7 v2, v3 and CDA. Primarily, v2 is used for ADT (Admission, Discharge and Transfer), but that's a huge chunk of workflow. The reason HL7 v2 is used so pervasively in Canada is that the clinical systems in use in hospitals (Cerner, Epic, Meditech, etc.) use v2. At the time the current blueprint for electronic health record adoption was planned, there was an expectation that the U.S. would migrate to HL7 v3. That never happened, and it has left us with a foundation built around HL7 v3 Messaging and its core principles, with a thick transformation layer between this foundation and the clinical systems in hospitals.

Obviously, this creates a space of impedance mismatch where continuity of service is put at risk. As a way of mitigating this risk, v3 Messaging is augmented with a companion specification, CDA, the Clinical Document Architecture, which promises to support health documents like Continuity of Care, Health Questionnaires and Care Plans, as well as business cases that use CDA to handle data in migration. Again, in the U.S., HL7 CDA has been used as an alternative to v3 Messaging to support exchange of health information, and in Canada we may benefit from following that path, but if we do, we should be aware that this path is probably morphing as we speak into a thing called "C-CDA using HL7 FHIR XML".

As discussed here and elsewhere, FHIR is a successor standard to all three HL7 standards, providing support for JSON and REST, which were not previously available, as well as the ability to essentially re-implement CDA using a similar XML standard. FHIR has a lot of potential, in Canada and abroad, to enable mobile health applications, but in order to design and build these applications, we need to reconsider the iEHR architecture on which we are currently building.

To that end, I have a number of suggestions:
  1. Foster communication between systems using like standards: for instance, we have invested substantially in communicating clinical information between clinical systems in hospitals and the foundation layer of Labs, Pharmacies and Diagnostic Imaging; but can we find quick wins through improved intercommunication amongst the domains in the foundation, or between the enterprise systems that use v2 natively?
  2. Create an adaptation layer supporting lightweight secure access: this is where FHIR may play a part, used to expose high value information across the enterprise. The danger in providing an incomplete picture is that people will take it for a complete picture; because FHIR is rooted in extension, composition and aggregation, it may provide a way to build a fuller picture of longitudinal patient information.
  3. Registries like Provider, Client and Location should provide more comprehensive Identity Assurance; again, this really means removing continuity gaps within the services available to a patient, thus providing the history of interactions which is a necessary part of guaranteeing identity.
  4. Create an application layer that supports developing mobile and web applications that can connect directly to the resources exposed in step 2.
This last step is what I have been reconsidering. As a document standard, CDA has had some success as a technology for mobile health in the U.S., under the auspices of Meaningful Use; however, a mobile health application built to natively use HL7 FHIR JSON and XML, even during the early adoption phase, would still be a solid target for reuse, and it is this capability for reuse that I find compelling, more than using HL7 FHIR to build new registries, repositories, or even greenfield projects. Simply put, if you build a health app using HL7 FHIR as an interface specification, you may still need to perform server-side adaptation, but you more than likely would have anyway. The benefit is that you are building against a stationary target, and that is invaluable.

Wednesday, April 02, 2014

HIMSS14 and the Culture of Interoperability



An interesting follow-up piece to HIMSS14 from Deloitte in the WSJ prompted a few thoughts on human and machine interoperability. The message:

Interoperability standards are often overemphasized in discussions of data sharing, and it is important to understand that standards for interoperability already exist and can be implemented. What is critical is shedding light on building interoperability into vendor design of medical products, versus just building standards. Enlisting provider buy-in is one way of supporting this goal.
 
We have the standards; now we need to apply them. For modern interoperability standards like HL7 CDA, for instance, the barrier to entry is incredibly low – you can wrap an existing PDF in an appropriate header, which is essentially boilerplate, and you have satisfied the lowest level of CDA compliance. What is important to understand here is that “clinical interoperability” does not have to imply “machine-readable” except in the broad sense of being syntactically capable of exchange between two systems or components. The lowest level of interoperability is exchange of human-readable content within a structure that can later be extended to support machine-readable coding.
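A hedged sketch of that lowest rung (the header is elided to a comment because that is where the boilerplate metadata lives; this is illustrative, not a conformant document):

  // wrap a base64-encoded PDF in a CDA nonXMLBody
  function wrapPdfInCda(pdfBase64) {
    return [
      '<ClinicalDocument xmlns="urn:hl7-org:v3">',
      '  <!-- header boilerplate: document id, patient, author, custodian, ... -->',
      '  <component>',
      '    <nonXMLBody>',
      '      <text mediaType="application/pdf" representation="B64">' + pdfBase64 + '</text>',
      '    </nonXMLBody>',
      '  </component>',
      '</ClinicalDocument>'
    ].join('\n');
  }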

From the HL7 Standards blog:

The primary characteristic of a CDA document is that it must be readable by humans. The CDA specification states that, “human readability guarantees that a receiver of a CDA document can algorithmically display the clinical content of the note on a standard Web browser.” This requirement means that a clinician or patient can take a CDA document and display it on any computer with a web browser without the need to load any additional application.

The real work in interoperability, as we know, is in rationalizing and aligning code-sets. That's a governance issue. Exposing human-readable content in a structured fashion is important, as described above. But is it possible to access a system's supported vocabulary and conformance profile using a standard Web browser? Maybe that would be useful as well. Incidentally, that's one of the ways FHIR goes beyond CDA. Clinical interoperability is about exchanging a specific type of information, for instance, exchanging clinical information about a patient that allows an exchange partner to leverage what we already know about them. One of the things we should be able to exchange is a conformance profile that defines how such an exchange can take place.

This is at the heart of an ongoing debate in Canada about the future of both CDA and SNOMED-CT. We have existing standards and terminology sets, so aren't these adequate to the task? What can we learn from this debate about what factors contribute to the success or failure of clinical interoperability projects? How can we reduce complexity, while increasing availability of information and metadata?

Thursday, March 27, 2014

Architecture of Participation in Healthcare

In general, I have a lot of positive things to say about HL7 FHIR, an emerging healthcare standard with deep roots in both Web-oriented Architecture (WOA) and the HL7 Reference Information Model (RIM). Along with a focus on REST, URLs, granularity and so forth, one idea that typifies WOA is a term coined by Tim O'Reilly back in 2004: the Architecture of Participation, which describes the participatory nature of the World Wide Web. The Web succeeded because it expanded participation in technology and information sharing far beyond the insular community of software developers; it worked because participation was opened to anyone. This is important because without participation, there can be no success.

You may ask, can you apply this principle of the Architecture of Participation to healthcare? Good question, and I think this is where the "a-ha" moment comes in, and why, when people start to think about HL7 FHIR, they kick themselves and say "well, it's about time." The fact is, the Architecture of Participation is built right into the RIM: on your first day on the job with HL7, someone hands you a primer and explains that it's very simple, the RIM describes Entities in Roles Participating in Acts. These are the RIM base classes, and it's right there in the centre: Participation. HL7 describes clinical workflow, and workflow is performance. It is Entities and it is Acts, and the associations between these are mediated by Roles.

And there in the middle is Architecture of Participation.

View Source. Blue Button. Ten years' worth of knowledge in the RIM. Which is why I am excited for the next ten years.

Saturday, March 22, 2014

Object-oriented JavaScript as a language for learning

I have been reading Nicholas Zakas' Principles of Object-oriented JavaScript, from No Starch Press. I have been using JavaScript for many years, and to this point, the books I have found most useful are the essential "rhino" guide and Doug Crockford's "Good Parts," both from O'Reilly, because they were the books that first turned me onto the awesome potential of JavaScript as a declarative language that also supports inheritance, and then steered me away from the dangers of some of the patterns and habits into which I had fallen.

Zakas writes well - this is not his first book - and I love the way this book is organized around well thought out descriptions of core ideas; for instance, what is an Own Property, and why would you need one, or how does the Prototype Chain actually work. This is a quick read that really provides you with everything you need before moving on to a framework-specific book on Backbone or Angular, whatever you need for your particular project. And really this is where the book shines: it digs down into some of the more esoteric features of the JavaScript language, but never stops answering the question, when is this going to be important when I am writing actual code? This is the book I would recommend, for instance, to any of my colleagues who are making the transition from server-side Java to client-side JavaScript. Quite simply, Zakas answers the questions you are going to ask, with well thought out answers.
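For anyone who wants the thirty-second version of those two ideas, a small sketch:

  function Patient(name) {
    this.name = name;                 // "name" is an own property of each instance
  }
  Patient.prototype.describe = function () {   // "describe" lives on the prototype
    return 'Patient: ' + this.name;
  };

  var p = new Patient('Jane');
  p.hasOwnProperty('name');       // true  - defined directly on the object
  p.hasOwnProperty('describe');   // false - found by walking the prototype chain
  p.describe();                   // "Patient: Jane" - resolved via Patient.prototype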


Beyond this, all the books I have seen from No Starch Press are beautiful to look at. My own background is in Literary Theory in addition to Information Technology, and I loved my graphic novel guides to Derrida, Foucault and Freud, so much so that I gave them all away to people who wanted to know more about what I was studying. Like those graphic novels, I imagine the books from No Starch Press are a delight to give away; they look great. In addition to the aforementioned Java programmer colleagues, I also think Zakas' book would be very apt for a 15 or 16 year old who is interested in programming, as would any number of No Starch's "manga guides".

In short: good code examples, short chapters, great in-depth discussion of core ideas with well thought out descriptions of the things that make object-oriented JavaScript idiosyncratic, and a book I find myself going back to and recommending to others, particularly people I know will be going on to use JavaScript with framework support like Angular.