Friday, June 05, 2015

My jsFiddle Bag of Tricks

I'm just going to say it: I love jsFiddle.net. Like other pastebins such as Gist or CodePen.io, jsFiddle lets you rapidly prototype HTML/JavaScript/CSS and test the result in real time. There is extensive library support, and you can do tricky things like embed your results as an iframe on a GitHub page, or use GitHub to back up a demo. But the basics of jsFiddle are that you can create a small-to-medium application, share it easily so others can fork it, and tweak it to your heart's content, with easy-to-use RESTful routing and version control. What's not to love?

To make the most of jsFiddle, however, I had to sort out a few things. Here are some of my tricks; please feel free to share your own.
  1. Use the Ionic bundle as an External Resource. You can select AngularJS 1.2 from Frameworks and Extensions, but if you add the following links as External Resources, you will get a bundle containing both AngularJS and the Ionic Framework, which lets you build clean mobile UIs.

    http://code.ionicframework.com/1.0.0-beta.6/js/ionic.bundle.js
    http://code.ionicframework.com/1.0.0-beta.6/css/ionic.css


    You may strip out the Ionic stuff before you release your work, but I have found it to be a great framework for rapid prototyping with a Mobile First mindset. Cards are great. Also, ngCordova is an extension for Ionic which allows you to embed your work into iOS or Android using PhoneGap.
    Ionic may not be perfect (it's still in beta), but it is very Mobile First, useful for rapid development, and has a lot of similarities with Bootstrap. Try it out.
  2. Embedded results pages are great, but they often fail to load until you change "https" to "http". The same goes for the External Resources. Get used to quietly deleting the "s" during demos and hoping that nobody notices.
    GitHub and jsFiddle can work very well together. I suspect you could get similar results with Gist or CodePen, but I personally prefer jsFiddle. Learn to use social repositories to create individual code samples and project pages.
  3. JSONP works, and this is very handy. I often set up JSONP files (JSON wrapped in a function call) on a GitHub Pages site, so that I can access them later as though they were an actual API. Rather than echoing the call from the jsFiddle itself, I use a small Angular service:

    rhizomeBase.factory('jsonpService', function ($http) {
        var svc = {}, jsonp_data;

        // The hosted JSONP files invoke a global function named "jsonp";
        // this global captures the payload when the response script runs.
        window.jsonp = function (data) {
            jsonp_data = data;
        };

        svc.getData = function (callback, url) {
            $http({
                method: 'JSONP',
                url: url,
                params: {
                    input: 'GM',        // query parameter left over from a demo
                    callback: 'jsonp'   // name of the global callback above
                }
            }).success(function () {
                callback(jsonp_data);
            }).error(function () {
                // Angular expects its own JSON_CALLBACK, so it reports an
                // error even though the global has already captured the data.
                callback(jsonp_data);
            });
        };

        return svc;
    });

    All this service does is set up an HTTP request as a promise, and then apply a callback whether the request succeeds or fails. It turns out that even when the request succeeds, the $http promise is rejected (Angular's own JSON_CALLBACK never fires), but all of this gets hidden inside the service. In practice, once you have an Angular application set up in jsFiddle, you can inject the service into your controller or directive and invoke it with a very direct call like this:
    jsonpService.getData(function(data) {
        alert(JSON.stringify(data));
    }, url);
    One of the things a pastebin allows you to do is prototype a client application without having to worry about building a server-side API first. Hosting a handful of sample messages on GitHub is a good way to accomplish this.
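
    For reference, a hosted JSONP file is nothing more than the JSON payload wrapped in a call to the global function the service defines; a sample message file (contents invented for illustration) might look like this:

    jsonp({
        "messages": [
            { "id": 1, "text": "Hello from a GH Page" },
            { "id": 2, "text": "Served as though from an API" }
        ]
    })
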
  4. Use appropriate standards. This is really the flip side of building a client-side application first. If you have access to a JSON-based standard like NIEM, or you can use a snippet gleaned from Schema.org, this will save you design time and keep you moving forward. In my case, the JSON flavour of HL7 FHIR, the component-based nature of Angular, and the ease of use and built-in UX touches of Ionic are a perfect storm for the kind of work I like to do, and jsFiddle is the glue that holds them together.
    If you can, learn and use appropriate standards.

Sunday, January 04, 2015

Using AngularJS with HL7 FHIR Questionnaire

This is a very simple example using Angular templating (no directives) with an HL7 FHIR Questionnaire resource, which I am using directly from one of the public FHIR servers (Grahame's, to be precise). In addition to the machine-readable content for question groups and answers, I am creating a rudimentary service to manage coded concepts (i.e. value sets), and I am displaying the human-readable portion of the resource (the HTML text div) as well. This is really just a beginning. One thing I would like to build into this demonstration is the ability to use embedded SVG for the human-readable portion, as it seems to me that existing PDF questionnaires could be converted to SVG and thus embedded. That makes for a large resource file, but it would be a great way to retain the look and feel of existing documents, once an easing library is used to zoom in on the appropriate area of the SVG. I'm calling this demonstration "QuestionCare" because I think it would also be useful to be able to use CarePlan within this context.

We begin with a root HTML that sets up my Single Page Application, and a reference to my Rhizome library, which contains a number of useful client-side services:
<body id="content" style="display: none;" ng-app="questioncare">
      <h3 ng-controller="ErrorController" ng-bind="errorText" ng-show="showError"></h3>
      <div id="request" ng-controller="RequestController">
        <button id="view-questionnaire" ng-click="viewQuestionnaire()">View Questionnaire</button>
      </div>
      <ng-include src="'res/templates/Questionnaire.html'"></ng-include>

This corresponds to a single line in the JavaScript initialization:
var questioncare = angular.module('questioncare', ['rhizome', 'ngSanitize']);
We'll see how the ngSanitize module is required in order to render the human-readable HTML div from the Questionnaire resource as HTML, instead of text, using ng-bind-html. In any case, we are setting up a controller to handle a request, and then including a template to handle the response. The rest, as we shall see, is handled through controller code and client-side services, but first let's take a quick look at the included template for Questionnaire:
<div id="questionnaireResponse" ng-controller="QuestionnaireController" ng-show="showQuestionnaire">
  <div ng-bind-html="humanReadable"></div>
  <hr/>
  <div ng-bind="questionnaire.name.text"></div>
  <div ng-bind="group.header"></div>
  <hr/>
  <ol id="questions">
    <li ng-repeat="question in group.question">
      <div ng-bind="question.text"></div>
      <ol id="options">
        <li ng-repeat="option in getOptions(question.options.reference)">
          [<span ng-bind="option.code"></span>]:
          <span ng-bind="option.display"></span>
        </li>
      </ol>
    </li>
  </ol>
</div>
That takes care of the HTML. The request controller invokes an adapter (this is running on IBM Worklight), and then sends the response to the questionnaire controller using $rootScope.$broadcast; but first it calls a client-side service which I have made responsible for managing codes - in this case, the value sets for the different options you can pick when you answer the questionnaire.
questioncare.controller('RequestController',
  function($scope, $http, $rootScope, errorService, codeService) {

    $scope.viewQuestionnaire = function() {

      var invocationData = {
        adapter: 'FHIR',
        procedure: 'getQuestionnaire',
        parameters: []
      };

      WL.Client.invokeProcedure(invocationData, {
        onSuccess: function(result) {
          if (200 == result.status) {
            var ir = result.invocationResult;
            if (true == ir.isSuccessful) {
              $scope.$apply(function() {
                var questRes = ir.content;
                codeService.loadCodedConcepts(questRes.contained);
                $rootScope.$broadcast('qr', questRes);
              });
            } else {
              errorService.worklightError('Bad Request');
            }
          } else {
            errorService.worklightError('Http Failure ' + result.status);
          }
        },
        onFailure: errorService.worklightError
      });
    };
});
The codeService itself is quite simple, although, since value sets could potentially come from a variety of places, this service could become a lot more complicated. In this case, I am just scraping contained value sets from the Questionnaire resource itself:
rhizome.factory('codeService', function($rootScope, errorService) {
  var codeService = {};
  codeService.codedConcept = {};

  codeService.loadCodedConcepts = function(contained) {
    for (var c in contained) {
      codeService.codedConcept[contained[c].id] = contained[c].define.concept;
    }
  };

  codeService.getCodedConcept = function(opt, remHash) {
    if (remHash) {
      // contained resource references arrive as "#id"
      opt = opt.substr(1);
    }
    return codeService.codedConcept[opt];
  };

  return codeService;
});
Angular services can be difficult to grasp at first, but they are one of the more important features of the framework, since they allow you to make your client-side code more portable and standardized; however, this particular service is little more than a stub at this point. It does deal with the hash sign that prefixes a contained value set id, which is useful. Once the coded concepts have been scraped out of the Questionnaire, the document is displayed using the included template and a response controller.
questioncare.controller( 'QuestionnaireController',
  function($scope, errorService, codeService) {
   
    $scope.showQuestionnaire = false;
   
    $scope.$on('qr', function (event, arg) {
      $scope.questionnaire = arg;
      $scope.group = $scope.questionnaire.group;
      $scope.humanReadable = $scope.questionnaire.text.div;
      $scope.showQuestionnaire = true;
    });
          
    $scope.getOptions = function(opt) {
      return codeService.getCodedConcept(opt, true);
    };

});
Again, there is nothing too complicated here. Notice how the humanReadable questionnaire text div gets bound into an element that allows HTML to be rendered. Also, a second function is used to fetch and display the options, because these need to be repeated, as you can see in Questionnaire.html. In addition, the entire questionnaire template is hidden until it is populated.
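
For context, the Questionnaire JSON that the template and services bind against is shaped roughly like this (a hand-written fragment for illustration, not pulled from the server):

{
  "resourceType": "Questionnaire",
  "text": { "div": "<div>...human-readable HTML...</div>" },
  "name": { "text": "Sample Questionnaire" },
  "contained": [{
    "resourceType": "ValueSet",
    "id": "yesno",
    "define": {
      "concept": [
        { "code": "Y", "display": "Yes" },
        { "code": "N", "display": "No" }
      ]
    }
  }],
  "group": {
    "header": "General questions",
    "question": [{
      "text": "Do you smoke?",
      "options": { "reference": "#yesno" }
    }]
  }
}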

Next steps here will be to work with nested questionnaires, where selected options will traverse through a hierarchy of question groups. At this point, it may be useful to use Angular custom directives, although I am also trying to be careful about anything that will be subject to change with Angular 2.0, such as controllers.

More and more as I work with Angular, Worklight and HL7 FHIR, it strikes me that what is important here is building a library of standard services and templates on the client side, and then simply binding into it. Once DSTU2 is complete, FHIR will become less of a moving target, but resources like Questionnaire, which has been the subject of several connectathons now, seem especially stable.

Tuesday, December 30, 2014

MatchstickTV and FirefoxOS

Earlier this year, I spent some time developing for FirefoxOS, a mobile OS based on a Linux kernel and the Mozilla browser. To be honest, I have a Revolution phone from Geeksphone which flashes between Android and FirefoxOS, and it has been Android since day one. It works well for what it needs to do as an Android device. However, the idea of an instant developer community for FirefoxOS is compelling - it just hasn't appeared yet, because the OS hasn't become a large enough target. And, like ChromeOS before it, it may never happen.

MatchstickTV is a recent Kickstarter project that takes FirefoxOS and uses it to run an HDMI dongle very similar to Chromecast, but based on open source apps, an open source development platform, and an open source OS. It's a little cheaper as well, but I suspect that is not an important selling feature. On the other hand... because there is nothing in particular to license here, this sort of thing could become a great conference giveaway over time, just like USB sticks used to be.

Honestly, I think a tablet using FirefoxOS with a bunch of onboard educational applications - similar to the original OLPC program, perhaps - would be a really good thing. There is a niche for browser-based mobile, and Mozilla is doing a lot of smart, good things to capture it.

Monday, December 29, 2014

Atomized Integration, IBM Worklight and AngularJS

Over the past year, I have worked fairly extensively with IBM Worklight, Big Blue's enterprise mobility package. In the coming year, I plan to find more things to do with Bluemix, IBM's cloud mashup line; for now, some thoughts.


In general, my guidance has been to use open source mobility frameworks - PhoneGap for cross-platform, Bootstrap for Responsive Web Design, Angular for templating, and some form of OAuth2 for security - at least until the vendor solutions from IBM, Oracle, et al. reach a higher level of maturity, since these open frameworks are stepping stones.

If you look at the latest Gartner quadrants for enterprise mobility and cloud for the previous year, you will see IBM maturing in the MADP space and Oracle maturing in the cloud space... but maturity in both areas is necessary for enterprise mobility to fire on all cylinders.

Worklight does three things really well:
  1. Simple adaptation on the server-side, using Rhino-based JavaScript adapters.
  2. Integrating with existing Websphere and SAM infrastructure.
  3. Increasing productivity through modularization and emulation.
Probably the biggest win here is 3. I started out 2014 working with Firefox OS, Saxon-CE and AngularJS, so I was already committed to using JavaScript and XSL as much as possible, and Worklight adapters played into this approach nicely; however, after *hating* the slowness of native Android development using the Android toolkit, what I appreciated most about Worklight was being able to use an emulator that ran as smoothly as the Firefox OS simulator (which is really just a browser plugin). On the server side, we are becoming accustomed to devOps tools like JRebel; on the client side, we should have similar expectations - that is, don't use the Android emulator if you can avoid it. It sucks.

I have mentioned previously how much I like Worklight's lightweight Rhino-based adapters. They are intentionally lightweight, eschewing any sort of SOA reusability. A Worklight adapter does one thing, and it does it well. This can be initially quite pleasant, then very frustrating, and then liberating, as you sort out how much integration you need to do in your client applications. My experience has been that a well designed piece of XSL can convert an XML data source into some standards-compliant JSON, and then a client-side library service can take it from there.

For instance, consider that I have an XML data source containing a number of patient records. Let's say it is NIEM compliant XML. I could build a client application that can consume NIEM compliant JSON, and then all I would need to do in Worklight is create a very simple boilerplate adapter that transforms the XML into JSON. This is assuming that my server-side data source doesn't already support JSON-flavoured NIEM, which would be even simpler. In other words, if my intent is to take a NIEM compliant data source and build a NIEM compliant mobile application, this is quite straightforward. Server-side Worklight adaptation transforms XML into JSON; client-side Angular data-binding injects JSON into the HTML-based presentation layer, and presto, you have an application.
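
As a sketch of what that boilerplate can look like, a Worklight HTTP adapter procedure is just a Rhino JavaScript function; the procedure name, back-end path, and XSL file here are invented for illustration:

    // Runs server-side in a Worklight HTTP adapter; the procedure is
    // declared in the adapter's XML descriptor.
    function getPatients() {
        return WL.Server.invokeHttp({
            method: 'get',
            returnedContentType: 'xml',
            path: '/records/patients',        // hypothetical XML data source
            transformation: {
                type: 'xslFile',
                xslFile: 'niem-to-json.xsl'   // XSL that emits NIEM-compliant JSON
            }
        });
    }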

Granted, the development process is not that easy, and let's consider now that we have a number of data sources, some of which are NIEM compliant, some of which are HL7 compliant, some of which are based on direct SQL access, and some of which are ad hoc.

When you look at the various Worklight adaptation examples, you might get the idea that RSS is treated preferentially, which is untrue; however, thinking of these adapters as syndication is still a useful approach.

Throughout the past year, I have been working with HL7 FHIR, a draft standard from HL7 that, among other things, introduces a JSON-based pattern for aggregation and composition - essentially Atom syndication in JSON instead of XML. It turns out that if all of my Worklight adapters create Atom-compliant JSON on the server side, then I can use a JavaScript Atom library in the client, and it really doesn't matter what format my data sources are using. By the time they reach my client application, they are all Atom-based.

The client-side service that I have written - using Angular for modularization - is responsible for merging multiple Atom streams. Once I have a single Atom stream, data-binding takes place, so that information can be presented. In practice, this can be frustrating because Atom is intended for serialization of information, but an Atom bundle can also contain relative links between entries. This is fundamental to the way HL7 FHIR works, but not NIEM, so I have ended up creating synthetic and essentially schemaless resources as necessary. Ideally, all information could be mapped into Atom-syndicated FHIR resources. Maybe that's a good project for this year.
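
A minimal sketch of that merge - assuming each adapter returns a bundle carrying an Atom-style entry array, with names of my own invention - might look like this:

    rhizome.factory('atomMergeService', function () {
        var svc = {};

        // Merge several Atom-style JSON bundles into a single entry list,
        // sorted newest-first on the Atom 'updated' timestamp.
        svc.merge = function (bundles) {
            var entries = [];
            for (var i = 0; i < bundles.length; i++) {
                entries = entries.concat(bundles[i].entry || []);
            }
            entries.sort(function (a, b) {
                return new Date(b.updated) - new Date(a.updated);
            });
            return entries;
        };

        return svc;
    });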

Adaptation frameworks always run into a problem based around the decision to go lightweight or go modular. I like that Worklight has gone lightweight, but I am frustrated that I can't reuse just a little bit more code between adapters. In particular, I would really like to use a single set of XSL transforms to support multiple adapters. Perhaps there is a way to do this, but for now, I am still forcing myself to prune my adaptation code as much as possible to keep it easy to maintain. If I find myself using the full set of DocBook or DITA transforms in an adapter, it's probably time to rethink my approach.

On the whole, I have enjoyed working with Worklight adapters immensely; I don't think this would be the case if I were not also using Angular or some other JavaScript framework to support development of client-side services. I haven't used the built-in Worklight support for Dojo or jQuery much, but I'd go so far as to say that without some sort of hybrid framework support, you will lose much of the productivity that Worklight gives you. After a year, I have reached an understanding that I would not enjoy using a framework like Angular without a platform like Worklight, and I would not enjoy using a platform like Worklight without a framework like Angular.

Unless, of course, the platform was also a framework, which is what approaches like Meteor promise. 

  

Saturday, December 27, 2014

Some Canadian Context for HL7 FHIR


I work in healthcare messaging in Canada; specifically, in British Columbia, where we deal primarily with Point of Service applications and Clinical Information Systems that generate and consume messages in HL7 v2 pipe-and-caret notation, over a foundation of registries and repositories that use a version of HL7 v3 Messaging XML. More or less, this follows Canada Health Infoway's iEHR blueprint; under Infoway's original blueprint, however, we would have HL7 v3 at the Point of Service as well as in the foundation.

HL7 v2 is still used extensively in other Canadian jurisdictions; some use v2 almost exclusively. In Canada, we have a mix of v2 and v3, with some CDA. The United States, on the other hand, never embraced v3, creating a desperate need for a better messaging layer. FHIR will accomplish things in the U.S. that v3 could not, and that leaves Canada in a challenging position: continuing with further investment in HL7 v3 makes little sense. Like CDA before it, FHIR can be used to augment these projects; there are enough similarities between FHIR XML and v3 Messaging to make this plausible.

Ongoing CDA projects in Canada are bound to continue as such, and they will be worth paying attention to as CDA projects in the States start shifting to HL7 FHIR as an implementation standard. The recent message from Infoway is to use the appropriate standard for the work at hand, and I expect this message to percolate on both sides of the border; but what does this really imply? How do you decide? For new business cases which would previously have required a document standard like CDA, HL7 FHIR is going to be compelling, as it will be for low-risk, local, and greenfield projects.

Worth noting are the four ways that FHIR can be used. As previously discussed, FHIR supports both Messaging and Document use cases; but, perhaps more importantly, it also supports both REST and Service use cases. In addition, FHIR is in many ways custom-built for the security and transport requirements of mobile use cases, and contains resource definitions that will enable social use cases like circle of care and information provenance. For existing health information systems and applications, as well as new ones, FHIR creates new ways to expose, access, and share information - providing not only tools, but also challenges.

Tuesday, December 23, 2014

Yosemite Project and other Chimera

In Greek mythology, the chimera was a monstrous fire-breathing hybrid nightmare composed of the parts of more than one animal: a lion with the head of a goat rising out of its back, and a tail ending in a snake's head. A nasty piece of business, eventually dispatched by Bellerophon with some assistance from Pegasus.


Chimera was also the subject of a presentation by Jeni Tennison, OBE, of the Open Data Institute and the W3C TAG, at XMLPrague 2012, entitled "Collisions, Chimera and Consonance in Web Content." In it, she makes a compelling argument that on the web today we are dealing with four different formats: HTML, XML, JSON, and RDF.

In many ways, these formats complement one another. Sometimes, they clash, creating impedance and dissonance, and sometimes they merge, forming weird and wonderful hybrids. Tennison's presentation is really quite remarkable, and well worth watching as each of these formats evolves.

As I have previously mentioned, another set of presentations, from DataVersity and SemanticWeb.com, is also worth watching and paying attention to. These deal with the Yosemite Project, ongoing work which intends to position RDF as a Universal Healthcare Exchange Language. This work is important in part because it directly addresses how to go about migrating and transforming between formats, once a common representation is established using RDF. In many ways, this is a mythical undertaking, but also a very promising one.

For instance, with the work underway with Project Argonaut and HL7 FHIR, you are looking at a standard for healthcare that comes in two flavours, XML and JSON; however, like its predecessor HL7 CDA, FHIR relies on a human-readable portion, which in this case means HTML5. Add to that the work underway with Yosemite - go watch the presentations! - and you have an ecosystem that supports appropriate use of HTML, XML, JSON, and RDF - the subject of Dr. Tennison's XMLPrague presentation - now in the context of healthcare. This is really what John Halamka has referred to as the "HTTP and HTML for healthcare".

If you broaden your horizons just a little, you will see some of the work also being carried out by Health & Human Services and the NIEM Health Domain, as a counterpart to the work of HL7 International. NIEM is primarily an XML-based standard, but in the last couple of years its underlying tooling has been expanding into UML-, JSON-, and HTML-based representations. With the support of some underlying ontology work, perhaps in concert with Yosemite, NIEM too could be used to create linked health data. These are all very exciting, very important things happening very quickly, and it is a great time to get involved with some of these projects and initiatives.

Monday, December 15, 2014

Project Yosemite, SMART on FHIR, and the Argonauts

The Argonaut Project is a collaboration between Health industry vendors like McKesson, Epic, Meditech and so forth, along with the Mayo Clinic and Beth Israel Deaconess Medical Center, to provide the necessary resources to complete the work of the upcoming HL7 FHIR DSTU (Draft Standard for Trial Use). As Grahame Grieve elaborates on OpenHealthNews, Argonaut is aimed at three particular pieces of work:
  1. Security
  2. CCDA to FHIR Mapping
  3. FHIR Implementation Testing
This work is intended for completion by May 2015. As described, the Security piece initially involves SMART on FHIR®, a platform developed by Harvard Medical School and Boston Children's Hospital, implementing open standards for healthcare data, authorization, and UI integration. For authorization, SMART uses OAuth2, a profile for which will most likely become built into the FHIR standard.

Josh Mandel, the lead architect behind SMART on FHIR®, also spoke recently as part of a series of five presentations on Project Yosemite, held by SemanticWeb.com and DataVersity. Project Yosemite began a year or so ago with the Yosemite Manifesto, which establishes RDF (the Resource Description Framework that underlies the Semantic Web and Linked Data) as the best candidate for a universal healthcare exchange language. Project Yosemite follows two paths, "Standards" and "Translation", based on the premise that standards adoption is of primary importance, but that there will always be a need to translate between standards, and even between versions of the same standard.

The idea here is that once you build ontological mappings of various healthcare standards into RDF representations, then Semantic mapping tools like SPINMap and TopQuadrant's TopBraid can be used to construct robust migration/translation layers. This is the first step in producing a distributed network of Linked Health providers, similar to the work currently taking place with Linked Data. At this point, the presentation recordings from DataVersity are not yet all available, but they are definitely worth watching.

HL7 FHIR provides a potential successor to several HL7 standards currently in use internationally. Migration is a critical success factor here, and Project Yosemite presents a different way to approach migration. Perhaps coincidentally, RDF and FHIR are both resource-based approaches; RSS is a syndication format that emerged from work with RDF, and FHIR uses a similar syndication format, Atom, to aggregate and compose health resources, like Patient and Observation.

Project Yosemite benefits FHIR and Project Argonaut; Argonaut accelerates the first phase of the ONC Data Access Framework (DAF) project; and Project Yosemite is involved with ICD-11. That seems like a lot of convergence, and the next six months will really show how much. It's a great time to get involved.

Wednesday, December 10, 2014

HL7 FHIR and Argonaut in Canada

I am Canadian, so for me, Argonauts play football, and by football, I don't mean soccer. The Argonaut Project is also the subject of a recent announcement at last week's HL7 Policy Conference in Washington, in response to the latest JASON Report. There appears to be a mythological theme emerging in Health IT, and I'm looking forward to an opportunity at some point to scream "release the KRAKEN!!!" or something similar. But not yet.

The Argonaut Project has the backing of a number of American EHR vendors, including Epic, Cerner, Meditech, McKesson, and athenahealth, with additional support from Partners HealthCare, Intermountain Healthcare, Beth Israel Deaconess, and the Mayo Clinic. The project extends the involvement these organizations already have with HL7 International, and promises to deliver implementation guides related to an emerging HL7 standard, HL7 FHIR, by the May 2015 timeframe.



This is a diverse group of collaborators and an aggressive timeline, but what does this mean for Health IT projects here in Canada?

Migration and Transformation

Whereas HL7 v2 uses "pipe and caret" notation, and HL7 v3 supports any wire format as long as it is XML, HL7 FHIR comes in two flavours, XML and JSON (which makes it particularly useful for mobile use cases). By design, FHIR is intended to provide a migration path for v2, v3, and CDA. This really reminds me of the intentions behind the development of XML in particular, as a sort of lingua franca for the web, and in that sense, XML has been very successful. As mentioned, for mobile and social use cases, a JSON-based standard for health information will be hugely beneficial as well.
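
To give a sense of the JSON flavour, a FHIR Patient resource looks something like this (a hand-rolled, heavily abbreviated fragment, not copied from the draft spec):

    {
        "resourceType": "Patient",
        "name": [{ "family": ["Smith"], "given": ["Jane"] }],
        "birthDate": "1970-01-01"
    }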

In Canada, we have built a foundation of healthcare registries and repositories based on HL7 v3 Messaging, although the applications that are in place in Hospitals and other Health Information sources typically come from U.S. vendors including many of those mentioned above, which requires a transformation layer from v2 to v3 and back again. I'd like to imagine a world where both the foundation and the Hospital information systems can communicate using the same standard, or through an integration layer that uses a common standard. Argonaut is at the very least a step in that direction.

Documents and Messages

Here in Canada, we have built our information access layer for health around Messaging; in the U.S., Document-centric health prevails. Canadian projects may involve the HL7 Clinical Document Architecture (CDA), but these are more limited in scope than the foundational work which has been carried out involving HL7 v3 Messaging. Recent guidance from Canada Health Infoway is to use the most appropriate standard for the job at hand. In many cases, that will be v3 Messaging, simply because the work is already underway.

FHIR is quite clever in that it is based around Healthcare resources (Patients, Providers, Observations and so forth), a more granular approach than either CDA or v3 Messaging, and this is how FHIR supports both Message- and Document-based flow of information. This is crucial if your requirements are a hybrid, or if you are currently supporting one approach, but are aware that you will need to support the other. Simply put, FHIR dispels the holy war between Health Messaging and Health Documents. ("Unleash the KRAKEN!!!")

Example: Questionnaires


It goes something like this: you are tasked with creating a set of health questionnaires for a Canadian healthcare organization. Most likely, you will create PDF documents, but you might consider using CDA for a moment, because CDA provides an architecture for Clinical Documents. But that moment would pass. Now, consider this: the FHIR community has already held several connectathons involving questionnaires, and one of its members, David Hay, has already written a series of articles about extending the Questionnaire resource based on his experience.

So that's useful.

In particular, IHE (Integrating the Healthcare Enterprise) is currently developing multiple profiles using FHIR as a basis for mobile access (MHD, PDQm, RESTful PIX). With Canada Health Infoway as the home of IHE in Canada, I am hoping that we can find uses for these profiles here as well. The profiles are still under development, but if the consortium behind the Argonaut Project really wants to make a difference, they can throw their support behind IHE as well.

References

HL7 International Press Release
HealthLeaders Media - Argonaut Project is a Sprint toward EHR Interoperability
OnHealthCareTechnology - JASON: The Great American Experiment
HealthcareITNews - Epic, Cerner, others join HL7 project
John Halamka - Life as a Healthcare CIO - Kindling FHIR

Thursday, August 14, 2014

Hard not to agree with this observation by Alex Howard about the newly branded U.S. Digital Service.

Given the anger, doubt and frustration prevalent in the public discourse around government IT, the only way public trust in the federal government's ability to use technology well for something other than surveillance and warfare will be restored is through the deployment of beautiful, modern Web services that work. Jen Pahlka has explicitly connected government's technical competency to trust in this young century.
"If government is to regain the trust and faith of the public, we have to make services that work for users the norm, not the exception," she told Government Technology after leaving the White House. Mayors, governors and presidents are experiencing the truth of her statement around the country, from small towns to 1600 Pennsylvania Avenue.
The challenge here is to move beyond secure, mission-critical systems that work in insulated environments - but fail to provide high value - and to focus on measurable outcomes, quick(er) wins, and higher-value services for citizens. This is the holy grail of digitization.

AngularJS and Durandal

When I read on the Angular blog that Rob Eisenberg is working on Angular in addition to continuing revisions on the Durandal templating library, I was understandably excited. I have really enjoyed working with Angular, not just as an SPA framework, but as a prescriptive, modular, and mature JavaScript framework; like many people, though, I have found custom directives frustrating - jsFiddle and similar tools provide a good way to develop and test a new directive in isolation, but still. I've experienced this on other projects, using Adobe Flex, for instance: on a project team, you have a number of developers, one of whom supports custom web components, and that works okay, but the other developers don't really understand how the components work under the hood.

I am hoping that the next evolution of AngularJS (2.0) will align much more closely with Durandal, the Web Components API, and Polymer. There is really no reason why custom Web Components cannot become standard practice for the web. And that's nothing like Angular custom directives, which are confusing, I think, because whereas in most cases Angular balances flexibility and prescription nicely, custom directives are incredibly flexible: transclude or not? allow a directive through attribute or class, or just elements? and so forth. Custom directives are just way too flexible; they need to be a simple API for doing one thing well, not a combinatorics problem.

On the other hand, what Angular provides that Durandal does not is exactly that balance of prescription and flexibility. Angular tells you how to do things like module structure and model-view-star, and as a development project lead, I appreciate that, because this makes establishing best practices and code reviews manageable. That is why I am expecting great things from Angular, especially if the next version also results in an update to the angular-ui.bootstrap project, providing a ready to use library of web components.

Tuesday, July 22, 2014

Single Page Applications and AngularJS


For many years, the phrase Single Page Application (SPA) was synonymous with TiddlyWiki, a JavaScript-based wiki that was most useful for running independently without an application server and for some very well written code. Aside from TiddlyWiki, SPA was an approach, not a thing.

Mature JavaScript frameworks like Backbone, Angular and Ember have changed this, embodying the notion that you don't find a sweet spot between pure server push and pure client: either you load an application page by page, or you load a single page and construct the application from client-side templates, routing and model-binding. jQuery can support an SPA approach but doesn't enforce it, and Adobe Flex enforces an SPA approach but requires Flash to do so.

Of course, Angular is more than just an SPA framework. Amongst the features Angular provides:

Dependency Injection - a core value of the Angular framework, DI clearly defines what a class consumes through its constructor, rather than hiding requirements within the class. This makes Angular JavaScript more readable, easier to maintain, and easier to test, since it is clear how the internal connections between services are made within your application code base - fewer lines of more maintainable code, and ease of testing.
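
As a minimal illustration (the module, service, and controller names are invented for the example):

    // Angular's injector supplies 'greetingService' to the controller by name.
    var app = angular.module('diDemo', []);

    app.factory('greetingService', function () {
        return {
            greet: function (name) { return 'Hello, ' + name; }
        };
    });

    app.controller('GreetingController', function ($scope, greetingService) {
        // The dependency is declared, not constructed here, so a unit test
        // can swap in a mock greetingService.
        $scope.message = greetingService.greet('world');
    });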
 
Templating - Angular templates are partial HTML views that contain Angular-specific elements and attributes (known as directives). Angular combines the template with information from the model and controller to create the dynamic view that a user sees in the browser. The result is more natural templates, based on attribute-annotated, well-formed HTML.

Two-way data-binding - allows you to work with JSON easily, particularly when that JSON is generated from standardized schemas. An example would be an application that receives a JSON payload constrained by an XML Schema in the server-side API (the API supports both XML and JSON, and the XML complies with an industry standard). In this case, the Angular view could also be generated from the underlying XML Schema.
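
The canonical illustration is a form field bound straight into scope; patient.name here simply stands in for a field from such a payload:

    <div ng-app>
        <!-- ng-model binds the input into scope; the expression re-renders as you type -->
        <input type="text" ng-model="patient.name">
        <p>Hello, {{patient.name}}</p>
    </div>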

Modular JavaScript - nothing special here: Angular allows you to separate the concerns of controllers, services, filters, and directives into separate modules. Encapsulation makes these components easier to maintain and easier to test, especially for a team with multiple members.

Controllers and URL Routing - aside from Dependency Injection, Angular's MVVM pattern is the big win here, and routing is just something you need to get used to. JavaScript was originally the glue code of the web, and once your application is sufficiently modularized, you will find that your Angular controllers retain this stickiness; as you build reusable services, your controllers remain lightweight. If you have any business or maintenance logic in your controllers, it is time to refactor and create services. Controllers and routing may not be reusable; services and views will be.
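
For reference, a minimal ngRoute configuration is only a few lines (the template path and controller name are hypothetical):

    var app = angular.module('spaDemo', ['ngRoute']);

    app.config(function ($routeProvider) {
        $routeProvider
            .when('/patients', {
                templateUrl: 'partials/patients.html', // hypothetical partial view
                controller: 'PatientListController'
            })
            .otherwise({ redirectTo: '/patients' });
    });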
 

Multi-level data scoping - scope is confusing in JavaScript because of the way global scope and declaration hoisting work. Angular simplifies passing scope into a controller or service, and offers a $rootScope object that replaces the global namespace. Further, events can be associated with scope at various levels. Data binding, the event model, and service invocation all use the same scope mechanism.

Responsive Design - Bootstrap is a Responsive Web Design library built in CSS and JavaScript. The JavaScript portion of Bootstrap has been ported to Angular directives as part of the AngularUI extension, which fits nicely within the Angular directive paradigm. Directives are fundamental to how Angular works. Using the Bootstrap directives removes some of the need to develop custom directives (custom behaviors and HTML tags). [http://angular-ui.github.io/bootstrap/]

Web Components - with the upcoming partial merging of the Angular framework with the Durandal presentation framework, Angular should move one step closer to supporting the Web Component API, which aligns with the intent behind Angular custom directives, and will bring these more in line with projects like Polymer. By using a common API, these UI libraries become more transportable. 

Monday, July 21, 2014

Back to Basics: Rhizome

I started the Rhizome reference implementation a year ago as a way of demonstrating how a combination of client-side services, constructed using Angular and Cordova, and server-side adaptation and integration, constructed using Worklight, could be used to build a mobile health app for the enterprise. The pieces are there, and I have come to the conclusion that the server-side integration, while important, should really just be built into the application server, which hosts the server-side API. If the server-side API is built to an industry standard like NIEM or HL7, then the burden of integration is lightened, and maybe it can take place within a resource-based suite of client-side services.

The greatest illumination for me came when I stopped trying to build the server back end first, with a client app extending it, and instead focused on a client app with an HL7 FHIR standardized interface. Do I have to do a lot of adaptation on the server? That depends on the data source, but in an ideal world the data source has low impedance, and it is already FHIR JSON. In that case, an Angular app built around the core FHIR resources just works.

So I'm taking my reference implementation in a slightly different direction: less coupled to an enterprise mobility platform, more reliant on a strong client-side architecture which is resource-based and standardized for the health industry, leveraging profiles from organizations like IHE and HL7 where possible, and probably with a more specific focus on care plans and questionnaires, without losing sight of prescription medications.

I'm also going to try posting more frequently, for a variety of reasons, so please feel free to comment. I have really enjoyed working with AngularJS over the last year, and I know I'm not alone in this.

Saturday, July 19, 2014

Tracking the convergence of NIEM and HL7

Every few years, someone asks "could you implement HL7 with NIEM?" or vice versa. Well, with enough resources you can accomplish anything, but what I want to do here is consider how the two standards are converging, how they diverge, and why. NIEM has had a Health Domain for several years, evolving under the auspices of HHS - you wouldn't know it from the LinkedIn group.

The two communities could really benefit from a shared understanding: to save money on implementation and stakeholder engagement, they need tools which make it possible to easily and visually review and alter exchange packages (IEPDs, FHIR conformance profiles) in order to reach absolute consensus, and then to generate terse and completely accurate validation packages and conformance suites, so as to increase ongoing information safety. We need to be able to put all of the important details on one page.


NIEM and HL7 are both messaging models based on an underlying information model, and whereas HL7 is moving away from design by constraint towards design by extension, NIEM has always relied upon an extension mechanism. The difference here comes down to the size of the NIEM problem space ("everything"), as opposed to HL7 ("healthcare"), for which you might be able to imagine a totalizing framework that encompasses all workflow in all contexts; however, for HL7 as well, a workable extension mechanism is proving to be essential to success, and this is a change from the paradigm established with HL7v3.

NIEM and HL7 are both moving towards support for multiple wire formats. In domestic U.S. markets, HL7 means either "pipe and caret" v2 or "quasi-XML-HTML" hybrid CDA, but internationally, HL7 is an XML standard which is outgrowing the business cases for XML, much like NIEM. For both of these standards to grow and implement future business cases, they will need to also embrace and support JSON, HTML, and RDF, and given time they will.

HL7 is moving away from a proprietary tooling set towards tooling which is readily accessible, like Excel, Java, and XML editors. NIEM already uses a similar toolset, and has several initiatives in play to support open tooling like the CAM Editor and UML tooling. One of the difficulties we have run into with HL7 v3 is sharing visual models, since these are captured in proprietary tooling, and it is here that the NIEM and HL7 communities would both benefit from demanding better tooling. Put simply, shouldn't these two standards support and be supported by a common toolset which extends beyond XMLSpy or Oxygen? And, given time, I'm sure they will.

This is something I feel strongly about. At their core, NIEM and the HL7 RIM rely on XML Schemas, and yet XML Schemas are not sufficient to the task. In the HL7 world, as far as v3 Messaging and CDA are concerned, ISO Schematron fills this gap. For NIEM, OASIS CAM performs a similar task; but it is a disservice to both CAM and Schematron to treat them only as validation tools, when in fact they capture key pieces of the business. The same is true of UML - these should be the tools we use to visually communicate the business to the business.

Some of the tools will be open source, some of them will come from the product world. If the NIEM and HL7 communities articulate their needs, the tool vendors will follow. In short, HL7 and NIEM are both going to need to converge on a set of XML-based tooling that goes beyond XML Schemas and Visio diagrams. The CAM tooling provides some of this. The Excel-based Resource Profiling in FHIR provides some of this. UML tooling provides some of this.

To reduce the burden of approval for stakeholders, both messaging standards need to allow modelers, implementers, and business stakeholders to meet in a room and review the details of a proposed information exchange on a single page; this is where the high value lies. When that happens, information safety increases, because the XML Schemas and documentation produced after such a meeting will be simpler, more accurate representations of the business.
  

Thursday, July 10, 2014

Converting NIEM XML to HTML5

Currently, with Open-XDX, you can persist and retrieve XML information based on a NIEM IEPD. You can also expose information as NIEM JSON through a transformation library, which is useful for building Web and Mobile Web applications using 4th-generation client-side frameworks like Angular, Ember and Polymer, as well as older frameworks like jQuery and Dojo.

There are 4 main information formats used in the Worldwide Web:
  1. HTML is ideal for documentation, tables, and open data, because it is easy to publish and forgiving. HTML is fundamental to REST as a way of exposing endpoint documentation.
  2. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.
  3. The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web.
  4. Extensible Markup Language (XML) is a simple, flexible text format derived from SGML (ISO 8879). It was designed to meet the challenges of large-scale electronic publishing, and plays an increasingly important role in the exchange of a wide variety of data throughout the Web.
In addition, several other XML formats are commonly used for document collation and syndication:
  1. DITA (Darwin Information Typing Architecture) and DocBook are used to assemble documentation out of markup. These will probably both be eventually supplanted by HTML5.
  2. ATOM and RSS are XML-based syndication formats. JSON-based syndication formats have also been described, although these are less mature.
(This discussion sort of refers back to Jeni Tennison's XML Prague keynote on "chimera", in which she discusses the different formats, and the way they, for instance, handle links and URIs differently.)

NIEM currently supports XML-based and JSON-based business cases as a way of quickly and rigorously exposing data for exchange and migration. In addition, the NIEM JSON flavour also supports web and mobile web applications, using the aforementioned 4GL frameworks and their like. The quickest way to expose NIEM information, however, is the HTML information format (most likely HTML5, which is more semantically rich than previous versions).

Basic rules for converting NIEM XML into NIEM HTML:
  1. Create one HTML element per XML element, with the exception of lists.
  2. For node elements, use div.
  3. For leaf elements, use span.
  4. Where makeRepeatable applies, use ol and li, containing either div or span elements as above.
  5. For any element, the class attribute represents the datatype (like "string" or "date").
  6. For any element, the id attribute represents the XML element name, including the namespace prefix (like "ncPersonName").
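
Applied to a hypothetical, repeatable nc:PersonName element containing two string leaves, these rules yield markup like this:

    <!-- <nc:PersonName> is a repeatable node; its children are string leaves -->
    <ol>
        <li>
            <div id="ncPersonName">
                <span id="ncPersonGivenName" class="string">Jane</span>
                <span id="ncPersonSurName" class="string">Smith</span>
            </div>
        </li>
    </ol>
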
Based on these rules, an XSLT transform can be generated from the OASIS CAM schema representation in the NIEM IEPD, which in turn could be generated directly from the CAM tooling. This transform can then be applied to the XML exposed by Open-XDX, allowing the information to be quickly exposed as HTML for read-only use, and for add/update using forms (XForms? HTML5 forms? A hybrid using JavaScript?).

In the same way that a full HTML page can be created from NIEM information, it should also be possible to generate partial or natural templates - in essence, just fragments of HTML. This may be required to support platforms like Java-Spring-Thymeleaf, Oracle ADF, or Meteor, which all rely on some sort of direction through attribution. The simplest way to expose information is still to create the entire HTML page instead of a partial. I note this here because wherever NIEM JSON is used, there will likely also be a requirement to generate a template from the NIEM CAM.

Note that NIEM is not currently resource-based; there is no inbuilt facility to support REST by exposing resource identifiers. However, one of the requirements for REST is to expose documentation at the endpoint, and it should be possible to generate this documentation directly from the IEPD (I think Datypic generates something like this for NIEM Core). In this case, the IEPD may be sufficient.