Vendor qualification for Danish healthcare’s mega IT investment is based on Software Quality Assessment


When the Capital Region of Denmark and the Region of Zealand (RH/RS) make the final decision on which suppliers to invite to tender, software quality will be a critical parameter.

The Capital Region of Denmark and The Region of Zealand (RH/RS) are in the process of selecting a vendor for their close to DKK 1 billion (€135M) investment in a new IT healthcare platform.

The purpose of the tender is to ensure that the chosen IT-Healthcare Platform will meet the defined requirements for IT support of clinical and administrative work within the healthcare organization of Zealand, supporting close to 40,000 named and 12,000 concurrent clinical and administrative users at 17 hospitals and 54 other healthcare institutions.

The chosen IT-Healthcare Platform is expected to enter pilot operation in the Capital Region in 2014, to be commissioned throughout the Region of Zealand in early 2015, and then to be rolled out to the rest of the Capital Region towards the end of 2016.

Five suppliers long-listed

Five suppliers are long-listed for the qualification process:

  • American IBM with Danish Systematic as subcontractor
  • American EPIC with Danish NNIT as subcontractor
  • Swedish Cambio with Danish Netcompany as subcontractor
  • German Siemens with Danish KMD and French Atos as subcontractors
  • American Cerner with British Logica (now part of Canadian CGI) as subcontractor

Only three suppliers will receive an invitation to the final qualification process, where they are to demonstrate their systems and make their software available for simulation tests.

One of the deciding parameters is an assessment of the quality of the software. Two areas in particular have the attention of the Zealand healthcare authorities:

  • Scalability
  • Maintainability

The reasoning behind this is to remove some of the uncertainty that accompanies such large-scale projects. It is a proactive effort that moves forward quality assurance tasks that have previously appeared only as acceptance criteria late in the process, thus mitigating some of the risks inherent in deploying and maintaining enterprise-wide systems.

Scalability

12,000 healthcare workers (doctors, nurses, secretaries, laboratory staff, administration, etc.) will use the system concurrently when fully implemented. Some of the suppliers do not have installations of this size today, so the Zealand healthcare authorities want to make sure that the software will scale and that the cost of scaling is acceptable.

Scalability is a well-recognized challenge in all large software systems.

“Software performance testing involves testing not only the software application but also the underlying hardware, its configuration and usage. A failure to reach the specified performance criteria can come from not only the software application itself, but also from the server platform, and/or from the network and system configurations on all systems involved.”

Source: Delivering Quality in Software Performance and Scalability Testing by Khun Ban, Robert Scott, Kingsum Chow, and Huijun Yan. Software and Services Group, Intel Corporation.

Most enterprise software systems will scale well up to around 1,000 concurrent users. Beyond 1,000 concurrent users, most systems start to show signs of saturation unless they have been specifically designed for large-scale operation. The traditional solution of adding more hardware will not have any impact on software with intrinsic scalability issues.
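To make the saturation point concrete: a simple way to probe it is to ramp up the number of simulated concurrent users and watch where response times stop being flat. The following is a minimal sketch in Python; the endpoint URL and ramp levels are illustrative assumptions, not part of the RH/RS tender.

```python
import concurrent.futures
import statistics
import time
import urllib.request

BASE_URL = "http://localhost:8080/patient-record"  # hypothetical test endpoint

def one_request() -> float:
    """Issue a single request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def measure(concurrency: int, requests_per_user: int = 10) -> float:
    """Run `concurrency` simulated users in parallel; return median latency."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(one_request)
                   for _ in range(concurrency * requests_per_user)]
        latencies = [f.result() for f in futures]
    return statistics.median(latencies)

if __name__ == "__main__":
    # Ramp up and watch where median latency starts to climb: that knee
    # is the saturation point that more hardware alone may not move.
    for users in (100, 250, 500, 1000, 2000):
        print(f"{users:>5} concurrent users: median latency {measure(users):.3f}s")
```

The knee in that curve, the load level where the median latency begins to climb steeply, is the point the Zealand authorities will want to know lies comfortably beyond 12,000 concurrent users.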

Maintainability

A software maintainability assessment basically defines how easy it is to maintain the system. This means how easy it is to analyze, change and test the application or product.

The maintainability test shall be specified in terms of the effort required to execute a change under each of the following four categories:

Corrective maintenance

The maintainability of a system can be measured in terms of the time taken to diagnose and fix problems identified within that system.

Perfective maintenance

The maintainability of a system can also be measured in terms of the effort taken to make required enhancements to that system. This can be tested by recording the time taken to achieve a new piece of identifiable functionality, such as a change to the database. A number of similar tests should be run and an average time calculated. The outcome is an average effort required to implement specified functionality, which can be compared against a target effort to assess whether the requirements are met.
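As a hedged illustration of that calculation, with invented effort figures and an invented target (the real values would come from the tender’s test protocol):

```python
import statistics

# Hypothetical effort measurements (person-hours) for implementing a set of
# comparable enhancements, e.g. adding a field through UI, logic and database.
efforts_hours = [6.5, 8.0, 5.5, 9.0, 7.0]

# Illustrative acceptance target; the tender would define the real value.
target_hours = 8.0

average = statistics.mean(efforts_hours)
print(f"average effort: {average:.1f}h against target {target_hours:.1f}h")
print("requirement met" if average <= target_hours else "requirement NOT met")
```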

Adaptive maintenance

The maintainability of a system can also be measured in terms of the effort required to make required adaptations to that system. This can be measured in the way described above for perfective maintenance testing.

Preventive maintenance

This refers to actions applied to reduce future maintenance costs. (Source: http://istqbexamcertification.com)

The testing is outsourced to SIG

The actual software testing has been outsourced to the Software Improvement Group (SIG), which will produce a report with the results for the RH/RS healthcare authorities.

All suppliers have uploaded their source code to the RH/RS healthcare authorities for testing purposes.

SIG will do the testing in cooperation with TÜV Informationstechnik (TÜViT), following the quality model defined in ISO/IEC 25010:2011.
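SIG’s rating model and the TÜViT certification scheme are not public in detail, so the following is only an illustrative sketch of the kind of static measurement involved: scoring Python source files by unit size, a common maintainability proxy, using only the standard library. The 15-line threshold is an assumption for illustration, not SIG’s actual criterion.

```python
import ast
import sys

def unit_sizes(source: str) -> list[int]:
    """Return the length in lines of every function in the source."""
    tree = ast.parse(source)
    sizes = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            sizes.append(node.end_lineno - node.lineno + 1)
    return sizes

def rate(sizes: list[int], threshold: int = 15) -> str:
    """Toy rating: share of functions longer than `threshold` lines."""
    if not sizes:
        return "no functions found"
    oversized = sum(1 for s in sizes if s > threshold) / len(sizes)
    return f"{oversized:.0%} of functions exceed {threshold} lines"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            print(path, "->", rate(unit_sizes(f.read())))
```

A real assessment of this kind combines several such properties, for example duplication, unit complexity and module coupling, into ratings against an industry benchmark.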

Our Assessment

RH/RS, as well as most other healthcare authorities, have suffered from software performance issues in the past. Performing a software quality assessment on the potential IT healthcare platforms is a wise decision, and we must compliment RH/RS for taking this precaution. Implementing and operating a complex IT system supporting 12,000 concurrent users is never “a walk in the park.” Better safe than sorry.

We assume that newer technologies will have more coherent platform architectures and more consistent data models (data architecture), which in turn should yield better ratings than the “older” technologies. It is unlikely that RH/RS will publish the results of the assessment, so we will probably never learn whether this is the case.

Assuming that newer technologies do achieve higher scores, this will benefit the suppliers who have more modern technology but fewer references. As we wrote in a previous post, RH/RS has taken a pragmatic approach to the project and will favor suppliers who can demonstrate the highest possible “fit for purpose” on user and process functionality. Weighing in software quality may give suppliers with more modern technology an opportunity to level the playing field.

Acknowledgments

Thank you to Claus Ehlers for expert advice, input and comments.
