13


System and Functional Testing



Theresa (Tess) Settergren / Denise D. Tyler



INTRODUCTION



System and functional testing are critical components of the system life cycle, whether the software or system is under new development or is commercial software that will be configured to a customer’s specific needs. Testing definitions and goals have evolved over the past 60 years from the most simplistic “bug detection” process, conducted toward the end of the design and coding phases. The more contemporary definition goes far beyond bug detection to include dimensions of “correctness” (Lewis, 2009) and alignment of the technology to the business goals, with the intent that the software does what it is supposed to do, errors are caught and resolved very early in the development process, and testing includes the business and end-user impacts.


Testing and quality assurance are not synonymous (Beizer, 1984). Testing comprises activities performed at various intervals during the development process with the overall goal of finding and fixing errors. The system life cycle phase usually drives how testing activities are organized, but the testing plan will normally include coordination of test efforts (scheduling resources, preparing scripts, and other materials), test execution (running prepared test scripts with or without automated tools), defect management (tracking and reporting of errors and issues), and formulation of a test summary. Quality assurance is a proactive, planned effort to ensure that a defect-free product fulfills the user-defined functional requirements. Testing is an indispensable tool, but it represents the most reactive aspect of the process. Testing is all about finding defects. Quality assurance is all about preventing defects. In theory, an exceptional QA process would all but eliminate the need for bug fixing. Although QA is widely used in the broader information technology industry to help guarantee that a product being marketed is “fit for use,” many, perhaps even most, healthcare organizations rely on testing alone. Quality assurance can best be envisioned as an integrated approach comprising test planning, testing, and standards (Fig. 13.1; Ellis, 2012).




• FIGURE 13.1. Testing and Quality Assurance. (Reproduced, with permission, from Shari Ellis, 2012.)


Test planning activities include the following:


•   Requirements analysis—user needs compared to the documented requirements.


•   Ambiguity reviews—identify flaws, omissions, and inconsistencies in requirements and specifications.


•   Non-redundant test script design—all key functions are tested only once in the scripts.


•   Creation of test data—the right kinds of test patients and data to test all functions (a minimal sketch follows this list).


•   Problem analysis—defect management includes uncovering underlying issues.


•   Coverage analysis—the scripts test all key functions and all key nonfunctional components and features.
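To make the test-data and coverage activities concrete, the following is a minimal sketch in Python of how test patients might be defined as reusable fixtures and checked against a list of required functions. The patient names, identifiers, and scenario labels are hypothetical, not drawn from any particular system.

```python
# A minimal sketch of test-data creation and coverage analysis; patient
# names, MRNs, and scenario labels are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TestPatient:
    mrn: str                                        # test-system medical record number
    name: str
    scenarios: list = field(default_factory=list)   # key functions this patient exercises

# Each test patient is built to exercise specific functions exactly once
# (non-redundant script design).
TEST_PATIENTS = [
    TestPatient("T0001", "TEST, ADULT INPATIENT",
                ["admission orders", "med administration", "lab result display"]),
    TestPatient("T0002", "TEST, PEDIATRIC CLINIC",
                ["weight-based dosing alert", "immunization capture"]),
    TestPatient("T0003", "TEST, ALLERGY PATIENT",
                ["allergy list update", "drug-allergy alert"]),
]

def coverage_gaps(required_functions: set) -> set:
    """Simple coverage analysis: which required functions no test patient exercises."""
    covered = {s for p in TEST_PATIENTS for s in p.scenarios}
    return required_functions - covered

if __name__ == "__main__":
    required = {"admission orders", "drug-allergy alert", "e-prescribing"}
    print("Uncovered functions:", coverage_gaps(required))
```

A gap reported by such a check would prompt either a new test patient or an addition to an existing patient's scenarios before scripts are finalized.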


Standards, the third element of quality assurance, involve the creation and enforcement of testing standards, process improvement activities related to QA, evaluation and appropriate use of automated testing tools, and quality measures (application of effectiveness metrics to the QA function itself). Software quality assurance is a planned effort to ensure that a software product fulfills verification and validation testing criteria and additional attributes specific to the project, for example, portability, efficiency, reusability, and flexibility. This chapter will focus primarily on testing types and levels that are most commonly employed in the implementation and maintenance of commercial clinical systems; it will also include concepts more relevant to software development.


TESTING MODELS AND METHODOLOGIES



Testing models have evolved in tandem with the ever-increasing complexity of healthcare software and systems. Early software development models were derived from Deming’s Plan-Do-Check-Act cycle (Graham, Veenendaal, Evans, & Black, 2008; Lewis, 2009). The Waterfall model, for example, was characterized by relatively linear phases of design, development, and testing. Testing is sequential: each phase of testing starts after the previous phase is completed (Singh & Kaur, 2017). Detailed definition of end-user requirements, including any desired process redesign, flowed to logical design (data flow, process decomposition, and entity relationship diagrams), which flowed to physical design (design specifications, database design), then to unit design, and ultimately to coding (writing in a programming language). Testing occurred toward the end of the Waterfall development model—a bit late for any substantive code modifications. The iterative software development model, in contrast, employed cyclical repeating phases, with incremental enhancements to the software during each define-develop-build-test-implement cycle. The main advantage of an iterative approach was earlier validation of prototypes, but the costs of multiple cycles were often prohibitive. As software development models evolved, testing levels were correlated with the Waterfall technical development phases (Table 13.1) to demonstrate how development phases should inform the testing plan and how testing should validate the development phases.



TABLE 13.1. V-Model




Agile software development, in contrast to some of the earlier models, is characterized by nearly simultaneous design, build, and testing (Watkins, 2009). Agile methodology can work well in healthcare projects because it is suited to variable environments (Hakim, 2019). Extreme Programming (XP) is a well-known Agile development life cycle model that emphasizes end-user engagement. Characteristics include the following:


•   Generation of business stories to define the functionality.


•   On-site customer or end-user presence for continual feedback and acceptance testing.


•   Programmer-tester pairs to align coding with testing—and in fact, component test scripts are expected to be written and automated before code is written (a test-first sketch follows this list).


•   Integration and testing of code are expected to occur several times a day.


•   Simplest solution is implemented to solve today’s problems.
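The test-first expectation noted above can be illustrated with a small, hypothetical example: the automated component test exists before (or alongside) the code it exercises. The dose-range function and its limits below are illustrative assumptions, not part of any specific EHR.

```python
# A minimal test-first sketch: the component test is written (and automated)
# before or alongside the code. The function and its limits are hypothetical.
import unittest

def is_dose_in_range(dose_mg: float, low_mg: float, high_mg: float) -> bool:
    """Component under test: flag doses outside the configured safe range."""
    return low_mg <= dose_mg <= high_mg

class DoseRangeTest(unittest.TestCase):
    def test_dose_within_range(self):
        self.assertTrue(is_dose_in_range(500, 250, 1000))

    def test_dose_above_range(self):
        self.assertFalse(is_dose_in_range(1500, 250, 1000))

if __name__ == "__main__":
    unittest.main()
```

Because such tests are automated, they can run every time code is integrated, which supports the several-times-a-day integration cadence listed above.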


Scrum is a project management method that accelerates communication by all team members, including customers or end-users, throughout the project. A key principle of Scrum is the recognition that customers are likely to change their minds about their needs (requirements churn). Scrum approaches requirements churn as an opportunity to respond quickly to emerging requirements and to better meet the business needs of the customer. The rapid improvement cycle associated with Scrum works well in organizations with a strong data governance structure (Chopra & Bonello, 2019). The Agile software development–Scrum project management model encompasses some key quality assurance principles, is adaptable to commercial healthcare information technology system implementation projects, and its intense end-user participation improves the end product’s “fitness for use.”


The newer DevOps model combines traditionally siloed technical development and operations teams, and sometimes security and quality assurance teams, in the testing and quality assurance process (CapGemini, Sogeti, & MicroFocus, 2019). Faster innovation and delivery to customers, along with improved system reliability and security, are the expected benefits of DevOps collaboration. Information technology industry adoption of DevOps and Agile for testing and QA is rising, as is the application of automation and analytics, including artificial intelligence, to testing processes.


TESTING STRATEGY AND PROCESS



Broad goals for health information technology system and functional testing include building and maintaining a superior product, reducing costs by preventing defects, meeting the requirements of and maintaining credibility with end-users, and making the product “fit for use.” Failure costs in system development may be related to fixing the product, to operating a faulty product, and/or to damages caused by using a faulty product. Resources required to resolve defects may start with programmer or analyst time at the component level, but in the context of a clinical information system thought to be “ready for use,” defect resolution includes multiple human resources: testing analysts, interface analysts, analysts for upstream and downstream applications, analysts for integrated modules, database administrators, change management analysts, technical infrastructure engineers, end-users of the new system and of upstream and downstream interfaced systems, and others. In addition to the human resource expenses, unplanned fix-validate cycles require significant additional computer resources and may impact project milestones with deleterious cascading effects on critical path activities and the project budget. Operating a faulty software product incurs unnecessary costs in computer resources and operational efficiencies; potential damages include patient confidentiality violations, medical errors, data loss, misrepresentation of or erroneous patient data, inaccurate data analytics, and lost revenue. There may also be costs associated with loss of credibility. An inadequately tested system that fails “fit for use” criteria will negatively, perhaps irreparably, influence end-user adoption.


A testing plan is as indispensable to a clinical system project as an architect’s drawing is to a building project. You wouldn’t try to build a house without a plan—how would you know if it will turn out “right”? Test planning should begin early in the system life cycle (Douglas & Celli, 2011), and should be aligned with business and clinical goals. Project definition and scope, feasibility assessments, functional requirements, technical specifications, required interfaces, data flows, workflows and planned process redesign, and other outputs of the planning and analysis phases become the inputs to the testing plan. Technological constraints of the software and hardware, and the ultimate design configuration to support the workflows and use cases represent additional inputs to the testing plan. Testing predictably takes longer than expected, and the testing timeline is often the first project milestone to be compressed, so it is advisable to plan for contingency testing cycles. Depending on the complexity and magnitude of a clinical system implementation, three or more months should be reserved for testing in the project timeline. Figure 13.2 depicts a testing timeline for multiple concurrent projects, and reflects varying scope and complexity of the projects.




• FIGURE 13.2. Testing Timeline Example.


SYSTEM ELEMENTS TO BE TESTED



Commonly tested clinical system elements include software functions or components, software features, interfaces, links, devices, reports, screens, and user security and access (Douglas & Celli, 2011). Components and features include clinical documentation templates and tools, order and results management functions, clinician-to-clinician messaging, care plans, and alerts and reminders based on best care practices. Testing of the documentation features should, at minimum, include how clinical data are captured and displayed. For example, are the data captured during a clinical documentation episode displayed accurately, completely, and in the expected sequence in the resulting clinical note? Are discrete data elements captured appropriately for secondary use, such as for building medication, allergy, immunization, and problem lists, driving clinical alerts and reminders, and populating operational and clinical reports? Can clinicians and other users, such as ancillary department or billing staff or auditors, easily find the data? Can the user add and modify data in the way expected for that field (for example, structured coded values versus free text)? Can data be deleted or modified with a versioning trail visible to an end-user? Do the entered values show up in the expected displays?
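As an illustration of how such documentation checks might be automated, the sketch below asserts that discretely captured data appear accurately in a generated note and remain available for secondary use. The record layout and helper function are hypothetical stand-ins for the system under test.

```python
# A hedged sketch of a documentation test: verify that discrete data captured
# during a documentation episode appear in the generated note and remain
# available as discrete elements for secondary use. Layout is hypothetical.
def render_note(captured: dict) -> str:
    """Stand-in for the system's note-generation function under test."""
    return (f"Allergies: {', '.join(captured['allergies'])}\n"
            f"Problems: {', '.join(captured['problems'])}")

def test_discrete_data_flow():
    captured = {"allergies": ["penicillin"], "problems": ["heart failure"]}
    note = render_note(captured)
    # Data captured once should display accurately and completely in the note...
    assert "penicillin" in note and "heart failure" in note
    # ...and remain discrete for lists, alerts, and reports (secondary use).
    assert captured["allergies"] == ["penicillin"]
    assert captured["problems"] == ["heart failure"]

test_discrete_data_flow()
print("Documentation data-flow checks passed.")
```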


System outputs to test include printing, faxing, and clinical messaging. Printing is often complex—it can be automated batches or on-demand local print jobs, and can be controlled at workstation (e.g., Windows printing), application (electronic health record [EHR] system), patient location, or network service (e.g., Citrix) levels (Carlson, 2015). Printing failures can completely disrupt clinical workflows and must be thoroughly tested. Do test requisitions, patient education and clinical summaries, and letters print where they are supposed to print? Are prescriptions for scheduled medications printing on watermarked paper, if required? Do the documents print in the right format, and without extra pages? Faxing is especially important to test thoroughly, particularly when it occurs automatically from the system, to prevent violations of patient confidentiality. The physical transmission is tested to ensure that it reaches the correct destination, includes all required data, is formatted correctly, and contains contact information in case a fax reaches the wrong recipient. Testing should also include the procedures that ensure fax destinations are regularly verified.
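A simple expected-versus-observed pattern can structure output-routing tests. In the hedged sketch below, the routing table, document types, and printer names are illustrative assumptions; in practice the "observed" destinations would be recorded by checking the physical printers or fax logs after each test document is generated.

```python
# A minimal sketch of output-routing verification; all destinations and
# document types are hypothetical examples, not actual configuration.
EXPECTED = {
    ("ICU", "lab requisition"): "ICU_PRINTER_01",
    ("ICU", "patient education"): "ICU_PRINTER_02",
    ("PHARMACY", "scheduled med prescription"): "RX_WATERMARK_PRINTER",
}

# Observed destinations would be captured during test execution; the second
# entry simulates a misrouted document for illustration.
OBSERVED = {
    ("ICU", "lab requisition"): "ICU_PRINTER_01",
    ("ICU", "patient education"): "ICU_PRINTER_01",
    ("PHARMACY", "scheduled med prescription"): "RX_WATERMARK_PRINTER",
}

for key, expected_dest in EXPECTED.items():
    observed_dest = OBSERVED.get(key, "NOT PRODUCED")
    status = "OK" if observed_dest == expected_dest else "FAIL"
    print(f"{status}: {key[1]} at {key[0]} -> {observed_dest} (expected {expected_dest})")
```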


Order entry and transmittal testing occurs at several levels. The simplest level of testing is unit testing: does each orderable procedure have the appropriate billing code associated? Are all orderables available to ordering providers in lists or sets, via searches, and are the orderables named in a clinically identifiable and searchable way, including any abbreviations or mnemonics? Does each order go where it is supposed to go and generate messages, requisitions, labels, or other materials needed to complete the order? For example, an order entered into the EHR for a laboratory test on a blood specimen collected by the ICU nurse generates an order message that is transmitted into the laboratory information system, and may generate a local paper requisition or an electronic message telling the nurse what kind and how many tubes of blood to collect, as well as generating specimen labels. These outputs for a nurse-collected lab specimen order are tested, as well as the outputs for the same ordered lab specimen collected by a phlebotomist. At the integrated level of testing, the order message and content received in the laboratory information system (LIS) are reviewed for completeness and accuracy of the display—does the test ordered exactly match the test received? Does the result value received from the LIS display correctly, with all expected details, such as the reference ranges and units, collection location, ordering provider, and other requirements?
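The integrated-level checks described above lend themselves to partial automation. The following sketch parses a fabricated HL7 v2 result (ORU) message and confirms that the test identifier, value, units, and reference range fields are present and match the ordered test; the message content and field choices are examples only.

```python
# A hedged sketch of integrated-level result checking against a fabricated
# HL7 v2 ORU message. Field positions follow the OBX segment layout.
SAMPLE_ORU = (
    "MSH|^~\\&|LIS|LAB|EHR|HOSP|202107290800||ORU^R01|MSG0001|P|2.5.1\r"
    "PID|1||TEST0001^^^MRN||TEST^ADULT INPATIENT\r"
    "OBR|1|ORD123||2345-7^GLUCOSE^LN|||202107290730\r"
    "OBX|1|NM|2345-7^GLUCOSE^LN||98|mg/dL|70-99|N|||F\r"
)

def parse_obx(message: str) -> dict:
    """Pull the fields of interest from the first OBX segment."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX":
            return {
                "test": fields[3],       # OBX-3 observation identifier
                "value": fields[5],      # OBX-5 observation value
                "units": fields[6],      # OBX-6 units
                "ref_range": fields[7],  # OBX-7 reference range
            }
    raise ValueError("No OBX segment found")

result = parse_obx(SAMPLE_ORU)
assert result["test"].startswith("2345-7"), "result does not match the ordered test"
assert result["units"] == "mg/dL" and result["ref_range"] == "70-99"
print("Result message fields verified:", result)
```

Automated parsing verifies structure and content; visual review in the receiving application is still needed to confirm that the details display correctly for clinicians.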


Interface testing may include inter- and even intrasystem modules, external systems, medical devices, and file transfers. A data conversion from one system to another system is a set of one-way interfaces in addition to possible batch uploads. Interfaces are tested for messaging and content, data transformation, and processing time. Interface messages typically utilize Health Level 7 (HL7) standards (Carlson, 2015). Interface data flows may be unidirectional or bidirectional. System module interfaces include admission/discharge/transfer (ADT), clinical documentation, order entry and results management, document management, patient and resource scheduling, charge entry and editing, coding, claims generation and edit checking, and other major functions. In a fully integrated clinical system, the master files for specific data elements may be shared, so a change to the metadata in a master file needs to be tested for unintended impacts across modules. Clinical systems could alternatively have redundant master files within multiple modules, and testing plans should ensure that the master file data elements are mapped accurately among modules and various interfaced systems. Data conversion testing must ensure that the data being converted from one system will populate the new system accurately. This often requires multiple test cycles in nonproduction environments to test each component of each data element being converted, prior to the actual conversion into the live production environment. The conversion of historic data into the production system can take days, requiring spot-check testing after each new conversion.
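Data conversion spot-checks often reduce to comparing source records with what landed in the target system, element by element. The sketch below assumes hypothetical field names and a single sampled record; a flagged mismatch may be a true defect or an expected transformation that must be verified against the mapping tables.

```python
# A minimal sketch of data-conversion validation: compare sampled source
# records with the converted records, field by field. Field names and the
# sample values are hypothetical.
def compare_records(source: dict, converted: dict, fields: list) -> list:
    """Return (field, source_value, converted_value) tuples that do not match."""
    return [(f, source.get(f), converted.get(f))
            for f in fields
            if source.get(f) != converted.get(f)]

source_record = {"mrn": "T0001", "allergy": "penicillin", "problem": "heart failure"}
converted_record = {"mrn": "T0001", "allergy": "penicillin", "problem": "CHF"}

mismatches = compare_records(source_record, converted_record,
                             ["mrn", "allergy", "problem"])
for field, src, tgt in mismatches:
    # Each mismatch is reviewed: defect, or an expected mapping (e.g., code set change)?
    print(f"Conversion mismatch in {field}: source={src!r} target={tgt!r}")
```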


Clinical systems testing includes links to third-party content for patient education, clinical references and peer-reviewed evidence, coding support systems, and others. Links to Web content resources for clinicians, including software user support, can be embedded into the software in multiple locations. These links should be tested in every location to ensure that they work properly and bring the user to the correct content display.
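Automated link checking can supplement, but not replace, manual verification that each link opens the correct content. A minimal standard-library sketch follows; the URLs are placeholders for embedded third-party content links.

```python
# A simple link-check sketch using only the standard library; the URLs below
# are placeholders for embedded third-party content links.
from urllib.request import urlopen

EMBEDDED_LINKS = [
    "https://example.org/patient-education/diabetes",
    "https://example.org/clinical-reference/heparin",
]

for url in EMBEDDED_LINKS:
    try:
        with urlopen(url, timeout=10) as response:
            print(f"{url} -> HTTP {response.status}")
    except OSError as err:   # covers URLError, HTTPError, and timeouts
        print(f"{url} -> FAILED ({err})")
```

A successful HTTP status only confirms the link resolves; a tester still needs to confirm that the displayed content matches the context from which the link was launched.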


Medical device interfaces include invasive and noninvasive vital signs monitoring, oximetry, wired and wireless cardiac monitoring, hemodynamic monitoring, ventilators, infusion pump integration, urimetry monitoring, and other devices. Middleware can be used to acquire the data and provide the nurse with an interim step of validating data before accepting it into the clinical system. Newer medical devices include wireless continuous vital signs monitoring worn in the hospital or at home, nurse call systems, and bed or chair fall alarm systems. Testing outcomes for medical devices should consider data validation and alert or alarm risk mitigation steps. Data validation refers to the accurate capture of physiological and device data, such as a blood pressure reading and the concurrent infusion rate and dose of a vasopressor, and requires the nurse to review and accept or reject the data. Alerts and alarms for data captured into the clinical system must be thoroughly tested for relevant settings, such as range and sensitivity. It is critical to minimize false alarms, which may result in alert fatigue, yet still alert nurses and other clinicians to genuine patient status changes.
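Alert and alarm settings can be exercised with boundary-value cases: values at, just inside, and just outside the configured limits. In the sketch below, the alarm rule and threshold are illustrative assumptions and are not clinical guidance.

```python
# A hedged sketch of alert-threshold testing: exercise the alarm rule at and
# around the configured limit to confirm alarms fire only when expected.
# The rule and threshold are illustrative only, not clinical guidance.
def spo2_alarm(spo2_percent: float, low_limit: float = 90.0) -> bool:
    """Stand-in for the system's low-SpO2 alarm rule under test."""
    return spo2_percent < low_limit

# Boundary cases: (input value, alarm expected?)
CASES = [(95.0, False), (90.0, False), (89.9, True), (70.0, True)]

for value, expected in CASES:
    actual = spo2_alarm(value)
    assert actual == expected, f"SpO2 {value}: expected alarm={expected}, got {actual}"
print("All alarm boundary cases passed.")
```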


Report testing should be performed at multiple levels. Reports may be static or dynamic. Examples of static reports include pre-formatted displays of documentation events, specialized reports that pull in data from multiple modules within the clinical system, requisitions generated from orders, and reports generated from the data warehouse and accessed within the clinical system. Data warehouse reports typically contain day-old data, intended to support individual clinical decision-making, as well as provide snapshots of population-level information used for trending, quality improvement, operational assessment, and various external reporting requirements. Dynamic reports (using “real-time” data) provide clinical and operations staff with current information on individual or aggregated patients, and can be updated in real time. Examples of dynamic reports include patient lists or registries, appointment no-shows, and queries relevant to a provider (“all of my patients who have a statin prescription”) or nurse (“all of my heart failure patients”). Report testing checks the accuracy of formatting and data display, but also compares the source data to the data in the reports. For example, an EHR must produce a data transmission that meets health information exchange standards, and testing ensures that data are transmitted with precision. Reports may also be sent to other systems, including a health information exchange (HIE). The content, structure, and timing of the transmission to the HIE will need to be tested.
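One practical report check is to recompute a displayed figure directly from the source data and compare it with the report under test. The sketch below uses hypothetical source rows and a hypothetical reported no-show count.

```python
# A minimal sketch of report validation: recompute a value from source data
# and compare it with what the report displays. Rows and counts are examples.
source_rows = [
    {"patient": "T0001", "no_show": True},
    {"patient": "T0002", "no_show": False},
    {"patient": "T0003", "no_show": True},
]

reported_no_show_count = 3   # value displayed on the report under test (example)

recomputed = sum(1 for row in source_rows if row["no_show"])
if recomputed != reported_no_show_count:
    print(f"Report discrepancy: report shows {reported_no_show_count}, "
          f"source data yields {recomputed}")
else:
    print("Report matches source data.")
```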


User security and access can make or break go-live success. User access is typically designed to be role-based, so that each user role has specific functions and data views tailored to specific use cases. For example, a physician requires the ability to order tests and procedures including referrals and consults, e-prescribe, and document with tools tailored to specialty. Physicians and other providers also manage electronic medication administration and reconciliation, update problem lists and allergy lists, view customized displays of data, query data in real time, calculate and submit professional charges, and generate health information exchange transactions, among other functions. The registered nurse performs slightly different activities in the same system, some of which may be required, and will usually incorporate a subset of physician functions. Nursing activities generally depend on the location of care. Hospital nurses generally may not have full ordering privileges, but can enter orders under certain circumstances, and in some states may enter orders under protocol. Some of those orders require physician co-signature. Hospital nurses may or may not be able to update the patient’s problem list, per the organization’s policy, but typically can update allergy lists and prior-to-admission medication lists. The security needed for ordering can be different in an ambulatory clinical setting, where physicians may delegate some ordering tasks. For an enterprise-wide EHR, practice scope differences must be thoroughly tested to ensure that each clinician has the right access in the right care location. Testing should also include what the person in that role should not be able to perform, such as a hospital registrar ordering medications. This is sometimes termed “negative testing.” Security testing should also include testing the ability to do audits. Audit logs are a regulatory requirement for the Joint Commission, the Health Insurance Portability and Accountability Act (HIPAA), meaningful use, and the Health Information Technology for Economic and Clinical Health (HITECH) Act (Greene, 2015). Audit logs or audit reports may be run automatically or manually, by patient or by user. Audits are typically run on the records of celebrities, VIPs, employees, and their families.
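Role-based access checks, including the negative tests described above, can be expressed as a simple matrix of what each role should and should not be able to do. The roles and permissions below are hypothetical examples, not a recommended security model.

```python
# A hedged sketch of role-based access testing, including negative tests:
# confirm what each role can do and what it must not be able to do.
# The role/permission matrix is hypothetical.
ROLE_PERMISSIONS = {
    "physician": {"order_meds", "e_prescribe", "update_problem_list"},
    "hospital_nurse": {"document_med_admin", "update_allergy_list"},
    "registrar": {"update_demographics"},
}

def can_perform(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

# Positive tests: the role should have the function.
assert can_perform("physician", "e_prescribe")
assert can_perform("hospital_nurse", "update_allergy_list")

# Negative tests: the role should NOT have the function.
assert not can_perform("registrar", "order_meds")
assert not can_perform("hospital_nurse", "e_prescribe")
print("Role-based access checks, positive and negative, passed.")
```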


TESTING TYPES



The testing types used vary based on where in the system life cycle testing is planned and on the degree of development or programming involved. Table 13.2 (Beizer, 1984; Graham et al., 2008; Lewis, 2009; Watkins, 2009) describes common testing types suggested, based on magnitude, for development of the testing plan. Table 13.3 describes testing done during implementations and upgrades in more detail. The first step in test planning is to define what is to be accomplished (Graham et al., 2008). The goals should include the scope, expectations, critical success factors, and known constraints, which will be used to shape the rest of the plan. The testing approach defines the “how”—the techniques or types of testing, the entrance and exit criteria, defect management and tracking, the feedback loop with development, status and progress reporting, and perhaps most importantly, exceptionally well-defined requirements as the foundational component. The environment for testing defines the physical conditions: end-user hardware to be included in testing (desktops, laptops, wireless workstations, tablets, smartphones, thick clients, thin clients, scanners, printers, e-signature devices, and others), the interfaces to be included, the test environment for the application and other systems to be tested, automated tools, the type of help-desk support needed, and any special software or system build required to support testing.
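Defect tracking against entrance and exit criteria can also be expressed simply: the testing phase exits only when open defect counts fall within the thresholds the test plan defines. The severity levels and limits in the sketch below are illustrative assumptions.

```python
# A minimal sketch of defect tracking against exit criteria; severity levels,
# thresholds, and defect records are illustrative assumptions.
from collections import Counter

defects = [
    {"id": "DF-101", "severity": "critical", "status": "open"},
    {"id": "DF-102", "severity": "minor", "status": "open"},
    {"id": "DF-103", "severity": "major", "status": "closed"},
]

EXIT_CRITERIA = {"critical": 0, "major": 0, "minor": 5}   # maximum open defects allowed

open_counts = Counter(d["severity"] for d in defects if d["status"] == "open")
criteria_met = all(open_counts.get(sev, 0) <= limit
                   for sev, limit in EXIT_CRITERIA.items())
print("Open defects by severity:", dict(open_counts))
print("Exit criteria met:", criteria_met)   # False here: one critical defect remains open
```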



TABLE 13.2. Testing Types Grid
