49


Computer Use in Nursing Research



Veronica D. Feeg / Theresa A. Rienzo / Marcia T. Caton / Olga S. Kagan



INTRODUCTION



Nursing research involves a wide range of tools and resources that researchers employ throughout the research process. From individual or collaborative project initiation, through refinement of the idea, selection of approaches, development of methods, capture of data, analysis of results, and dissemination of findings, computer applications are an indispensable resource for the researcher. Investigators must be well prepared in a variety of computerized techniques for research activities as they are employed in the domain of knowledge under investigation. Without the power of technology, contemporary research would not reach the levels of sophistication required to discover and understand health and illness today. With emerging technologies and applications, researchers will continue to find efficiencies and innovation in conducting science, while consumers will feel real-time benefits of research conducted with the computer technologies used in healthcare delivery.


In addition to the traditional approaches of the scientific method, researchers today have new avenues to explore in the development of knowledge. New opportunities to mine existing “big data” for evaluation, discovery, and transforming data to knowledge (D2K) are forming a bridge between the process of conducting research and the products of discovery. New tools for automatic capture and analysis are changing methods. New online strategies and the exponential growth of mobile and tablet apps serve both as the process of researching health and as the product of researched health interventions. Computer use in nursing has exploded concomitantly, including mobile applications downloadable to smartphones for researchers and patients, and tools for exploring large data sets that have already been captured.


Today, hospital-wide information technology (IT) is the backbone of all healthcare delivery; it is tied to reimbursement, and it inevitably forms the data engine that health systems put to work to research improvement and outcomes-driven questions. With the power of systemwide integration supported by the proliferation of cloud technology, nursing is becoming a discipline that must be data-centric and caring at the same time if nurses are to function in their roles. It is therefore imperative to understand the underlying terminologies and sources of data, the communication of those connected pieces of information, and the elements of nursing environments through informatics research if one is to understand how computers and nursing research co-exist.


Cloud computing has reshaped the IT industry, with the potential to transform a large part of how organizations purchase and implement technology, and it provides the power to manage, communicate, analyze, and share large system databases. Software delivered as a service can grow exponentially, redesigning how information is stored and reported. Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services (Armbrust et al., 2010). The availability of cloud services for storage, communication, and access to statistics and other licensed research products will enhance researchers’ activities across the cycle of inquiry, from idea to dissemination.


Outside the electronic health record (EHR) and computerized health systems, in the rapidly changing world of Internet technologies and the growing era of machine learning and artificial intelligence (AI) applications, information management and computer-enhanced intervention research have produced a new body of science that will continue to grow. Blending the focus of computer use in research (tools and process) and research on computer use (informatics research, secondary analysis, data mining, and AI) calls for an understanding of both process and products. This chapter will provide an overview of the research process for two separate and fundamentally different research approaches, quantitative and qualitative, and discuss select computer applications and uses relative to these approaches. The discussion will be supplemented by examples of current science and the trajectory of research on the impact of informatics, electronic records, treatments, and integrated technologies using the computer as a tool.


Research tools have evolved across many aspects of the research process and have gone beyond their historic application, once limited to number crunching and business transactions. Field-notes binders, ring tablets, index cards, and paper logs have all but disappeared from the researchers’ world. Personal computers, laptops, PC-tablets and iPads, and handheld devices have become part of the researcher’s necessary resources in mounting a research project or study. Wireless technologies are ubiquitous and connect people to people as well as researchers to devices. Cloud computing today connects diverse enterprises with stable sources of software and data that can be shared or used by anyone, at any time and any place. Numerous enhancements have been added to the well-known text processing products to store and manage data, reducing time and effort in every research office. In addition, a wide range of new handheld, mobile, flexible, and interconnected Bluetooth technologies for database management and sharing of subjects, contacts, or logistics have emerged in the research product marketplace. Nurse researchers use a range of hardware and software applications that are generic to research development operations in addition to the tools and devices that are specific to research data collection, analysis, results reporting, and dissemination. New apps appear continuously, customized to the data collection, management, or analysis process.


In today’s electronic healthcare environment, numerous advances have been made in the sources of data collection relative to general clinical applications in nursing, health, and health services. System implementations for large clinical enterprises have also provided opportunities for nurses and health service researchers to identify and extract information from existing computer-based resources. The capture of rich nursing data from these systems, data that can be managed and mined for advanced analyses, should be recognized in the development of EHRs and other sources so that organizations such as hospitals can become “learning organizations,” in which sharing and learning from analysis are continuously integrated into organization-planned change to enhance outcomes of care.


In addition, the era of Web-based applications has produced a wide range of innovative means of entering data and, subsequently, automating data collection in ways that were not possible before. With advancements in clinical systems and accepted terminologies and vocabularies to support nursing assessment, intervention, and evaluation, computers are increasingly being used for clinical and patient care research. Although research is a complex cognitive process, certain aspects of conducting research can be aided by software applications. For example, examination of nursing care/patient outcomes and the effect of interventions would have been prohibitive in the past, but with the aid of computers and access to large data sets, many health outcomes can be analyzed quantitatively and qualitatively. Data analytics built into software and visualization capabilities using large samples of existing data can help predict best outcomes. For example, one hospital used a rapid analysis of system-wide data to examine procedures that differed across the system and to determine how to minimize postoperative infection complications. Its preoperative procedure was standardized and changed within 6 months, a change that in the past would have taken years of randomized trials (Englebright, 2013).


With a wider view of computer use in nursing research, the objective of this chapter is fourfold: (1) to provide an overview of innovations in systems, software, and mobile applications related to the stages of the research process; (2) to describe how new technologies and mobile and wireless tools facilitate the work of the researcher in both quantitative and qualitative aspects; (3) to highlight how research on widespread technology in healthcare has influenced patient care and health systems; and (4) to give attention to the explosion of research in categories delineating clinical and nursing informatics research. Together these serve as a snapshot, with contextual influences, of research on computer use and research using computers and related technologies for the future.


The chapter begins with a focus on some of the considerations related to the logistics and preparation of the research proposal, project planning, and budgeting, followed by the implementation of the proposal with data capture, data management, data analysis, and information presentation. The general steps of proposal development, preparation, and implementation are applicable to both quantitative and qualitative approaches, with an explosion of apps available to aid in these processes. However, no chapter about computer use in research could be complete without acknowledging the range of research now appearing in the literature that examines the trajectory of how innovative technologies, integration, and Web-based applications are used in patient care. With increasing emphasis on the cost and quality of healthcare, the computer-as-source-of-data and the computer-as-intervention must be part of understanding computer use in nursing research today. The chapter closes with projections of D2K plans across healthcare, learning organizations, and the horizon of AI that will soon disrupt health delivery as we know it today.


PROPOSAL DEVELOPMENT, PREPARATION, AND IMPLEMENTATION



Research begins with a good idea. Good science is typically based on the nurse researcher’s identification of a problem that is amenable to study from a theoretical perspective and existing evidence, choosing a paradigmatic approach. This sets the stage for selecting one’s methods for investigating the problem or developing the idea. Because the theoretical paradigm emerges from an iterative process, and because the theoretical perspective will subsequently drive the organization of the research study, it is important to distinguish between these two distinct approaches, quantitative and qualitative, or some combination of both in mixed methods (Polit & Beck, 2017). Each approach can be facilitated at different points along the proposal process with select computer applications. These will be described as they relate to the methodology.


Quantitative or Qualitative Methodology


The important distinction to be made between the quantitative and qualitative approaches is that for a quantitative study to be successful, the researcher is obliged to fully develop each aspect of the research proposal before collecting any data, that is, a priori, whereas for a qualitative study to be successful, the researcher is obligated to allow the data collected to determine the subsequent steps as the study unfolds in the process and/or the analysis. Quantitative research is derived from the philosophical orientations of empiricism and logical positivism, with multiple steps bound together by precision in quantification (Polit & Beck, 2017). The requirements of a hypothesis-driven or numerically descriptive approach are logical consequences of, or correspond to, a specific theory and its related tenets. The hypothesis can be tested statistically to support or refute the prediction made in advance. Statistics packages are the mainstay of the quantitative methodologist, but they are not the researcher’s only connection to computers.


The qualitative approaches offer different research traditions (e.g., phenomenology, hermeneutics, ethnography, and grounded theory, to name a few) that share a common view of reality, which consists of the meanings ascribed to the data, such as a person’s lived experiences (Creswell & Creswell, 2018; Creswell & Poth, 2018). With this view, theory is not tested; rather, perspectives and meaning from the data narratives of participants are described and analyzed. For qualitative nursing studies, knowledge development is generated from participants’ experiences and responses to health, illness, and treatments as voiced by the participants themselves. The requirements of the qualitative approach are a function of the philosophical frames through which the data unfold and evolve into meaningful interpretations by the researcher (Polit & Beck, 2017). Many new interview transcription devices for data capture, along with analytics software applications, assist the qualitative methodologist to enter, organize, frame, code, reorder, and synthesize text, audio, video, and sometimes numeric data.


General Considerations in Proposal Preparation


The cloud has revolutionized the connectivity that has become indispensable for all users of software and shared resources. It has facilitated development of the research proposal, communication with team members, documentation, and planning for the activities that will take place when implementing the study. These include the broad categories of cloud-based office programs: word processing, spreadsheet, and database management applications. Microsoft Office 365 (microsoft365.com) is a suite of cloud-capable programs that continues to offer improved clerical tools to manage text from numerous sources and assemble it in a cogent and organized package. Cloud connectivity gives researchers access to all programs and data virtually from anywhere. All versions of user hardware, from smartphones to tablets to PCs, have broadened connectivity for researchers with a range of handheld apps that keep users in constant connection with the research progression, participant log-ins, and ongoing data analytics throughout the execution of the project.


Cloud-based products from Google (google.com) to Microsoft Office 365 provide capabilities and a platform into which other off-the-shelf applications can be integrated. Tables, charts, and images can be inserted, edited, and moved as the proposal takes shape, with final products in publishable forms. Line art and scanned images using Adobe industry standards such as Illustrator CC (www.adobe.com) or Photoshop CC (www.photoshop.com), now with cloud capability, can be integrated into the document for clear visual effects. These offer the researcher and grant managers tools to generate proposals, reports, and manuscripts that can be submitted electronically directly or following conversion to portable document formats (PDFs) using Adobe Acrobat (www.adobe.com) or other available conversion products.


There are a variety of Web-based reference management software products available as add-ons to word processing, with varying prices and functionalities, that leverage the power of connectivity and sharing with team members. For example, unique template add-ons give Microsoft Word in Microsoft Office 365 additional power to produce documents in formatted styles. Bibliographic management applications emerge frequently, and librarians often help sort out the best ways to keep reference materials in order. Common Web resources such as subscription-based reference managers maintain resources that are available to users from anywhere. For example, RefWorks from ProQuest (https://www.proquest.com/products-services/refworks.html) provides options for reference management from a centrally hosted Web site. Searching online is one function of these applications; working between the reference database and the text of the proposal document is then efficient and easy, with “cite as you write” capability calling out citations into the finished document when needed. Members of the research team can share files, materials, and the ongoing development of the proposal. Output style sheets can be selected to match publication or proposal guidelines.


Research applications and calls for proposals are often downloadable from the Internet as interactive forms in which individual fields are editable and the documents can be saved in a portable, fillable format such as Adobe Acrobat, printed, or submitted from the Web. The Web also allows the researcher to explore numerous opportunities for designing a proposal tailored to potential foundations for consideration of funding. Calls for proposals, contests, and competitive grants may provide links from Web sites that give the researcher a depth of understanding of what is expected in the proposal. There are more and more home-grown submission procedures today for grants, journal manuscripts, and conference “calls for abstracts” with Web-based instructions. These often convert the documents automatically to PDF for submission, with key data fields organized and sorted for easier review procedures, and with instructions customized for the user.


Research Study Implementation


A funded research study becomes a logistical challenge for most researchers in managing the steps of the process. Numerous demands for information management require the researcher to maintain the fidelity of the procedures, manage the subject information and paper flow, and keep the data confidential and secure. These processes require researchers to use a database management system (DBMS) that is reliable. Several DBMS software applications exist and have evolved to assist the researcher in the overall process of study implementation. These applications are operations oriented, used in non-research programs and projects as well, but can assist the researcher in management of time, personnel, money, products, and ultimately dissemination, with reporting capability for reviews and audits.


The ubiquitous Microsoft Office 365 suite and Google systems include programs that (1) manage data in a relational database (Microsoft Access), (2) crunch numbers in a flat database (Microsoft Excel), and (3) share through document storage with hyperlink Web capabilities. Customized, more sophisticated, and integrated proprietary database management applications from locally produced Web-based systems provide the researcher with ways to operationalize the personnel, subjects, forms, interviews, dates, times, and/or tracking systems over the course of the project. Many of these proprietary systems can map out the research flow for enrollment of subjects, consenting, and data capture together in one solution. Clinical trials management software (CTMS) is available from a variety of vendors. For example, one vendor, Trial By Fire Solutions, is the team behind SimpleTrials, an eClinical software application with a focus on clinical trial management to improve planning, execution, and tracking of clinical trials (www.simpletrials.com/why-simpletrials-overview). Most of these applications require specially designed screens that are unique to the project if the research warrants complicated connections such as reminders, but simple mailing lists and zip codes of subjects’ addresses and contact information in a generic form can also be extremely useful for the researcher. Some of these traditionally designed clinical tool applications are emerging as portable apps on devices such as smartphones and tablets (Table 49.1).



TABLE 49.1. Clinical Application Tools




Scheduling and project planning software is also available in cloud products such as Microsoft Project, which allows the project director to organize the work efficiently and track schedules and deadlines using Gantt charts over the lifetime of the project. In more sophisticated research offices, customized tracking and data capture devices, programs, and systems have been launched; one exemplar is the set of data management tools from the recent U.S. Census, which has captured data and made them available to researchers (https://www.census.gov/quickfacts/fact/table/US/PST045218).


One more important consideration related to the development of the plan, for the seasoned researcher or the novice doctoral dissertation investigator, is the essential step of submitting the proposal to the Institutional Review Board (IRB). Home institutions that have IRBs will have specific procedures and forms for the researcher, who can benefit from developing the proposal electronically. In some institutions, IRB document management has been handled through contracts with outside Internet organizations that provide mechanisms for posting IRB materials, managing the online certifications required, and communicating with the principal investigators. One such example is IRBNet.org, which hosts services for organizations to manage IRB and other administrative documents associated with the research enterprise, with reported use across 50 states and more than 1600 organizations (IRBNet, n.d.).


In summary, the general considerations in developing and conducting a research study are based on philosophical approaches that will dictate which methodology the researcher uses to develop the study. This will subsequently influence the research and computer applications used in carrying out the project, followed by the steps of proposal preparation, depending on the choice of application most useful for the quantitative or qualitative study being planned. After identifying the research problem, the researcher must proceed through the steps of the process, where computers play an important role unique to each of the methodologies.


THE QUANTITATIVE APPROACH



Data Capture and Data Collection


Data capture and data collection are processes that are viewed differently from the quantitative and qualitative perspectives. Data collection can take a number of forms depending on the type of research and the variables of interest. Computers are used in data collection for paper-and-pencil surveys and questionnaires as well as to capture physiological and clinical nursing information in quantitative or descriptive patient care research. There are also unique automated data-capture applications, developed recently, that facilitate large-group data capture in single contacts, allow paper versions of questionnaires to be scanned directly into a database ready for analysis, or provide questionnaires online with Web-based survey tools.


Paper and Pencil Questionnaires. Paper and booklet surveys still exist in data collection, but new enhancements aid the researcher in time-saving activities. Surveys and questionnaires can be scanned or programmed into a computer application. Researchers are also using computers for direct data entry via automated data capture, where subjects enter their own responses on a device with simultaneous coding of responses to questions. These online survey tools provide a wide range of applications, including paper or portable versions, and range in price and functionality. Many proprietary tools have been automated to be executed by the researchers and distributed to the subjects enrolled in individual studies, to capture data efficiently on the Web and provide a number of analytic and comparative norm-referenced scores (capterra.com). One example is the computerized neurocognitive testing produced and delivered to subjects online by CNS Vital Signs (https://www.cnsvs.com/), with demonstrated reliability and validity (Gualtieri & Johnson, 2006).


The use of Web-based responses to questions by clinicians as well as researchers has grown. Respondents or their surrogates can enter information directly into the computer or Web site through Internet access. There are several research study examples in which patients with chronic conditions used a computer application or the Internet as the intervention as well as the data capture device; patients or caregivers responded to questions directly and the data were processed within the same system (Berry et al., 2010; Berry, Halpenny, Bosco, Bruyere, & Sanda, 2015).


Automated Data Capture. Other examples of unique data capture in research include individual devices such as the “Smart Cap” used to measure patient compliance with medications. The Medication Event Monitoring System (MEMS® 6) (Fig. 49.1) automates digitized data that can be downloaded for analysis in research such as patient adherence studies (El Alili, Vrijens, Demonceau, Evers, & Hiligsmann, 2016; Figge, 2010).




• FIGURE 49.1. MEMS (Medication Event Monitoring System) SmartCap Contains an LCD Screen; MEMS Reader Transfers Encrypted Data from the MEMS Monitor to the Web Portal. (Published with permission of MWV Aardex Group, www.aardexgroup.com.)


A variety of online survey tools also give researchers the power to collect data from a distance, without postage, using the Internet. Depending on the price and functionality of the software, these applications can present questionnaires to subjects in graphically appealing formats, delivered via e-mail, Web sites, blogs, and even social networking sites such as Facebook or Twitter if desirable. Social media mechanisms such as blogs and tweets often provide sources for data analyses, albeit questionably scientific ones, that have sometimes been harnessed to extract meaning for researchers. Web surveys, although previously criticized for yielding poorer response rates than traditional mail (Granello & Wheaton, 2004), are becoming increasingly popular and are deemed appropriate for their cost and logistical benefits (Dillman, 2011). The data from the Internet can be downloaded for analysis, and several applications provide instant summary statistics that can be monitored over the data collection period. Several of these programs are available for free with limited use; others yield advanced products that can be incorporated into the research, giving mobility (e.g., smartphones) and flexibility (e.g., scanning or online entry) to the data capture procedures. These applications include (1) Survey Monkey (www.surveymonkey.com); (2) E-Surveys Pro (www.esurveyspro.com); and (3) Qualtrics (www.qualtrics.com). Many of these products continue to enhance functionality, team, and sharing capabilities, with integration of statistical analysis, graphics, and qualitative narrative exportability (capterra.com).


Software packages also exist that can be integrated with the researcher’s scanner to optically scan a specially designed questionnaire and produce the subjects’ responses in a database ready for analysis. OmniPage 19 (Nuance Imaging, 2014) is a top-rated optical character recognition (OCR) program that converts a scanned page into plain text. Programs such as SNAP Survey software (www.snapsurveys.com) and Remark Office OMR 10 (www.remarksoftware.com) can facilitate scanning large numbers of questionnaires with speed and accuracy. These products, enhanced even further by Web-based versions, increase the accuracy of data entry with very low risk of errors, thereby improving the efficiency of the data capture, collection, and entry processes.


Physiological Data. The collection of patient physiological parameters has long been used in physiological research. Some of these parameters, such as cardiac rhythm and rate or fluid and electrolyte values, can be measured directly from patient devices and captured in the patient care records of hospital systems. For example, hospitals have developed mechanisms to use information from intensive care unit (ICU) data to calculate benchmarks for mortality and resource use. Now that many measurements taken from various types of imaging (e.g., neurological, cardiovascular, and cellular) have become digitized, they can also be entered directly from the patient into computer programs for analysis. Each of these applications is unique to the measures, such as systems to capture cardiac functioning and/or pulmonary capacity, devices that can relay contractions, or monitors that pick up electronic signals remotely. Numerous measurements of intensity, amplitude, patterns, and shapes can be characterized by computer programs and used in research. For example, the APACHE IV system and its multiple development versions have been tested in benchmarking hospital mortality and outcomes from captured physiological data in several groups of ICU patients (Dahhan, Jamil, Al-Tarifi, Abouchala, & Kherallah, 2009; Paul, Bailey, Van Lint, & Pilcher, 2012; van Wagenberg, Witteveen, Wieske, & Horn, 2017). Each of these measurement systems has evolved with the unfolding of research specific to its questions, and within each community of scholars, issues about the functionality, accuracy, and reliability of electronic data extracted from these physiological devices are debated.


Along with the proliferation of clinical diagnostic measurement systems, there has been a rapid expansion of unique computer applications for the data analysis aspects of these clinical systems and their physiological and record sources. Millions of gigabytes of stored data can be tapped for multiple studies of the existing data. Data mining is a powerful tool in the knowledge discovery process that can now be done with a number of commercial and open-source software packages (Khokhar et al., 2017). Data mining and the evolving “big data” initiatives to make patient care data available introduce new ways to manipulate existing information systems.


With increased attention to comparative effectiveness research (CER), several government and private organizations are encouraging researchers to hone the techniques to extract valid and reliable information from these large data sets (Sox & Greenfield, 2009). For example, the Agency for Healthcare Research and Quality (AHRQ) developed its Effective Health Care (EHC) Program as a partnership with researchers to examine scientific evidence and compare effectiveness (https://effectivehealthcare.ahrq.gov/). The Effective Health Care Program was initiated in 2005 to provide valid evidence about the comparative effectiveness of different medical interventions. The objective is to help consumers, healthcare providers, and others make informed choices among treatment alternatives. Through its Comparative Effectiveness Reviews, the program supports systematic appraisals of existing scientific evidence regarding treatments for high-priority health conditions. It also promotes and generates new scientific evidence by identifying gaps in existing scientific evidence and supporting new research. (Full reports are available at http://www.effectivehealthcare.ahrq.gov/reports/final.cfm.)


Data mining is the exploration and analysis of large quantities of data in order to discover meaningful patterns and rules, applied to large physiological data sets as well as clinical sources of data. The nature of the data and the research question determine the tool selection (i.e., the data-mining algorithm or technique). Analytics tools and consultants exist to help researchers unfamiliar with these algorithms use data mining for analysis, prediction, and reporting purposes (Lebied, 2018). Many of the first commercial applications of data mining were in customer profiling and marketing analyses. Today, many special technologies can be applied, for example, to predict physiological phenomena such as genetic patterns, with the promise of therapeutics in the next generation through genomics research (Issa, Byers, & Dakshanamurthy, 2014).
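

To make the idea concrete, the following minimal sketch fits one common data-mining model, a decision-tree classifier, using the open-source Python library scikit-learn. The records, variable names, and outcome are hypothetical illustrations, not a prescribed method; a real study would load a de-identified extract and involve far more rigorous validation.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical de-identified records; a real project would load an extract,
# e.g., pd.read_csv("deidentified_visits.csv")
df = pd.DataFrame({
    "age":            [34, 71, 58, 45, 80, 62, 29, 67],
    "length_of_stay": [2, 9, 5, 3, 12, 7, 1, 8],
    "num_meds":       [1, 8, 5, 2, 10, 6, 0, 7],
    "readmitted_30d": [0, 1, 0, 0, 1, 1, 0, 1],
})

X = df[["age", "length_of_stay", "num_meds"]]   # candidate predictors
y = df["readmitted_30d"]                        # outcome to predict

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)

# Evaluate how well the learned rules generalize to held-out records
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))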


The National Institutes of Health (NIH) is undertaking several initiatives to address the challenges and opportunities associated with big data. As one component of the NIH-wide strategy, the Common Fund, in cooperation with all NIH Institutes and Centers, began supporting the Big Data to Knowledge (BD2K) initiative in 2012, which aimed to facilitate broad use of biomedical big data, develop and disseminate analysis methods and software, enhance training in disciplines relevant to large-scale data analysis, and establish centers of excellence for biomedical big data (NIH, 2012). Large volumes of digital data from multiple sources, such as EHRs, genomics, monitoring devices, population surveys, automatically captured coded health-related reports, and nursing care–related data elements, all have the potential to be used in secondary analyses to describe, explore, predict, compare, and evaluate health-related data to answer researchable questions (Westra & Peterson, 2016). Patient data captured to provide patient care can subsequently be used for additional purposes beyond patient care, yielding new insights extracted from big data (Brennan & Bakken, 2015).


Unique Nursing Care Data in Research. Scientists and technologists from a variety of disciplines are working hard to identify the domain of data and information that is transferable across situations, sites, or circumstances and can be captured electronically for a wide array of analyses to learn how the health system impacts the patients it serves. The American Nurses Association (ANA) has supported the need to standardize nursing care terms for computer-based patient care systems. The clinical and economic importance of structured recording to represent nursing care was recognized by the acceptance of the nursing minimum data set (NMDS) (Werley, Lang, & Westlake, 1986). As the integration of EHRs has proliferated since the 1990s, ANA has incrementally accepted multiple terminologies for the description of nursing practice, including the North American Nursing Diagnosis Association (NANDA) taxonomy of nursing diagnoses, the Clinical Care Classification (CCC) System, the Nursing Interventions Classification (NIC), the Nursing Outcomes Classification (NOC), the Patient Care Data Set, the Omaha System for home healthcare, and the International Classification for Nursing Practice (ICNP). The Clinical Care Classification System (sabacare.com) nursing terminology has been accepted by the U.S. Department of Health and Human Services (HHS) (DHHS, 2007) as a named standard within the Healthcare Information Technology Standards Panel (HITSP) Interoperability Specification for Electronic Health Records, Biosurveillance, and Consumer Empowerment, as presented to a meeting of the American Health Information Community (AHIC), a federal advisory group on health IT (Saba, 2012, 2014). In 2014, the National Action Plan for Sharable and Comparable Nursing Data for Transforming Health and Healthcare was published to coordinate the long-standing efforts of many individuals. Foundational to integrating nursing data into clinical data repositories (CDRs) for big data science and research is the implementation of standardized nursing terminologies, common data models, and information structures within EHRs. The plan built on existing federal health policies for standardized data that are relevant to meaningful use of EHRs and clinical quality eMeasures (Westra et al., 2015).


Since its first meeting, the Nursing Knowledge: Big Data Science group has continued to advance this National Action Plan through the efforts of the original 12 subgroups, which include the care coordination, context of care, mobile health, nursing value, and social and behavioral determinants of health workgroups, among others, through 2019 (https://www.nursing.umn.edu/centers/center-nursing-informatics/news-events/2019nursing-knowledge-big-data-science-conference). Over the years, the conference has continued to grow and expand its reach. In addition to the articulated plan of the Nursing Knowledge: Big Data Science Committee, the buy-in of senior nursing leadership in national and international healthcare organizations, such as the Chief Nursing Officer (CNO) and the Chief Nursing Informatics Officer (CNIO), is critical in order to influence future data system adoption and the integration of a standardized nursing language in the EMR.


Although no single standard has emerged from the multiple working groups, they provide guidance toward a structured coding system to record patient care problems that are amenable to nursing actions, the actual nursing actions implemented in the care of patients, and the evaluation of the effectiveness of these actions, so that researchers can analyze large nursing data sets (Bakken, 2013; Byrne & Lang, 2013; Englebright, Aldrich, & Taylor, 2014). With the federal government’s “interoperability” incentive to enhance cross-platform compatibility and collaboration, “harmonized” data elements between nursing terminologies and SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) (Coenen, 2012; Coenen & Jansen, 2013) are critical to the development of nursing research using nursing data. Research on outcomes of care is one of the centerpieces of this massive policy, which has begun to show an impact on integrated information technologies in healthcare that can transform practice. Nursing research on nursing practice captured from standardized terminology will be essential to document outcomes of nursing care. Big data initiatives will promote data mining of nursing data that can fuel the ongoing development of health services research focusing on nursing (Glassman & Rosenfeld, 2015; Khokhar et al., 2017; Westra & Peterson, 2016).


Data Coding


In most quantitative studies, the data for the variables of interest are collected for numerical analysis. These numerical values are entered into designated fields in the process of coding. Coding may be inherent in software programs for physiological data and many of the electronic surveys. The coding may be generated by a computer program from measurements directly obtained through imaging or physiological monitoring, or entered into a computer by a patient or researcher, from a printout, questionnaire, or survey, into a database program. Most statistical programs contain data editors that permit the entry of data by a researcher as part of the statistical application. In such a situation, fields are designated and numerical values can be entered into the appropriate fields without the use of an extra program. For mechanisms that translate and transfer source data to prepare them for analysis, generic programs such as Microsoft Excel provide basic to complex statistical analysis and visualization options. Other analytical tools maximize visualization, such as the open-access “R” (www.r-project.org) and the proprietary Tableau (www.tableau.com), with robust graphic capabilities.
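

As a simple illustration of coding, the following Python sketch uses the pandas library to translate categorical survey responses into numeric codes; the item names, response labels, and code values are hypothetical.

import pandas as pd

# Hypothetical raw responses as keyed from two survey items
raw = pd.DataFrame({
    "q1_satisfaction": ["Agree", "Strongly agree", "Disagree", None],
    "sex": ["F", "M", "F", "M"],
})

# Codebook mapping Likert labels to the numeric values used in analysis
likert_codes = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
                "Agree": 4, "Strongly agree": 5}

coded = pd.DataFrame({
    # map() applies the codebook; unanswered items become NaN (missing)
    "q1_satisfaction": raw["q1_satisfaction"].map(likert_codes),
    "sex": raw["sex"].map({"F": 0, "M": 1}),
})
print(coded)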


Coding data is a precise operation that needs careful consideration and presents the researcher with challenges that warrant technical or cognitive applications. Coding data is a combination of cognitive decisions and mechanical clerical recording of responses in numerical form, with numerous places where errors can occur. There are several ways of reviewing and “cleaning” the data prior to analysis. Some computer programs allow the same data to be entered twice, called double-data entry or two-pass verification. This is done preferably by different people to check for errors, on the premise that if the two entries do not match, one entry is wrong. One must also check for missing data and take them into consideration in the coding and analyses. New versions of advanced statistical software help in these activities.
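

A minimal sketch of two-pass verification in Python with pandas follows; the subject identifiers and values are hypothetical, and a real workflow would key each pass from the paper forms independently.

import pandas as pd

# The same three forms keyed independently by two data-entry staff;
# subject S02's anxiety score was mis-keyed in the second pass.
pass1 = pd.DataFrame({"subject_id": ["S01", "S02", "S03"],
                      "age": [34, 57, 41],
                      "anxiety": [12, 18, 9]}).set_index("subject_id")
pass2 = pd.DataFrame({"subject_id": ["S01", "S02", "S03"],
                      "age": [34, 57, 41],
                      "anxiety": [12, 81, 9]}).set_index("subject_id")

# compare() returns only the cells where the two passes disagree;
# each mismatch is resolved against the paper source before analysis.
print(pass1.compare(pass2))

# A quick missing-data check on the verified file
print(pass1.isna().sum())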


Another type of data coding can be described through the example of translating data from documentation of patient care using coding strategies. Current research on coding nursing data using standardized nursing terminology is evident in several studies in the literature (Englebright et al., 2014). For example, precisely coded data from a standardized terminology can be aggregated and statistically analyzed into meaningful information. In studies such as Saba and Taylor (2007), Moss and Saba (2011), and Dykes and Collins (2013), researchers have discussed mechanisms of aggregating nursing action types (e.g., assess, perform, teach, or manage) into information on the amount of time or effort a nurse spends in a day and the associated costs.


Data Analysis


There are many ways to consider data analysis. These considerations are focused around the broad types of research of interest in nursing and general research goals or questions. These goals may require different statistical examinations: (a) descriptive and/or exploratory analyses; (b) hypothesis testing; (c) estimation of confidence intervals; (d) model building through multivariate analysis; and (e) path analysis and structural equation model building. Various types of nursing research studies may contain a number of these goals. For example, to test an intervention using an experimental or quasi-experimental design, one may first perform descriptive or exploratory analyses followed by tests of the hypotheses. Quality improvement, patient outcome, and survival analysis studies may likewise contain a number of different types of analyses depending on the specific research questions.


These analyses can all be calculated with traditional statistics packages that have evolved over multiple versions, each new version adding editing, layout, and exporting efficiencies. More recent enhancements have included modeling abilities, with varying strengths in the visual graphic productions of the packages. Two of the most popular programs in use today are IBM SPSS Statistics 24 (formerly the Statistical Package for the Social Sciences) (https://www.ibm.com/analytics/spss-statistics-software) and Statistical Analysis Services (SAS) (https://www.sas.com/en_us/software/stat.html); however, a variety of other packages and programs exist, such as STATA 15 (https://www.stata.com) or the open-access “R” software available for free download (www.r-project.org). R acts as a free alternative to traditional statistical packages such as SPSS, SAS, and Stata in that it is an extensible, open-source language and computing environment for Windows, Macintosh, UNIX, and Linux platforms. It performs a wide variety of basic to advanced statistical and graphical techniques at little to no cost to the user. These advantages over other statistical software encourage the growing use of R in cutting-edge social science research (Muenchen, 2009). Which package one selects depends on the user’s personal preference and the particular strengths and limits of the applications, including the number of variables, options for analyses, and ease of use. These packages have given the user the power to manipulate large data sets with relative ease and test out statistical combinations, exponentially improving the analyses possible in a fraction of the time it once took.


The different types of analyses required by the goals of the research will be addressed further. This description will be followed by examples of types of nursing research that incorporate some of these types of analyses.


Descriptive and Exploratory Analysis. The researcher may first explore the data means, modes, distribution patterns, and standard deviations, and examine graphic representations such as scatter plots or bar graphs. Tests of association or significant differences may be explored through chi-squares, correlations, various univariate, bivariate, and trivariate analyses, and an examination of quartiles. During this analysis process, the researcher may recode or transform data by mathematically multiplying or dividing scores by certain log or factor values. Combining several existing variables can also create new variables. These transformations, “re-expressions,” or “dummy-codings” allow the researcher to analyze the data in appropriate and interpretable scales. The researcher can then easily identify patterns with respect to variables as well as groups of study subjects of interest. Both of the commercial statistical packages, IBM SPSS Statistics 24 (https://www.ibm.com/analytics/spss-statistics-software) and SAS (https://www.sas.com/en_us/software/stat.html), provide the ability to calculate these tests and graphically display the results in a variety of ways. With SPSS, the researcher can generate decision-making information quickly using a variety of powerful statistics, understand and effectively present the results with high-quality tabular and graphical output, and share the results with others using various reporting methods, including secure Web publishing. SAS provides the researcher with tools that can help code data in a reliable framework; extract data for quality assurance, exploration, or analysis; perform descriptive and inferential data analyses; maintain databases to track and report on administrative activities such as data collection, subject enrollment, or grant payments; and deliver content for reports in the appropriate format. SAS allows for unique programming within the variable manipulations and is often the format for large publicly available data sets for secondary analysis. SAS product lists have expanded with potential for application in AI, particularly as it relates to business enterprises.
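

The same kinds of exploratory steps can be sketched briefly in Python with pandas and SciPy, analogous to what SPSS or SAS produce; the data and variable names below are hypothetical.

import pandas as pd
from scipy import stats

# Hypothetical study data; a real project would load a coded data file
df = pd.DataFrame({
    "age":     [25, 34, 47, 52, 61, 38, 44, 56],
    "anxiety": [14, 9, 17, 11, 20, 8, 15, 13],
    "sex":     ["F", "M", "F", "F", "M", "M", "F", "M"],
    "smoker":  ["N", "N", "Y", "N", "Y", "N", "Y", "Y"],
})

# Descriptive statistics: means, standard deviations, quartiles
print(df[["age", "anxiety"]].describe())

# Dummy-code a categorical variable for later modeling
df = pd.get_dummies(df, columns=["sex"], drop_first=True)

# Chi-square test of association between two categorical variables
table = pd.crosstab(df["sex_M"], df["smoker"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")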


As part of exploratory analysis, simple, binary, and multiple regression analyses can be used to examine the relationships between selected variables and a dependent measure of interest. Modeling is a newer area of these statistical applications that combines variables into generalizable mathematical formulas. Printouts of correlation matrices, extensive internal tests of data assumptions on the sample, and regression analysis tables provide the researcher with condensed, readable statistical information about the relationships in question.
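

As an illustration, a multiple regression of this kind might look as follows in Python with the statsmodels library; the variables and values are hypothetical.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: predicting job satisfaction from two predictors
df = pd.DataFrame({
    "age":              [28, 35, 42, 51, 39, 46, 30, 58],
    "years_experience": [3, 8, 15, 22, 10, 18, 5, 30],
    "satisfaction":     [62, 68, 75, 81, 70, 77, 65, 88],
})

X = sm.add_constant(df[["age", "years_experience"]])  # predictors plus intercept
model = sm.OLS(df["satisfaction"], X).fit()

# Summary table: coefficients, R-squared, and diagnostic statistics
print(model.summary())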


Hypothesis Testing or Confirmatory Analyses. Hypothesis testing and advanced analyses are based on an interest in relationships and describe what would occur if the null hypothesis were statistically rejected, leaving the alternative as true. These are conditional relationships based on the variables selected for study, and the typical mathematical tables and software for determining P values are accurate only insofar as the assumptions of the test are met (Polit & Beck, 2017). Certain statistical decisions, such as statistical power, type II error, selecting alpha values to balance type I and type II errors, and the sampling distribution, must be made by the researcher regardless of the type of computer software. For example, one application to calculate power is G*Power (http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/), a free, downloadable power calculator. These concepts are covered in greater detail in research methodology courses and are outside the scope of the present discussion.
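

As an illustration of an a priori power analysis comparable to what G*Power computes, the short sketch below uses the Python statsmodels library; the effect size, alpha, and power values are conventional textbook examples, not recommendations for any particular study.

from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# in a two-group t-test with alpha = .05 and power = .80
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")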


Model Building. An application used for a confirmatory hypothesis testing approach to multivariate analysis is structural equation modeling (SEM) (Byrne, 1984). Byrne describes this procedure as consisting of two aspects: (1) the causal processes under study are represented by a series of structural (i.e., regression) equations and (2) these structural relations can be modeled pictorially to enable a clearer conceptualization of the theory under study. The model can be tested statistically in a simultaneous analysis of the entire system of variables to determine the extent to which it is consistent with the data. If goodness of fit is adequate, the model argues for the plausibility of postulated relationships among variables (Byrne, 1984). Most researchers may wish to consult a statistician to discuss the underlying assumptions of the data and plans for testing the model.


IBM SPSS 22 offers Amos 22 (https://www.ibm.com/us-en/marketplace/structural-equation-modeling-sem), a powerful SEM and path analysis add-on to create more realistic models than if using standard multivariate methods or regression alone. Amos is a program for visual SEM and path analysis. User-friendly features, such as drawing tools, configurable toolbars, and drag-and-drop capabilities, help the researcher build structural equation models. After fitting the model, the Amos path diagram shows the strength of the relationship between variables. Amos builds models that realistically reflect complex relationships because any variable, whether observed (such as survey data) or latent (such as satisfaction or loyalty), can be used to predict any other variable.


Meta-Analysis. Meta-analysis is a technique that allows researchers to combine data across studies to achieve more focused estimates of population parameters and examine effects of a phenomenon or intervention across multiple studies. It uses the effect size as a common metric of study effectiveness and deals with the statistical problems inherent in using individual significance tests in a number of different studies. It weights study outcomes in proportion to their sample size and focuses on the size of the outcomes rather than on whether they are significant.
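

One standard way to carry out this weighting is the fixed-effect, inverse-variance method, in which larger studies (with smaller variances) carry more weight. The short Python sketch below illustrates the computation with hypothetical effect sizes and variances; dedicated meta-analysis software adds heterogeneity tests, random-effects models, and displays.

import numpy as np

# Hypothetical standardized mean differences and their variances from 4 studies
effect_sizes = np.array([0.42, 0.31, 0.58, 0.25])
variances    = np.array([0.040, 0.025, 0.060, 0.015])

# Inverse-variance weights: more precise (larger) studies weigh more
weights = 1.0 / variances
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect size: {pooled:.3f}")
print(f"95% CI: [{pooled - 1.96 * se_pooled:.3f}, {pooled + 1.96 * se_pooled:.3f}]")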


Although the computations can be done with the aid of a reliable commercial statistical package such as Comprehensive Meta-Analysis (Borenstein, Hedges, Higgins, & Rothstein, 2009), the researcher needs to consider the following specific issues in performing the meta-analysis (Polit & Beck, 2017): (1) justify which studies are comparable and which are not, (2) rely on knowledge of the substantive area to identify relevant study characteristics, (3) evaluate and account for differences in study quality, and (4) assess the generalizability of the results from fields with little empirical data. Each of these issues must be addressed with a critical review prior to performing the meta-analysis.


Meta-analysis offers a way to examine results of a number of quantitative research studies that meet meta-analysis researchers’ criteria. Meta-analysis overcomes problems encountered in studies using different sample sizes and instruments. The software application Comprehensive Meta-Analysis (https://www.meta-analysis.com/pages/features.php?cart=BD482503003) provides the user with a variety of tools to examine these studies. It can create a database of studies, import the abstracts or the full text of the original papers, or enter the researcher’s own notes. The meta-analysis is displayed using a schematic that may be modified extensively, as the user can specify which variables to display and in what sequence. The studies can be sorted by any variable including effect size, the year of publication, the weight assigned to the study, the sample size, or any user-defined variables to facilitate the critical review done by the researcher (Fig. 49.2).




• FIGURE 49.2. Comprehensive Meta-Analysis (CMA) User Interface. (Published with permission of Biostat, Inc., https://www.meta-analysis.com/pages/features.php?cart=BD482503003.)


Graphical Data Display and Analysis. There are occasions when data need to be displayed graphically as part of the analysis and interpretation of the information or for more fundamental communication of the results of computations and analyses. Visualization software is becoming even more useful as the science of visualization in combination with new considerations of large data from the “Fourth Paradigm” unfolds (Hey, Tansley, & Tolle, 2009). These ideas begin with the premise that meaningful interpretation of data-intensive discoveries needs visualizations that facilitate understanding and unfolding of new patterns. Nurses are currently discovering new ways to present information in meaningful ways through these visualization techniques (Delaney, Westra, Monsen, Gillis, & Docherty, 2013).


Most statistical packages, including SPSS, SAS, STATA, and R, and even spreadsheets such as Excel, provide the user with tools for simple to complex graphical translations of numeric information, thus allowing the researcher to display, store, and communicate aggregated data in meaningful ways. Special tools for spatial representations exist, such as mapping and geographic displays, so that the researcher can visualize and interpret patterns inherent in the data. Geographic information system (GIS) technology is evolving beyond the traditional GIS community and becoming an integral part of the information infrastructure of visualization tools for researchers. For example, GIS can assist an epidemiologist with mapping data collected on disease outbreaks or help a health services researcher graphically communicate areas of nursing shortages. GIS technology illustrates relationships, connections, and patterns that are not necessarily obvious in any one data set, enabling the researcher to see the overall relevant factors. The ArcGIS Online system by ESRI (https://www.arcgis.com/home/index.html) is one of several Web-based GIS systems, some of which are open access, for the management, analysis, and display of geographic knowledge represented as a series of information sets. Tableau (www.tableau.com) offers individual and cloud-based subscriptions that can be used on any computer or laptop. Tableau includes extensive analytics and visualization exports, as well as maps and globes with three-dimensional capabilities to describe networks, topologies, terrains, surveys, and attributes (Fig. 49.3).




• FIGURE 49.3. Tableau Map Screenshot: Geographic Map of Ambulance Time to Hospital Using Zip Codes Across the 5 Boroughs of NYC. (From V. Feeg, the author. Used with permission.)
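

The essence of such a display, mapping a measured value onto geographic coordinates, can be sketched in a few lines of Python with matplotlib; the coordinates and response times below are hypothetical. Full GIS work with boundaries, layers, and projections would use a dedicated package such as geopandas or a system like ArcGIS or Tableau.

import matplotlib.pyplot as plt

# Hypothetical sites: longitude, latitude, and mean ambulance time (minutes)
lons = [-73.99, -73.95, -73.87, -74.01, -73.92]
lats = [40.73, 40.80, 40.76, 40.64, 40.69]
response_times = [8.5, 12.1, 9.7, 15.3, 11.0]

# Color each site by its measured value to reveal spatial patterns
sc = plt.scatter(lons, lats, c=response_times, cmap="RdYlGn_r", s=120)
plt.colorbar(sc, label="Mean ambulance time (min)")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Hypothetical ambulance times by site")
plt.show()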


In summary, this section has provided a brief discussion of the range of traditions, statistical considerations, and computer applications that aid the researcher in quantitative data analysis. As computers have continued to integrate data management functions with traditional statistical computational power, researchers have been able to develop more extensive and sophisticated projects with the data collected. Gone are the days of the calculator and punch cards, as the computing power now sits on researchers’ desktops or laptops, with speed and functionality at their fingertips. The future of machine learning to enhance the AI capabilities of dynamic research is upon us.


THE QUALITATIVE APPROACH



Data Capture and Data Collection


The qualitative approach focuses on activities in the steps of the research process that differ greatly from the quantitative methods in fundamental sources of data, collection techniques, coding, analysis, and interpretation. Thus, the computer becomes a different kind of tool for the researcher in most aspects of the research, beginning with the capture and recording of narrative or textual data. For qualitative research requiring narrative content analysis, the computer can be used to record the observations, the narrative statements of subjects, and the memos of the researcher in word processing applications for later coding. Software applications that aid researchers in transcription tasks include text scanners, voice recorders, and speech recognition software (Table 49.2). New digital recorders on the market use sophisticated, higher cost voice-recognition software. With these technologies, researchers or transcriptionists can easily manipulate the recording and type the data verbatim. Even iPhones and other smartphones have high-quality recording applications that aid the qualitative researcher in capturing narrative statements. These narrative statements, like the quantitative surveys, can be either programmed for use in other applications or entered directly into the computer as subjects’ responses.



TABLE 49.2. Voice and Dictation Apps for Interviews
