be used, what type of data needs to be collected, how the data will be analyzed and by whom, and how the evaluation results will be disseminated. Deliberate decisions about the evaluation design ensure that the evaluation plan is valid, reliable, timely, pervasive, and credible.
The use of a theoretical framework is essential to systematic program evaluation. Once measurable outcome criteria are selected, theoretical foundations assist in the identification of data to be collected, the timing of data collection, and how this data then directs and contributes to the evaluation of program outcomes. A valuable theoretical framework for programmatic evaluation is the widely known systems theory (Chen, 2005).
Systems theory includes the identification/analysis of five elements, applicable to programmatic evaluation. These elements are input, throughput, output, environment, and feedback. Each of these elements impacts all other elements and the analysis of each provides valuable data for effective overall programmatic evaluation (Figure 32-1).
According to Chen (2005), the terms identified within systems theory provide a means to clarify the nature and characteristics of any given program. To be successful, any program must accomplish two functions. Internally, “inputs” must be transformed into desirable “outputs”; externally, the program must continuously and successfully interact with its environment to procure resources to support and sustain itself and to meet the environmental need—in this case, the needs of the healthcare environment. Because of this essential element of environmental interaction, successful programs must exist as “open systems,” open to the influence of their environments (Figure 32-1).
Figure 32-1 Systems theory. The figure depicts the program elements Input, Throughput, Output, and Feedback embedded within, and surrounded by, the Environment.
This theoretical framework readily guides programmatic evaluation, both formative and summative, and in this manner requires professional healthcare programs to clearly demonstrate the role of each program component in the evaluative process.
Application to the Evaluative Process
When systems theory is applied to the process of programmatic evaluation in professional education, inputs may be identified as the students enrolling in the program, the educational resources provided by both the educational environment and the external environment, the faculty teaching in the program, and the curriculum. The throughput, or transformation, element of the program includes the implementation of the curriculum by the faculty, the use of classroom/teaching resources, and the clinical practice settings available to the students. In addition, the administration of the program, which involves the hiring and evaluation of faculty, ensuring that teaching resources are available to students, and determining the conceptual framework/teaching philosophy that guides teaching/learning, is considered “throughput” within the context of the program. Outputs are the actual graduates of the program, measures of their knowledge acquisition in all settings where teaching/learning occurs, and their success in meeting licensure requirements and providing competent practice within a variety of healthcare environments. In addition, outputs may be viewed as including the productivity of the faculty outside of the classroom, such as research and scholarly activities as well as service to the community.
Environment is identified as the community(ies) within which the program is located and provides services. This may include the academic environment where the program is established and the political environment that influences the availability/accessibility of healthcare settings for student learning experiences, the resources for such services, and the availability of and support for graduates in professional nursing. In addition, the stakeholders associated with the educational and practice institutions, and their philosophical views of education, are considered environmental influences on program effectiveness. Finally, the direct consumers of this education, the students, significantly influence both the academic environment and the external environment, depending on their learning needs and their expectations of the program.
The fifth element, feedback, is essential to the creation of an effective educational program and valid programmatic evaluation. This informational loop provides data relative to characteristics inherent to the program’s inputs, throughputs, and outputs (both formative and summative) and is used to make adjustments to all aspects of the program. Data is gathered from student evaluations both while enrolled in the program (formative) and following graduation (summative); from the environmental response to program throughputs and outputs; from healthcare organization evaluations of program graduates; and from measures of the quality of health care provided by these graduates. Utilizing this theoretical framework provides a means to perform valid and accurate programmatic evaluation that is valuable to program administrators, national accreditation agencies, and program stakeholders, as well as a means to demonstrate how all elements of the program contribute to the evaluative process.
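For readers who find a concrete representation useful, the mapping of program elements to the five systems-theory categories can be sketched as a simple data structure. The following Python sketch is illustrative only; the class name and the example entries are hypothetical, drawn from the examples in the preceding paragraphs rather than from any published evaluation instrument.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SystemsEvaluationFrame:
    """Illustrative mapping of systems-theory elements for one education program."""
    inputs: List[str] = field(default_factory=list)       # students, resources, faculty, curriculum
    throughputs: List[str] = field(default_factory=list)  # curriculum implementation, administration
    outputs: List[str] = field(default_factory=list)      # graduates, licensure results, scholarship
    environment: List[str] = field(default_factory=list)  # academic, political, practice communities
    feedback: List[str] = field(default_factory=list)     # formative and summative data sources

# Example population drawn from the narrative above (hypothetical program).
frame = SystemsEvaluationFrame(
    inputs=["enrolled students", "educational resources", "faculty", "curriculum"],
    throughputs=["curriculum implementation", "classroom/clinical resources", "program administration"],
    outputs=["graduates", "licensure success", "faculty research and service"],
    environment=["academic setting", "political climate", "stakeholders", "healthcare settings"],
    feedback=["student course evaluations (formative)", "alumni surveys (summative)",
              "employer evaluations of graduates", "healthcare quality measures"],
)
```

Laying the elements out this way makes it easier to confirm, before data collection begins, that every element has at least one planned data source.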
Process of Evaluation
Utilizing this theoretical framework, it is clear how each component of an educational program is essential to overall programmatic evaluation. Faculty frequently do not consider elements of their role in the development of teaching/
learning activities and their course materials as significantly impacting programmatic evaluation. However, to be effective, an educational program’s mission statement, philosophy, and conceptual framework must be reflected in all elements of the program—course descriptions, course objectives, teaching/learning activities, etc., which are developed by individual faculty. For example, if the mission statement addresses the need for graduates to serve a diverse client population, then the course objectives in each course must reflect teaching/learning that allows the students to care for a diverse client population within the context of course content. If cultural sensitivity is viewed as a program outcome for its graduates, then cultural sensitivity education must be a part of all courses within the curriculum.
If successful completion of licensure requirements is stated as a program outcome, then assessment in all courses must reflect activities that ensure students meet testing criteria and demonstrate mastery of the content requisite for meeting the licensure standards.
As stated previously, all elements of an educational program must be clearly guided by programmatic evaluation and contribute data to support that evaluative process. Feedback from the environment—healthcare environment needs and expectations—helps to determine admission requirements for applicants to the program and the faculty expertise needed for teaching in the program. Relative to the throughput element of the program, classroom/educational environments and clinical experiences are selected to meet the learning needs of the students, to ensure environmental needs are met, and to ensure students successfully meet licensure agency requirements. In addition, faculty research and service activities will be guided by community needs, student learning needs, and demands of the university environment for faculty effectiveness. Feedback from both the outputs (graduates) and the external environment provides data to support adjustments to curriculum and teaching/learning to better meet community and learner needs. The use of a theoretical framework also assists in the determination of outcome criteria selected by the evaluator. Program outcomes, when reflected throughout the curriculum and all program elements, assist in the setting of admission criteria (e.g., cultural diversity of students) and also the selection of faculty to teach in the program. Faculty are selected based on their areas of expertise to assist in meeting the needs of the current healthcare environment. In turn, curriculum development is guided by the needs of the current healthcare environment, the needs of the learners, and the research and service focus of faculty.
Teaching and learning activities are guided by the availability and diversity of practice environments, which in turn shape the selection of specific learning activities incorporated into the curriculum. In addition, community alliances enhance the diversity of the learning experiences offered to students and faculty, as well as provide resources to support and sustain the program.
ELEMENTS OF PROGRAM EVALUATION
Clearly, there are a wide variety of criteria that may be selected as program outcomes within the context of programmatic evaluation. Simpler is often better when selecting evaluative criteria, so long as the essential elements of the program are analyzed. In professional healthcare education, learning must be viewed as an essential program outcome, since clinical practice is dependent on a sound knowledge base for that practice (DeSilets & Dickerson, 2008).
Outcome evaluation measures changes in clinical practice following a learning experience. Evaluation of learning incorporates not only the objectives of the learning experience but also the characteristics of the learner. Evaluation of learning enables faculty members to determine the progression of students toward meeting the educational objectives. Specifically, the goal is to discover to what degree learners have attained the knowledge, attitudes, or skills emphasized in a learning experience. Because no instrument exists to look inside the student’s brain and determine directly whether learning has occurred, simulated or designed situations are developed to measure learning. Therefore, evaluation of learning is a value judgment based on the data obtained from the various designed measurements taken in the classroom and clinical settings. In addition, the evaluation methods should match the nature of the course and its outcomes. For example, if students are enrolled in a course that contains 45 clock hours of didactic instruction and 150 clock hours of clinical practice, most of the measurements used to evaluate the course should measure learning in the clinical area.
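The clock-hour example can be made concrete with a small arithmetic sketch. The function below simply converts contact hours into proportions; the idea that evaluation weight should roughly track contact hours is an illustration of the point above, not a published rule, and the function name is hypothetical.

```python
def contact_hour_proportions(didactic_hours: float, clinical_hours: float) -> dict:
    """Return the share of total contact hours spent in each setting,
    as one rough guide for distributing evaluation measures."""
    total = didactic_hours + clinical_hours
    return {
        "didactic": didactic_hours / total,
        "clinical": clinical_hours / total,
    }

# Course described in the text: 45 didactic and 150 clinical clock hours.
proportions = contact_hour_proportions(45, 150)
print(proportions)  # about 23% didactic, 77% clinical -> most measures should target clinical learning
```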
The process for evaluating learning is similar to the process for program evaluation, in that it is based on a planned design, and it in turn contributes significantly to the overall programmatic evaluation. Deliberate planning and thought are
needed to decide what evaluation methods should be used in a course of study.
First, faculty members need to identify what is to be evaluated. What are the outcomes of learning? Inherent in this process is the specification of the domain of learning. In the health professions, learning occurs not only in the cognitive or knowledge domain, but also in the affective or value domain and in the psychomotor or competency domain. Each domain of learning requires different evaluation measures. For example, a multiple choice exam measures a student’s cognitive understanding of a concept but does not measure the student’s ability to perform a clinical skill.
In addition, when measuring learning within a domain, the faculty must determine the complexity of the learning. Complexity of learning is determined by considering the characteristics of the learner (i.e., level of learning, prerequisite courses, past clinical experiences). Integrated into this determination is the identification of the content or concepts associated with the learning experience. Together, these factors describe the behavior to be measured that indicates learning has occurred. Constructing a matrix that identifies all of these factors helps ensure that all concepts are integrated and that the best measurement is chosen.
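One hypothetical way to lay out such a matrix is shown below. The concepts, domains, and measurement methods are placeholder examples chosen for illustration; they are not prescribed by the text.

```python
# A minimal test-blueprint sketch: keys are course concepts, nested keys are
# learning domains, and each value names the measurement judged best for that
# combination. All entries are hypothetical examples.
blueprint = {
    "medication administration": {
        "cognitive":   "multiple-choice dosage-calculation items",
        "psychomotor": "observed skills check-off in the learning lab",
        "affective":   "reflective journal entry on patient safety",
    },
    "therapeutic communication": {
        "cognitive":   "case-study short-answer questions",
        "psychomotor": "simulated patient interview",
        "affective":   "faculty observation with an attitude rubric",
    },
}

# Reviewing the matrix row by row helps confirm that every concept is measured
# in each relevant domain and that no domain relies on a single method.
for concept, domains in blueprint.items():
    for domain, measure in domains.items():
        print(f"{concept:28s} | {domain:11s} | {measure}")
```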
If licensure success rate is selected as a program outcome, data from the curriculum, teaching/learning activities, and practice environments may be collected to support this criterion. Data relative to retention/attrition of students, as well as student progression through the program, may be used to evaluate admission requirements and make adjustments as needed. Cost-effectiveness of the program may be reflected through practice and learning environments as well as through community partnerships utilized to enhance available resources. Data related to time from admission to graduation likewise reflect the curriculum, faculty expertise, assessment techniques, and teaching/learning activities.
Classroom Assessment
Formative evaluation provides valuable data when evaluating student learning and the teaching strategies being used. Classroom assessment is a type of formative evaluation that involves ongoing assessment of student learning and assists faculty in selecting teaching strategies (Melland & Volden, 1998). This technique involves both students and instructors in the continuous monitoring of student learning. The purpose of classroom assessment parallels that of formative evaluation: to collect data during the learning experience so that adjustments can be made and students can benefit from the modifications before the final measurement of learning occurs. This approach is learner centered, teacher directed, mutually beneficial, formative, context specific, ongoing, and firmly rooted in good practice (Angelo & Cross, 1993).
Classroom assessment differs from other measurements of learning in that it is usually anonymous and is never graded. It is context specific, meaning that the technique used to evaluate one class or a content-related learning experience will not necessarily work in another experience.
The use of classroom assessment provides feedback about learning not only to the faculty but to the student as well. Assessment techniques (e.g., asking students to identify the muddiest point discussed or to write a one-sentence summary of the discussion) are simple to use, take little time, and can even be fun for students. An example of a popular assessment technique is the muddiest point. Close to the end of the learning session, students are asked to respond in writing to the question, “What was the muddiest point in this lecture?” (or whatever teaching strategy was used). The faculty then reviews the responses to determine whether a concept is mentioned frequently or a pattern emerges indicating that a concept or content was misunderstood. Based on the results, the faculty may choose to address the “muddiest point” in the next class. Answering the question also causes students to reflect on the session and identify concepts needing further study.
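If muddiest-point responses are collected electronically, tallying them is straightforward. The sketch below simply counts how often each concept is named; the responses shown, and the assumption that they have already been reduced to short concept labels, are hypothetical.

```python
from collections import Counter

# Hypothetical muddiest-point responses, already reduced to short concept labels.
responses = [
    "acid-base balance", "potassium replacement", "acid-base balance",
    "ABG interpretation", "acid-base balance", "potassium replacement",
]

# Concepts named by several students are candidates to revisit in the next class.
tally = Counter(responses)
for concept, count in tally.most_common():
    print(f"{concept}: {count}")
```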
Clinical Evaluation
Health professions are practice disciplines, and therefore student learning involves more than acquiring cognitive knowledge. Learning includes the practice dimension where the student demonstrates the ability to apply theory in caring for patients. Clinical evaluation addresses three dimensions of student learning—cognitive, affective, and psychomotor—and is the most challenging of the evaluative processes. Inherent in this process is the need to demonstrate progressive acquisition of increasingly complex competencies. Evaluating student learning and student competency in clinical is challenging. The faculty must make professional judgments concerning the student’s competencies in practice, as well as the higher-level cognitive learning associated with application. Yet, the clinical environment changes from one learning experience to another, making absolute comparisons among students even in the same clinical setting impossible. In addition, role expectations of the learners and evaluators are perceived differently. The evaluations of a student’s performance frequently are influenced by one’s own professional orientation and expectations. Evaluation in the clinical setting is the process of collecting data in order to make a judgment concerning the students’ competencies in practice based on standards or criteria. Judgments influence the data collected; therefore, it is not an objective process. Deciding on the quality of performance and drawing inferences and conclusions from the data also involves judgment by the faculty. It is a subjective process that is influenced by the bias of the faculty and student and by the variables present in the clinical
environment. These factors and others make evaluating the clinical experience a complex process.
In clinical evaluation, the faculty members observe performance and collect data associated with higher-order cognitive thinking, the influence of values (affective learning), and psychomotor skill acquisition. (This process is addressed in more depth in Chapter 25.) The judgment of a student’s performance in the clinical area can be based on either norm-referenced or criterion-referenced evaluation.
With norm-referenced evaluation, the student’s clinical performance is compared with the performance of other students in the course, whereas criterion-referenced evaluation is the comparison of the student’s performance with a set of criteria. Regardless of the type of evaluation used, providing a fair and valid evaluation is challenging. Although the use of criterion-referenced tools reduces the subjectivity inherent to this process, using multiple and varied sources of data (i.e., observation, evaluation of written work, student comments, staff comments) increases the possibility that a valid evaluation occurs. Also, making observations throughout the designated experience in an effort to obtain a sampling of behaviors that reflect quality of care provided and the extent of student learning helps to validate the evaluation.
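The distinction can be made concrete with a small sketch in which the same set of clinical ratings is judged once against the class average (norm-referenced) and once against a fixed criterion (criterion-referenced). The ratings, the 1-5 scale, and the cutoff of 3.0 are all hypothetical.

```python
from statistics import mean

# Hypothetical clinical-performance ratings on a 1-5 scale.
ratings = {"Student A": 4.2, "Student B": 3.1, "Student C": 2.6}

class_mean = mean(ratings.values())  # norm reference: the group's own performance
criterion = 3.0                      # criterion reference: a predetermined standard

for student, score in ratings.items():
    norm_judgment = "above class mean" if score >= class_mean else "below class mean"
    criterion_judgment = "meets criterion" if score >= criterion else "does not meet criterion"
    print(f"{student}: {score} | norm-referenced: {norm_judgment} | criterion-referenced: {criterion_judgment}")
```

Note that Student B falls below the class mean yet still meets the fixed criterion, which is exactly the difference between the two approaches.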
It has been established that, even with the best-developed evaluation criteria, clinical evaluation is subjective and therefore efforts must be made to ensure that the process is fair. Oermann and Gaberson (1998) addressed the following dimensions associated with fairness in clinical evaluation:
• Identifying the faculty’s own values, attitudes, beliefs, and biases that may influence the evaluation process
• Basing clinical evaluation on predetermined objectives or competencies
• Developing a supportive clinical environment
• Basing judgments on the expected competency according to curriculum and standards of practice
• Comparing the student’s present performance with past performance, with other students’ performances, or with the level of a norm-reference group
The process of evaluating a student’s performance in a clinical setting poses several challenges to evaluation theory. Extensive documentation exists in the nursing literature addressing clinical evaluation and providing examples of evaluation tools. Students are demonstrating their ability to apply knowledge in caring for patients in an uncontrolled environment, and therefore it is difficult for them to hide their lack of understanding or inability to “put it all together.” However, although this setting is ideal for learning, the variables that exist in the setting make each learning experience different. Faculty members also struggle with the concept of when the time for learning ends and the time for evaluation begins.
Again, the literature provides guidelines addressing this issue.
A solution to the challenges of clinical evaluation may exist within the context of clearly defining the parameters of formative and summative evaluation.
Although not without its flaws, this solution worked as long as the clinical experiences existed in the hospital setting and were defined by discrete units of time; however, educating students in a managed care environment has changed the settings and the focus of the clinical experience. Faculty members no longer have the security of the familiar hospital setting and the discrete time units. Patients receive health care in a variety of settings such as day surgery, outpatient clinics, community settings, and in the home. Patients admitted to the hospital stay shorter periods, require more extensive care, and present with more complex situations. Thus, many past strategies that were successful in clinical evaluation are no longer applicable.
Clinical Concept Mapping
Clinical concept mapping was developed by an educational researcher as an instructional and assessment tool for use in science education (Novak, 1990). In general, the technique is a hierarchical graphic organizer developed individually by students to demonstrate their understanding of the relationships among concepts. Key concepts are placed centrally, and subconcepts and clusters of data are placed peripherally. All concepts are linked by arrows, lines, or broken lines to demonstrate the associations between and among the concepts and the data (Baugh & Mellott, 1998).
Clinical concept mapping is applicable in evaluating students in the clinical setting because it facilitates the linking of previously learned concepts to actual patient scenarios. The diagramming of the concepts allows faculty members to evaluate the student’s interpretation of collected data and how it applies to the student’s patient and to management of patient care. It also provides data for faculty members to evaluate the student’s ability to apply class content and concepts to implementing care. Faculty members are also able to evaluate the student’s ability to solve problems and to think critically. Clinical concept mapping can be applied to a variety of clinical settings (Bentley & Nugent, 1998) and to a variety of learning experiences (see Chapter 26).
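Although concept maps are usually drawn by hand or with diagramming software, their underlying structure is simply a set of concepts joined by labeled links, which is what faculty actually inspect when evaluating a map. The sketch below represents a hypothetical fragment of a patient-care map in that form; the clinical content is illustrative only.

```python
# A concept map reduced to its underlying structure: concepts (nodes) and
# labeled links (edges). The clinical content is a hypothetical fragment.
concepts = ["heart failure", "fluid overload", "furosemide", "daily weights"]

# Each link: (from_concept, relationship, to_concept).
links = [
    ("heart failure", "leads to", "fluid overload"),
    ("fluid overload", "is treated with", "furosemide"),
    ("fluid overload", "is monitored by", "daily weights"),
]

# Faculty reviewing a map can scan the links to judge whether the student's
# stated relationships are accurate and complete for the assigned patient.
for source, relation, target in links:
    print(f"{source} --{relation}--> {target}")
```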
Portfolio Assessment
Portfolio analysis can serve as an important component of the process of assessing student learning outcomes and the achievement of overall program outcomes. When used appropriately, portfolio assessment provides valid data
for clinical evaluation of students and may be used to clearly demonstrate a correlation between competencies gained and curricular or program outcomes. A portfolio is a compilation of documents demonstrating learning, competencies, and achievements, usually over a period of time. Used extensively in business to demonstrate one’s accomplishments, the portfolio is often used in education to track academic achievement of outcomes (Ryan & Carlton, 1990). Although portfolios are discussed here in relation to clinical evaluation, they also can be used in different aspects of program evaluation. Portfolios are valid measures in clinical evaluation in that students provide evidence in their portfolios to confirm their clinical competence and to document their learning. They may be used in either formative or summative evaluation. Portfolio assessments are a positive asset in clinical settings in which students are not directly supervised by faculty.
Nitko (1996) describes the use of portfolios in terms of best work and growth and learning portfolios. Best work portfolios provide evidence that students have mastered outcomes and have attained the desired level of competence (summative evaluation), thus contributing to the accreditation process. Growth and learning portfolios are designed to monitor students’ progress (formative evaluation).
Both types of portfolios reflect the philosophy of clinical evaluation.
Portfolios are constructed to match the purpose and objectives of the clinical experience. Faculty members need to clearly delineate the purpose and outcomes and to identify examples of work to be included. Likewise, the criteria by which the contents of the portfolio will be evaluated must be provided for the students.
Students need to understand that portfolios are a reflection of their learning and an evaluation of their performance. The portfolio can be used in conjunction with the clinical pathway (see Chapter 35).
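One hypothetical way to keep a portfolio aligned with this guidance is to record, for each artifact, the program outcome it documents, the criterion shared with students, and whether it serves a growth (formative) or best-work (summative) purpose. The structure and entries below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class PortfolioArtifact:
    """One piece of evidence in a student portfolio (illustrative structure)."""
    title: str
    program_outcome: str       # the curricular/program outcome the artifact documents
    evaluation_criterion: str  # the criterion provided to students in advance
    purpose: str               # "growth" (formative) or "best work" (summative)

portfolio = [
    PortfolioArtifact("care plan for a culturally diverse patient",
                      "culturally sensitive care",
                      "rubric rating of satisfactory or above", "best work"),
    PortfolioArtifact("early-semester reflection on medication errors observed",
                      "safe medication administration",
                      "evidence of self-correction over time", "growth"),
]

for artifact in portfolio:
    print(f"{artifact.purpose:9s} | {artifact.program_outcome:32s} | {artifact.title}")
```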
Although still in the exploratory stage, portfolios are evolving as effective measurements in outcome evaluation. If portfolios are used in clinical evaluation, then faculty members benefit from data that demonstrate the clinical progression of students through the curriculum toward the program outcomes. Although portfolio development has been shown to increase student responsibility for learning, increase faculty/student interaction, and facilitate the identification of needed curricular revision, portfolios are time consuming to compile and present challenges related to document storage. In addition, faculty struggle with the lack of research-based evidence establishing the validity and reliability of grading measures related to program outcome evaluation.
Clinical Journals
Teaching/learning in the clinical setting is broad and diverse, including much more than can be identified superficially. Journaling is a technique that has been
successfully used to bring together those elusive bits of information and experience associated with the clinical experience (Kobert, 1995). Clinical journals provide an opportunity for students not only to document their clinical experience but also to reflect on their performance and knowledge and to demonstrate a level of critical thinking. Journals provide an avenue for students to express their feelings of uncertainty and to engage in dialogue with the faculty concerning the experience. Journaling also can be structured to include nursing care, problem solving, and identification of learning needs. Although journals provide valuable evaluation data, the challenge is to obtain from students journal entries of the quality needed.
Hodges (1996) addressed this issue in a proposed model in which four levels of journal writing were identified. These levels of journal writing progressed from summarizing, describing, and reacting to clinical experience, then to analyzing and critiquing positions, issues, and views of others. Examples of journal entries that parallel this progression are moving from writing objectives or a summary to writing a critique or a focused argument. The key to this progression lies in providing a clear purpose for the journal entry. To think critically, students need to know what they are thinking about (Brown & Sorrell, 1993). Once faculty members have identified the desired outcome of the clinical journal, they can assist the students in attaining these outcomes by providing clear guidelines.
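Hodges’ progression can be summarized as a simple leveled guide. The wording of the expectations below is an illustrative paraphrase of the progression described above, not the published model itself.

```python
# A leveled guide to journal entries, paraphrasing the progression described
# above: summarizing -> describing and reacting -> analyzing -> critiquing/arguing.
journal_levels = {
    1: "summarize the clinical experience (objectives, key events)",
    2: "describe and react to the experience (feelings, uncertainties)",
    3: "analyze the experience against course concepts and evidence",
    4: "critique positions, issues, and the views of others; argue a focused position",
}

# Stating the expected level with each assignment gives students the clear
# purpose the text identifies as the key to quality entries.
for level, expectation in sorted(journal_levels.items()):
    print(f"Level {level}: {expectation}")
```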
Although keeping a journal requires a substantial commitment of time by both faculty and students, it is a valuable evaluation tool for both groups. Controversy exists concerning whether journals should be used to evaluate students’ learning and whether they should be graded (Holmes, 1997). Some educators maintain that grading journals negates the students’ ability to be reflective and truthful concerning clinical experiences; however, as students document the evolution of their clinical experiences, their journal entries are laden with expressions of self-evaluation (Kobert, 1995). If journals are to be graded, then clear and concise criteria must exist that identify not only how they are graded but also what is to be included in the journal. Regardless of the decision to grade or not to grade them, clinical journals provide important evaluation data concerning the student’s performance in the clinical setting and can be used effectively to monitor the student’s development in terms of program outcomes.
In summary, evaluation of learning is an important component of the faculty teaching role and contributes significantly to overall programmatic evaluation.
Because the purpose of evaluation is to provide valid data concerning learning in all domains, a variety of measurements is needed. The key to successful evaluation is to match the evaluation tool with the learning in order to provide reliable and valid data on which to make judgments.
In addition to making a judgment concerning a student’s performance in clinical, it is important to remember that the other purpose of clinical evaluation
is to provide feedback to the student regarding his or her performance and to provide the student with an opportunity to improve in the needed areas. Clinical evaluation should be a consistent and frequent means of communicating the student’s progress. Using an adopted clinical evaluation tool ensures that all students are counseled using the same criteria. The evaluation process needs to be constructed so that active student participation is included. Feedback should be stated in the specific terms of the measurement tool and the outcomes of the course. Comments should be based on data and should not contain general global clichés such as “will make a good nurse.” Strengths, as well as areas needing improvement, should be documented. If a student needs to improve to pass the clinical experience, then the student should be given, in writing, those areas needing improvement with specific guidelines on what behavior is required to pass. Again, all comments should be stated in terms of the criteria on the evaluation tool.
CONCLUSION
This chapter has addressed some aspects of the role of evaluation in program development and student success. Clearly, evaluation is an important part of the faculty role and contributes significantly to the overall success of professional academic programs. An understanding of evaluation and how it impacts the teaching/learning environment is critical. Proper use of evaluation techniques requires an awareness of both their limitations and their strengths, and it requires matching the appropriate measurement with the purpose or role of evaluation. In addition, the role of evaluation in the success of professional healthcare education programs, and in their ongoing existence to meet the needs of the healthcare environment, must not be underemphasized. As faculty and program administrators are increasingly held accountable to external stakeholders, program evaluation becomes increasingly significant to a program’s success and continued existence. All elements of a professional education program, as mentioned previously, contribute valuable data to the process of rigorous programmatic evaluation and assist in the ongoing enhancement of program offerings. Professional practice in healthcare environments is entering a new era with increased use of, and dependence on, technology and advancements in knowledge and skills acquisition. This new era will significantly alter the traditional roles of faculty and students. Inherent in this new era of teaching is the mandate to evaluate teaching and learning using less traditional methods to demonstrate success in meeting program outcomes and meeting the needs of the healthcare community.
REFERENCES
Angelo, T., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
Baugh, N., & Mellott, K. (1998). Clinical concept mapping as preparation for student nurses’ clinical experiences. Journal of Nursing Education, 37(6), 253–256.
Bell, D. F., Pestka, E., & Forsyth, D. (2007). Outcome evaluation: Does continuing education make a difference? The Journal of Continuing Education in Nursing, 38(4), 186–190.
Bentley, G., & Nugent, K. (1998). A creative student presentation on the nursing management of a complex family. Nurse Educator, 23(3), 8–9.
Brown, H., & Sorrell, J. (1993). Use of clinical journals to enhance critical thinking. Nurse Educator, 18(5), 16–18.
Chen, H. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. London: Sage Publications.
Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage.
DeSilets, L. D., & Dickerson, P. S. (2008). Assessing competency: A new accreditation resource. The Journal of Continuing Education in Nursing, 39(6), 244–245.
Fink, A. (2005). Evaluation fundamentals: Insights into the outcomes, effectiveness, and quality of health programs. London: Sage Publications.
Gard, C. L., Flannigan, P. N., & Cluskey, M. (2004). Program evaluation: An ongoing systematic process. Nursing Education Perspectives, 25(4), 176–179.
Grumet, B. R. (2002). Quick reads: Demystifying accreditation. Nursing Education Perspectives, 23(3), 114–117.
Hodges, H. (1996). Journal writing as a mode of thinking for RN-BSN students: A leveled approach to learning to listen to self and others. Journal of Nursing Education, 35, 137–141.
Holmes, V. (1997). Grading journals in clinical practice. Journal of Nursing Education, 36(10), 89–92.
Kobert, L. (1995). In our own voice: Journaling as a teaching/learning technique for nurses. Journal of Nursing Education, 34(3), 140–142.
Melland, H., & Volden, C. (1998). Classroom assessment: Linking teaching and learning. Journal of Nursing Education, 37(6), 275–277.
Nitko, A. (1996). Educational assessment of students (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Novak, J. (1990). Concept mapping: A useful tool for science education. Journal of Research in Science Teaching, 27(10), 937–949.
Oermann, M. H., & Gaberson, K. B. (1998). Evaluation of problem-solving, decision-making, and critical thinking: Context-dependent item sets and other evaluation strategies. In M. H. Oermann & K. B. Gaberson (Eds.), Evaluation and testing in nursing education. New York: Springer Publishing.
Ryan, M., & Carlton, K. (1990). Portfolio applications in a school of nursing. Nurse Educator, 22(1), 35–39.
Worral, P. S. (2008). Evaluation in healthcare education. In S. B. Bastable (Ed.), Nurse as educator: Principles of teaching and learning for nursing practice (3rd ed.). Sudbury, MA: Jones & Bartlett.
