VALIDATION OF A CLINICAL JUDGMENT/ COMPETENCY TOOL FOR USE IN SIMULATION/ CLINICAL

Hdl Handle:
http://hdl.handle.net/10755/157267
Type:
Presentation
Title:
VALIDATION OF A CLINICAL JUDGMENT/ COMPETENCY TOOL FOR USE IN SIMULATION/ CLINICAL
Abstract:
Conference Sponsor:Western Institute of Nursing
Conference Year:2010
Author:Adamson, Katie, RN
P.I. Institution Name:Washington State University
Title:Graduate Teaching Assistant
Contact Address:Nursing Building Room 335, P.O. Box 1495, Spokane, WA, 99210-1495, USA
Email:kaadamson@wsu.edu
Co-Authors:Suzan Kardong-Edgren; Shelly Quint
PURPOSES/AIMS:
The purpose of this study was to further refine an evaluation tool for measuring the clinical judgment and competency of undergraduate nursing students in clinical or simulated medical-surgical nursing coursework.
RATIONALE/CONCEPTUAL BASIS/BACKGROUND:
Measuring student learning and performance in clinical nursing education is complex. Nurse educators and researchers have measured student clinical learning and performance with instruments such as knowledge exams, objective structured clinical examinations (OSCEs), and student satisfaction surveys. Recent attempts to evaluate student learning in more meaningful and comprehensive ways have been rejected as too long and/or cumbersome. The tool used in the present study incorporates the concept of clinical judgment into a revised, more user-friendly evaluation tool. The tool is based on the Lasater Clinical Judgment Rubric (LCJR) (Lasater, 2007). This revision of the LCJR attempted to address the principal critiques of the original tool: that it was too long and cumbersome and contained too much negative verbiage.
METHODS:
This study included three phases over two years. During the first phase, the investigators evaluated and refined the wording and scoring of the original LCJR with the assistance of a psychometrics consultant and BSN and ADN faculty. The second phase involved two rounds of piloting the tool. During each round, the revised tool was distributed to BSN and ADN faculty across Washington State, who used it to evaluate students in introductory medical-surgical nursing courses. The investigators used the scores and feedback from each round of this phase to further refine the tool. The third phase of this study is ongoing and involves further psychometric analyses of the tool using filmed, simulated patient care scenarios.
RESULTS:
Eighty-one students in medical-surgical nursing courses at six Washington State undergraduate nursing programs were evaluated using the tool. The most recent refinement of the tool yielded a mean score of 19.3 out of 40 with a standard deviation of 4.36 for students in the 2nd quarter of nursing school, and a mean score of 27.63 out of 40 with a standard deviation of 5.43 for students in the 4th quarter of nursing school. The reliability coefficient between students' scores and their quarter in nursing school was 0.826. Further refinements and psychometric analyses of the tool are ongoing.
IMPLICATIONS:
Psychometrically sound evaluation tools for measuring student performance in real and simulated clinical situations are an essential prerequisite for establishing best practices for teaching in nursing education. This ongoing effort to develop and refine a reliable and valid evaluation tool will contribute to such work in nursing education.
Repository Posting Date:
26-Oct-2011
Date of Publication:
17-Oct-2011
Sponsors:
Western Institute of Nursing
