Evaluation of Interrater Reliability on a Clinical Judgment Rubric: A Tale of Three Experts

Handle:
http://hdl.handle.net/10755/316846
Category:
Abstract
Type:
Presentation
Title:
Evaluation of Interrater Reliability on a Clinical Judgment Rubric: A Tale of Three Experts
Author(s):
Cazzell, Mary A.; Anderson, Mindi; Frye, Linda; Taylor, Tim
Lead Author STTI Affiliation:
Delta Theta
Author Details:
Mary A. Cazzell, PhD, RN, email: mary.cazzell@cookchildrens.org; Mindi Anderson, PhD, RN, CPNP-PC, CNE, CHSE, ANEF; Linda Frye, MSN, RN; Tim Taylor, BSN, RN
Abstract:

Poster presented on: Friday, April 4, 2014, and Saturday, April 5, 2014

Introduction: The purpose of this study was to evaluate the interrater reliability of the Lasater Clinical Judgment Rubric (LCJR) used to rate nursing student performance during a pediatric medication administration Objective Structured Clinical Evaluation (OSCE). The science of nursing education research in simulation can only be advanced when psychometrically established measures are used.

Methods: Standardized rater training was provided to three raters using an LCJR training video. The raters, who came from varying backgrounds (academic versus clinical), scored 160 videotaped OSCEs of senior-level nursing students performing pediatric medication administration, using an OSCE checklist correlated to indicators of clinical judgment on the LCJR. The LCJR comprises 11 items that rate clinical judgment at four levels of effectiveness (Beginning, Developing, Accomplished, and Exemplary) under four major categories (Noticing, Interpreting, Responding, and Reflecting).

Results: Moderate interrater reliability (ICC = 0.53) was obtained for total LCJR scores across all three raters. Scoring by two raters (one academic, one clinical) achieved the strongest interrater reliability for Information Seeking (ICC = 0.75), Making Sense of Data (ICC = 0.97), and Interpreting (ICC = 0.76). The lowest interrater reliability, across and between all raters, was for Prioritizing Data (ICC = 0.05). In paired-samples t tests, the two raters (academic vs. clinical) showed no significant differences in scoring psychomotor skills (hand hygiene/gloving, intravenous and oral medication administration), affective-domain skills (communication, professional behaviors, and dress), or total LCJR scores.
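The abstract does not state which ICC model the authors used. As an illustration only (not the study's analysis), the sketch below computes ICC(2,1) in Shrout–Fleiss terms (two-way random effects, absolute agreement, single rater) from ANOVA mean squares; the function name and rating matrix are hypothetical, synthetic examples.

```python
# Illustrative sketch, assuming a Shrout-Fleiss ICC(2,1) model:
# two-way random effects, absolute agreement, single rater.
# The data below are synthetic, not the study's ratings.

def icc21(ratings):
    """ratings: one row per subject, one column per rater."""
    n = len(ratings)        # number of subjects
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # ANOVA mean squares: subjects (rows), raters (columns), residual error.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three hypothetical raters scoring four students:
scores = [[1, 2, 1], [2, 2, 3], [3, 4, 3], [4, 4, 5]]
print(round(icc21(scores), 2))  # prints 0.82
```

With real data, a library routine such as pingouin's `intraclass_corr` would typically be used instead of a hand-rolled computation.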

Discussion/Conclusion: The strongest interrater reliability statistics were for “yes/no” performance items. The lowest scores across and between all raters were for checking for medications that were due (Prioritizing Data). Considerations for establishing the interrater reliability of clinical judgment tools must include: the clinical versus academic background of raters, correlation of the simulation scenario to the concepts measured by the evaluation instrument, complexity of the checklist and/or overlap of scoring rubric categories, and consistency of rater training related to expected benchmarks for the student population.

Keywords:
interrater reliability; clinical judgment rubric; simulation evaluation
Repository Posting Date:
13-May-2014
Date of Publication:
13-May-2014
Conference Date:
2014
Conference Name:
Nursing Education Research Conference 2014
Conference Host:
Sigma Theta Tau International, the Honor Society of Nursing; National League for Nursing
Conference Location:
Indianapolis, Indiana, USA
Description:
Nursing Education Research Conference 2014 Theme: Nursing Education Research; held at the Hyatt Regency Indianapolis
Note:
This is an abstract-only submission. If the author has submitted a full-text item based on this abstract, you may find it by browsing the Virginia Henderson Global Nursing e-Repository by author. If author contact information is available in this abstract, please feel free to contact him or her with your queries regarding this submission. Alternatively, please contact the conference host, journal, or publisher (as appropriate) for further details regarding this item. If a citation is listed in this record, the item has been published and is available via open-access avenues or a journal/database subscription. Contact your library for assistance in obtaining the as-published article.
