Hdl Handle:
http://hdl.handle.net/10755/308538
Category:
Abstract
Type:
Presentation
Title:
Graduate-Level Online Grading Rubrics: Validity and Reliability
Author(s):
Raymond, Roberta A; Hunter, Kathleen M.; Sisk, Rebecca
Lead Author STTI Affiliation:
Phi Pi
Author Details:
Roberta A Raymond, PhD, RN, rraymond@chamberlain.edu; Kathleen M. Hunter, PhD, RN-BC, CNE; Rebecca Sisk, PhD, RN, CNE
Abstract:

Session presented on: Monday, November 18, 2013

Grading rubrics are frequently used in online educational programs as part of the assessment of student learning. Given the diversity of faculty teaching in these programs, there is concern about whether feedback to students is unbiased and impartial. Faculty members at an online Master of Science in Nursing program have been conducting a program of research to ensure the inter-rater reliability and content validity of the rubrics used to grade threaded discussions (TDs).

This presentation describes the research conducted thus far on the TD grading rubric. The following questions were addressed in the initial phase of the research: "Do faculty members apply the TD grading rubric similarly, and does the grading rubric reflect the student knowledge and skills intended by faculty?" Faculty members randomly selected the weekly posts of 20 students in an evidence-based practice nursing course, yielding 196 sets of posts related to TD questions. Two faculty members independently applied the rubric to each student's set of posts, and Cohen's kappa was used to estimate inter-rater reliability.
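Cohen's kappa corrects raw percent agreement for the agreement two raters would reach by chance alone. The sketch below, using made-up rubric scores rather than the study's data, shows the standard computation: observed agreement p_o, chance-expected agreement p_e from each rater's marginal distribution, and kappa = (p_o - p_e) / (1 - p_e).

```python
# Illustration only (hypothetical scores, not the study's data):
# Cohen's kappa for two raters scoring the same set of discussion posts.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical three-level rubric scores for 10 posts:
a = ["meets", "meets", "exceeds", "below", "meets",
     "exceeds", "meets", "below", "meets", "exceeds"]
b = ["meets", "exceeds", "exceeds", "below", "below",
     "exceeds", "meets", "meets", "meets", "meets"]
print(round(cohens_kappa(a, b), 3))  # → 0.355 (6/10 agreement, p_e = 0.38)
```

The same statistic is available as `cohen_kappa_score` in scikit-learn, which would give identical results on these inputs.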

The first study, completed in 2011, resulted in a Cohen's kappa of 0.227 with overall agreement of 56% between the two raters, indicating inconsistency in grading. Based on these results, the rubric was revised, with specific rubric guidelines rewritten for students and faculty.

The study was repeated in 2012 using two different raters and a different course. The outcomes changed little: Cohen's kappa was 0.049 with overall agreement of 52%. Faculty are currently investigating the specific issues in the second study that caused the low Cohen's kappa.
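A 52% agreement rate alongside a near-zero kappa may look contradictory, but it simply means the raters agreed barely more often than their marginal scoring patterns would produce by chance. Working backward from the two reported figures (an inference from the abstract, not the authors' raw data) makes this concrete:

```python
# Back-of-envelope check from the reported second-study figures:
# kappa = (p_o - p_e) / (1 - p_e), so the implied chance agreement p_e is:
p_o, kappa = 0.52, 0.049
p_e = (p_o - kappa) / (1 - kappa)
print(round(p_e, 3))  # → 0.495: chance alone would predict ~49.5% agreement
```

So the raters' 52% observed agreement sat only about 2.5 percentage points above chance, which is why the chance-corrected statistic is so low.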

The research group will next revise the guidelines and directions for the grading rubric, with the goals of developing accurate guidelines for grading TDs and assignments and pursuing a program of continuous improvement in grading and feedback for students.

Keywords:
Inter-rater Reliability; Online; Grading Rubrics
Repository Posting Date:
19-Dec-2013
Date of Publication:
19-Dec-2013
Conference Date:
2013
Conference Name:
42nd Biennial Convention
Conference Host:
Sigma Theta Tau International, the Honor Society of Nursing
Conference Location:
Indianapolis, Indiana, USA
Description:
42nd Biennial Convention 2013 Theme: Give Back to Move Forward. Held at the JW Marriott
