HDL Handle:
http://hdl.handle.net/10755/153181
Type:
Presentation
Title:
Poor Samples in Exploratory Factor Analysis Degrade Evidence-Based Practice
Abstract:
Conference Sponsor:Sigma Theta Tau International
Conference Year:2007
Author:Owen, Steven V., PhD
P.I. Institution Name:University of Texas Health Science Center at San Antonio
Title:Professor
Email:owensv@uthscsa.edu
Co-Authors:Robin D. Froman, RN, PhD, FAAN
[Evidence-based Presentation] Aims: Exploratory factor analysis (EFA) is a common analysis and offers important construct validity evidence (Pedhazur & Schmelkin, 1991). But after a half century of debate, there are still no dependable guidelines for reasonable sample sizes in EFA. The usual EFA approach lacks statistical inference tests, so power analyses do not apply. Generally, a sample that is too small yields unstable conclusions, but securing a large sample for validation is resource-intensive. Few studies have examined the effects of using different sample sizes with the two most common factor extraction methods: principal components analysis (PCA) and principal axis factoring (PAF). We review sample size recommendations from the past 50 years and give empirical analyses of the effect of using differing sample sizes in PCA and PAF.
Sample & Analyses: Four large data sets, each with N > 450, provide the empirical basis for the study. Randomly selected subsamples from each database, gradually increasing in size, are subjected to PCA and PAF. Results are compared to show how sample size influences extraction outcomes.
Findings & Discussion: Each data set shows important differences between PCA and PAF as a function of sample size: number of dimensions extracted, dimensional stability within the data, and percent of variation or covariation explained by the final solutions. In light of the outcomes, we offer recommendations for estimating sample sizes required for trustworthy results in validation studies. The recommendations hold promise for health care researchers who seek evidence-based approaches for their psychometric work.
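The subsampling design described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: synthetic data with a known three-factor structure stands in for the four N > 450 data sets, the Kaiser eigenvalue-greater-than-one rule stands in for whatever retention criterion the authors applied, and all function names and parameters are this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, n_items=12, n_factors=3):
    """Synthetic item responses with a known 3-factor simple structure."""
    loadings = np.zeros((n_items, n_factors))
    for j in range(n_items):
        loadings[j, j % n_factors] = 0.7           # each item loads on one factor
    scores = rng.standard_normal((n, n_factors))
    noise = rng.standard_normal((n, n_items)) * 0.6
    return scores @ loadings.T + noise

def pca_eigenvalues(X):
    """PCA extraction: eigenvalues of the full correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

def paf_eigenvalues(X, n_iter=50):
    """PAF extraction: iterate communality estimates on the diagonal of the
    reduced correlation matrix, starting from squared multiple correlations."""
    R = np.corrcoef(X, rowvar=False)
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))     # SMC starting communalities
    Rr = R.copy()
    for _ in range(n_iter):
        np.fill_diagonal(Rr, h2)
        vals, vecs = np.linalg.eigh(Rr)
        vals, vecs = vals[::-1], vecs[:, ::-1]     # descending order
        k = max(1, int(np.sum(vals > 1.0)))        # factors retained this pass
        L = vecs[:, :k] * np.sqrt(np.clip(vals[:k], 0, None))
        h2 = np.sum(L ** 2, axis=1)                # updated communalities
    np.fill_diagonal(Rr, h2)
    return np.sort(np.linalg.eigvalsh(Rr))[::-1]

# Gradually increasing random subsamples, as in the abstract's design.
population = make_data(2000)
for n in (50, 100, 200, 400, 800):
    sub = population[rng.choice(len(population), size=n, replace=False)]
    k_pca = int(np.sum(pca_eigenvalues(sub) > 1.0))
    print(f"n={n:4d}  PCA dimensions (eigenvalue > 1): {k_pca}  "
          f"top PAF eigenvalues: {np.round(paf_eigenvalues(sub)[:3], 2)}")
```

Comparing the retained dimension counts and leading eigenvalues across subsample sizes shows the kind of instability at small N that the presentation documents.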
Repository Posting Date:
26-Oct-2011
Date of Publication:
17-Oct-2011
Sponsors:
Sigma Theta Tau International

All Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.