The future academic potential of applicants is typically assessed by results from a standardized examination based on national norms. The instruments most commonly used by doctoral programs in educational leadership to satisfy basic admission requirements are the Graduate Record Examination (GRE) (Educational Testing Service, n.d.) and the Miller Analogies Test (MAT) (The Psychological Corporation, 2004). Because both instruments are nationally normed, they yield percentile scores that are roughly comparable regardless of when the test was taken.
Most doctoral programs, and most universities, assume within the selection process that these academic measures enjoy high predictive validity. This assumption is so well ingrained that few, if any, doctoral programs in educational leadership or institutions of higher education ever test it at the department or program level. As a result of this assumption and this neglect, several voids persist in current knowledge.
One void concerns the actual utility of each academic predictor (grade-point averages and standardized test scores) for differentiating between successful and unsuccessful applicants relative to their probable success in a particular doctoral program in educational leadership. Interestingly, this concern has not been overlooked by publishers of the standardized tests. As the leading publisher notes, "Departments using GRE scores for graduate admission, fellowship awards, and other approved purposes are encouraged to collect validity information by conducting their own studies" (Educational Testing Service, n.d.).
This recommendation from a leading testing organization highlights that no single universal cut score exists in practice. More importantly, it indicates clearly that validity is program specific rather than test-score specific: what is an acceptable score for one doctoral program in educational leadership may well be an unacceptable score for another such program.
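To make the recommendation concrete, a program-level validity study can be as simple as correlating past applicants' test scores with a program-completion outcome. The following Python sketch illustrates one such check using a point-biserial correlation; all scores and outcomes below are hypothetical, invented for the example rather than drawn from any study cited here.

```python
# A minimal sketch of a program-level validity check, along the lines
# ETS recommends. All data are hypothetical placeholders; a real study
# would use the program's own admission records and outcomes.
from scipy.stats import pointbiserialr

# GRE percentile scores for ten past applicants (hypothetical)
gre_percentiles = [45, 62, 58, 71, 80, 39, 66, 74, 52, 88]
# Outcome: 1 = completed the doctorate, 0 = did not (hypothetical)
completed = [0, 1, 0, 1, 1, 0, 1, 1, 0, 1]

# The point-biserial correlation estimates how strongly the test score
# tracks the dichotomous success outcome for THIS program's applicants.
r, p = pointbiserialr(completed, gre_percentiles)
print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")
```

A coefficient computed this way is local evidence: it says nothing about the test in general, only about how well the score separates completers from non-completers in the one program whose records were analyzed.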
Another void concerns the interrelationship among the academic predictors used to delimit an applicant pool. Grade-point averages and standardized test scores are seldom completely independent measures: students who do well, or who do poorly, on one academic predictor tend to mirror that performance, at least moderately, on the others.
Implied by any interrelationship among academic predictors is that differential weights are needed to distinguish applicants likely to be successful from those likely to be unsuccessful. For this reason, applicants cannot be informed about a specific cut score on any single academic predictor without considering their unique academic history, as the sketch below illustrates.
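The sketch below illustrates why correlated predictors defeat single-predictor cut scores. It fits a logistic regression to two overlapping predictors; the data, variable names, and weights are all invented for illustration and are not taken from any study cited in this chapter.

```python
# A hedged illustration of differential weighting among correlated
# predictors. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
gpa = rng.normal(3.3, 0.4, n)                       # undergraduate GPA
gre = 50 + 40 * (gpa - 3.3) + rng.normal(0, 12, n)  # GRE percentile, correlated with GPA
X = np.column_stack([gpa, gre])
# Hypothetical success indicator driven jointly by both predictors
success = (0.8 * gpa + 0.02 * gre + rng.normal(0, 0.5, n) > 3.6).astype(int)

model = LogisticRegression().fit(X, success)
print("correlation(gpa, gre) =", np.corrcoef(gpa, gre)[0, 1].round(2))
print("fitted weights (gpa, gre) =", model.coef_.round(2))
# Because the predictors overlap, each weight reflects only that
# predictor's unique contribution; a cut score on one predictor is
# meaningless without knowing the applicant's standing on the other.
```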
In a recent study, Young (2005a) assessed the predictive validity of the most common academic predictors used to delimit an applicant pool for educational leadership (Creighton & Jones, 2001) and assessed the relative weights of these predictors in light of their interrelationships. This investigator found that all academic predictors had some utility for a particular doctoral program in educational leadership but varied considerably in relative importance given their interrelationships. Within this study, discriminant analyses were used to develop linear equations for differentiating among those individuals rejected from a program, those admitted but not graduating, and those graduating.
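For readers unfamiliar with the technique, the sketch below shows the general form of a three-group discriminant analysis like the one Young (2005a) describes, using scikit-learn on synthetic data. The group means, predictor values, and the test case are invented for illustration; they do not reproduce the study's equations or results.

```python
# A minimal sketch of three-group linear discriminant analysis on
# synthetic admission data. All values are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Predictors: [GPA, GRE percentile]; 30 cases per group (hypothetical)
rejected  = rng.normal([2.9, 40], [0.3, 10], size=(30, 2))
admitted  = rng.normal([3.3, 55], [0.3, 10], size=(30, 2))  # admitted, did not graduate
graduated = rng.normal([3.6, 70], [0.3, 10], size=(30, 2))

X = np.vstack([rejected, admitted, graduated])
y = np.repeat(["rejected", "non-completer", "graduate"], 30)

# LDA derives linear functions of the predictors that best separate the
# three outcome groups, analogous in form to the study's equations.
lda = LinearDiscriminantAnalysis().fit(X, y)
print("classification accuracy:", lda.score(X, y).round(2))
print("predicted group for GPA 3.5, GRE 65:", lda.predict([[3.5, 65]])[0])
```

The practical output of such an analysis is a set of weighted combinations of the predictors, which is precisely why the weights, rather than any single cut score, carry the program-specific information.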