Pairing | Mean Difference | Std. Dev. | Std. Error | Sig. (2-tailed)
Mentor – USA | -.10648 | .52441 | .07136 | .142
Mentor – Student | .19811 | .86096 | .11826 | .100
USA – Student | .30660 | .78392 | .10768 | .006*
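For readers who wish to see how figures of the kind reported in the table above are produced, the following is a minimal sketch of a paired-samples t-test in Python. The score arrays, group size, and the use of scipy are assumptions for illustration, not the researchers' actual data or procedure.

    import numpy as np
    from scipy import stats

    # Hypothetical ratings of the same interns by mentors and by
    # university supervisors (USA); each index is one intern.
    mentor = np.array([3.2, 3.5, 2.9, 3.8, 3.1, 3.6, 3.4, 3.0])
    usa = np.array([3.4, 3.6, 3.1, 3.7, 3.3, 3.8, 3.5, 3.2])

    diff = mentor - usa
    mean_diff = diff.mean()                  # "Mean Difference" column
    std_dev = diff.std(ddof=1)               # "Std. Dev." column
    std_err = std_dev / np.sqrt(len(diff))   # "Std. Error" column

    t_stat, p_value = stats.ttest_rel(mentor, usa)  # "Sig. (2-tailed)"
    print(f"mean diff {mean_diff:.5f}, SD {std_dev:.5f}, "
          f"SE {std_err:.5f}, p {p_value:.3f}")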
The analysis of variance for each of the three assessments (PIMA, USA, and ISA) demonstrated no significant differences in the means for ISLLC Standards 1, 3, or 6. There were significant differences in responses for ISLLC Standards 2, 4, and 5.
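As a purely illustrative sketch of this kind of analysis, a one-way ANOVA across the three rater groups for a single ISLLC standard might look like the code below. The scores and group sizes are hypothetical placeholders, and scipy is an assumed tool rather than the software the researchers used.

    from scipy import stats

    # Hypothetical ratings for one ISLLC standard from each rater group.
    mentor_scores = [3.2, 3.5, 2.9, 3.8, 3.1, 3.6]
    supervisor_scores = [3.4, 3.0, 3.3, 3.6, 3.2, 3.5]
    student_scores = [3.3, 3.6, 3.0, 3.7, 3.2, 3.4]

    f_stat, p_value = stats.f_oneway(mentor_scores, supervisor_scores,
                                     student_scores)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Significant difference among rater groups on this standard")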
In each of the cases where significant differences in means were identified, the responses of the university supervisors were involved. For ISLLC Standard 2, the differences were between the mentor responses and the university supervisor responses. Standard 4 revealed differences between the mentor responses and the university supervisors, as well as between the student responses and the university supervisors. Responses for ISLLC Standard 5 likewise demonstrated significant differences between the students and the university supervisors.
From the results of this research, it is clear that the mentoring principals and the interns were consistent with regard to the means of their scores on the internship assessments. Though individual results varied, the differences in mean scores between these two groups were not statistically significant. The university supervisors, on the other hand, were consistent with the other raters on three of the six ISLLC standards but showed significant differences in mean scores on the other three when compared with either the mentor scores or the intern scores.
The implications of the data review generated three recommendations for program improvement from the researchers. The first is for the program to design a common rubric for completing the PIMA, USA, and ISA. A rubric with accompanying information on its use should help prevent divergent scores on the three assessments. Second, additional validity and reliability measures need to coincide with the implementation of a scoring rubric. Third, the data results from this study should be correlated with other program performance indicators, such as scores on the School Leaders Licensure Assessment (SLLA), as sketched below.
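A minimal sketch of the correlation suggested in the third recommendation, assuming hypothetical per-intern internship means and SLLA scores (both arrays are placeholders, and scipy is an assumed tool):

    from scipy import stats

    # Hypothetical per-intern internship assessment means and SLLA scores.
    internship_means = [3.1, 3.4, 2.8, 3.6, 3.2, 3.5, 3.0, 3.3]
    slla_scores = [168, 175, 160, 181, 170, 178, 163, 172]

    r, p = stats.pearsonr(internship_means, slla_scores)
    print(f"Pearson r = {r:.3f}, p = {p:.3f}")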
Further study could be conducted into the effectiveness of the three assessments after the creation and implementation of the recommended scoring rubric. Such a study could take two forms: first, it could repeat the process used in this study to see whether the means of the three groups still vary once a rubric is in place; second, it could compare the means of the first test group, which was evaluated without a rubric, with those of a later group that did make use of the scoring rubric, as in the sketch below.
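The second form of that follow-up would amount to an independent-samples comparison of the two cohorts. The sketch below assumes hypothetical scores and uses Welch's t-test, which does not presume equal variances; none of this is drawn from the study itself.

    from scipy import stats

    # Hypothetical assessment means for a cohort scored without a rubric
    # and a later cohort scored with one.
    no_rubric = [3.2, 3.5, 2.9, 3.8, 3.1, 3.6, 3.4]
    with_rubric = [3.3, 3.4, 3.2, 3.5, 3.3, 3.4, 3.3]

    # Welch's t-test: equal_var=False avoids assuming equal variances
    # across the two cohorts.
    t_stat, p_value = stats.ttest_ind(no_rubric, with_rubric,
                                      equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")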
A final area for further research relates directly to the professors who supervise the internship program. Because the university supervisors were involved in all of the areas of significant difference in this study, it is imperative that the researchers delve deeper into the cause. If further research can determine the reasons behind the differences in scoring, the program will then be able to better evaluate its students.