
First, assessments with more tasks or items typically have higher reliability. To understand this, consider two tests: one with five items and one with 50 items. Chance factors influence the shorter test more than the longer test. If a student does not understand one of the items on the five-item test, the total score is strongly affected (it would be reduced by 20 percent). In contrast, if one item on the 50-item test was confusing, the total score would be influenced much less (by only 2 percent). Obviously, this does not mean that assessments should be inordinately long, but, on average, enough tasks should be included to reduce the influence of chance variations. Second, clear directions and tasks help increase reliability. If the directions or wording of specific tasks or items are unclear, then students have to guess what they mean, undermining the accuracy of their results. Third, clear scoring criteria are crucial in ensuring high reliability (Linn & Miller, 2005).
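The arithmetic behind the test-length example can be sketched in a few lines, assuming all items are equally weighted (a simplification):

```python
# Influence of a single misunderstood item on the total score,
# assuming equally weighted items (a hypothetical, simplified model).
def item_influence(num_items):
    """Fraction of the total score attributable to one item."""
    return 1 / num_items

print(f"5-item test:  {item_influence(5):.0%} per item")
print(f"50-item test: {item_influence(50):.0%} per item")
# Each item on the 5-item test carries 20% of the score;
# each item on the 50-item test carries only 2%.
```

This is why chance factors (one confusing item, one momentary lapse) sway short tests more than long ones.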

Validity

Validity is the evaluation of the “adequacy and appropriateness of the interpretations and uses of assessment results” for a given group of individuals (Linn & Miller, 2005, p. 68). In plain language, validity refers to the accuracy with which a test measures what it is designed or intended to measure. For example, is it appropriate to conclude that the results of a mathematics test on fractions given to English Language Learners accurately represent their understanding of fractions? Obviously, other interpretations are possible: for example, that the results reflect the immigrant students' limited English skills rather than their mathematics skills.

It is important to understand that validity refers to the interpretations and uses made of the results of an assessment procedure, not to the assessment procedure itself. For example, making judgments about the results of the same test on fractions may be valid if the students all understand English well. Validity involves making an overall judgment of the degree to which the interpretations and uses of the assessment results are justified. Validity is a matter of degree (e.g. high, moderate, or low validity) rather than all-or-none (e.g. totally valid vs. invalid) (Linn & Miller, 2005).

Three sources of evidence are considered when assessing validity: content, construct, and criterion. Content validity evidence is associated with the question: how well does the assessment include the content or tasks it is supposed to? For example, suppose your educational psychology instructor devises a mid-term test and tells you it covers chapters one to seven in the textbook. Obviously, all the items in the test should be based on content from educational psychology, not from your methods or cultural foundations classes. Also, the items in the test should cover content from all seven chapters and not just chapters three to seven – unless the instructor tells you that these chapters have priority.

Teachers have to be clear about their purposes and priorities for instruction before they can begin to gather evidence related to content validity. Content validation determines the degree to which assessment tasks are relevant to, and representative of, the tasks judged by the teacher (or test developer) to represent their goals and objectives (Linn & Miller, 2005). It is important for teachers to think about content validation when devising assessment tasks, and one way to do this is to devise a Table of Specifications. A Table of Specifications identifies the number of items (i.e. questions) on the assessment that are associated with each educational goal or objective.
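A Table of Specifications is essentially a mapping from objectives to item counts. A minimal sketch, using entirely hypothetical objectives and counts for a 20-item mid-term:

```python
# Hypothetical Table of Specifications for a 20-item mid-term:
# each educational objective is mapped to the number of test items
# planned to assess it.
table_of_specifications = {
    "Define reliability and validity": 4,
    "Interpret assessment results": 6,
    "Construct classroom assessments": 6,
    "Apply grading principles": 4,
}

total_items = sum(table_of_specifications.values())
print(f"Total items: {total_items}")

# Sanity check that the planned items add up to the test length:
assert total_items == 20
```

Laying the table out this way makes it easy to see whether each objective receives coverage proportional to its priority.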

Construct validity evidence is more complex than content validity evidence. Often we are interested in making broader judgments about students’ performances than specific skills such as doing fractions. The focus may be on constructs such as mathematical reasoning or reading comprehension. A construct is an abstract or theoretical characteristic of a person that we assume exists in order to help explain behavior. For example, we use the concept of test anxiety to explain why some individuals, when taking a test, have difficulty concentrating, have physiological reactions such as sweating, and perform poorly on tests but not on class assignments. Similarly, mathematical reasoning and reading comprehension are constructs, as we use them to help explain performance on an assessment. Construct validation is the process of determining the extent to which performance on an assessment can be interpreted in terms of the intended constructs and is not influenced by factors irrelevant to the construct. For example, judgments about recent immigrants' performance on a mathematical reasoning test administered in English will have low construct validity if the results are influenced by English language skills that are irrelevant to mathematical problem solving. Similarly, the construct validity of end-of-semester examinations is likely to be poor for those students who are highly anxious when taking major tests but not during regular class periods or when doing assignments. Teachers can help increase construct validity by trying to reduce factors that influence performance but are irrelevant to the construct being assessed. These factors include anxiety, English language skills, and reading speed (Linn & Miller, 2005).

A third form of validity evidence is called criterion-related validity. Criterion-related validity is the extent to which a student’s score on a test relates to another measure of the same content or construct. Criterion-related validity is further delineated into two sub-types depending on when the other measure is given to students. If the other measure is given at the same time, we use the term concurrent validity. If it is given at some point in the future, we use the term predictive validity. Selective colleges in the USA use the ACT or SAT, among other measures, to choose who will be admitted because these standardized tests help predict freshman grades, i.e. they are high in the predictive type of criterion-related validity.
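Criterion-related validity is typically quantified as the correlation between the test score and the criterion measure. A minimal sketch of computing a predictive validity coefficient, using made-up admission-test scores and later freshman GPAs:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: admission-test scores and the same students'
# freshman GPAs measured later (the criterion).
test_scores = [21, 25, 28, 30, 33]
freshman_gpas = [2.4, 2.9, 3.1, 3.3, 3.8]

r = pearson_r(test_scores, freshman_gpas)
print(f"predictive validity coefficient r = {r:.2f}")
```

A coefficient near 1.0 would indicate the test strongly predicts the criterion; real validity coefficients for admissions tests are considerably more modest.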

Reference

Linn, R. L., & Miller, M. D. (2005). Measurement and Assessment in Teaching (9th ed.). Upper Saddle River, NJ: Pearson.





Source: OpenStax, Oneonta epsy 275. OpenStax CNX. Jun 11, 2013. Download for free at http://legacy.cnx.org/content/col11446/1.6
