Chapter 7: Developing assessment instruments

“Learner-centered assessments are to be criterion-referenced (i.e., linked to instructional goals and an explicit set of performance objectives derived from the goals).” (p.137)

“You may wonder why test development appears at this point in the instructional design process rather than after instruction has been developed. The major reason is that the test items must correspond one to one with the performance objectives.” (p.138)
Remember the system design slides. This is a key idea – the one-to-one correspondence between test items and learning objectives.

“The main purpose for a criterion-referenced test is to examine a person’s or group’s achievement in a carefully defined content area; thus, it is focused on specific goals and objectives within a given content area. In contrast, norm-referenced tests are used to compare the relative performance of learners in larger areas of content, such as a year’s content within a specific subject area; for example, mathematics or reading.” (p.138-139)

“[C]riterion-referenced tests are the backbone of the assessment used for decision making in the development and evaluation of particular instruction.” (p.139)

“It should be noted that if there are no significant entry skills identified during the instructional analysis, then there is no need to develop corresponding objectives and test items.” (p.139)

“The purpose of a pretest is not necessarily to show a gain in learning after instruction by comparison with a posttest, but rather to profile the learners with regard to the instructional analysis.” (p.139)

This too is important. The idea here is that the pretest is used to shape the actual instruction that is delivered, not the design of the module.

“The purpose for practice tests is to provide active learner participation during instruction.” (p.140)

This misses a point – one of the reasons for practice tests is to give learners practice at writing the test itself, since test writing is a skill in its own right.

“Posttests are administered following instruction, and they are parallel to pretests, except they do not include items on entry skills.” (p.142)

“It is critical that test items measure the exact behavior described in the objective.” (p.143)

This is often much easier said than done!

“Test items and assessment tasks must be tailored to the characteristics and needs of the learners, including such considerations as learner needs, vocabulary and language levels, developmental levels for setting appropriate task complexity, motivational and interest levels, experiences and backgrounds, special needs, and freedom from bias (e.g., cultural, racial, gender).” (p.144)

“In creating test items and assessment tasks, designers must consider the eventual performance setting as well as the learning or classroom environment.” (p.145)

“Learners can be nervous during assessment, and well-constructed, professional-looking items and assessment tasks can make the assessment more palatable to them.

“Test-writing qualities focusing on assessment-centered criteria include correct grammar, spelling, and punctuation, as well as clearly written and parsimonious directions, resource materials, and questions.” (p.145)

“In constructing the test, a major question that always arises is, ‘What is the proper number of items needed to determine mastery of an objective?'” (p.146)

“Another important question to consider is, ‘What type of test item or assessment task best assesses learner performance?'” (p.146)

“Objective tests include test items that are easy for learners to complete and designers to score. The answers are short and typically scored as correct or incorrect, and judging correctness of an answer is straightforward. Objective formats include completion, short answer, true/false, matching, and multiple choice. Test items that should be scored using a checklist or rubric, including essay items, are not considered to be objective items, and they are described in the next section on alternative assessments.” (p.147)

“Developing alternative assessment instruments used to measure performance, products, and attitudes does not involve writing test items per se, but instead requires writing directions to guide the learners’ activities and constructing a rubric to frame the evaluation of the performances, products, or attitudes.”
Alternative assessments are often called authentic assessments.

“In addition to writing instructions for learners, you must develop a rubric to guide your evaluation of performances, products, or attitudes.” (p.149)

Note that rubrics are tools used to measure: they are grading tools rather than assessment instruments. I don’t use rubrics – here is why:


Foundations of Instructional Design by Rebecca J. Hogue is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
