Week 3 Discussion 2
Issues in Testing
Discussion instructions:
Select one of the stages of test development. Discuss that selected stage and the items that must be done in the development of a test.
Be sure to put all information in your own words, cite accordingly, and include a list of references used in APA format. Your post should be at least 300 words. Respond to at least two of your classmates’ postings by Day 7.
Week 3 Guidance
Chapter 6: Item Development
General Item Writing Guidelines
- Provide clear directions
- Present the question, problem, or task in as clear and straightforward a manner as possible
- Develop items and tasks that can be scored in a decisive manner
- Avoid inadvertent cues to the answers
- Arrange the items in a systematic manner
- Ensure that individual items are contained on one page
- Tailor the items to the target population
- Minimize the impact of construct-irrelevant factors
- Avoid using the exact phrasing from study materials
- Avoid using biased or offensive language
- Use a print format that is clear and easy to read
- Determine how many items to include
Constructing Test Items
Objective Items
- Keep the reading difficulty and vocabulary level of test items as simple as possible
- Be sure each item has a single correct or best answer on which experts would agree
- Be sure each item deals with an important aspect of the content area, not with trivia
- Be sure each item is independent
- Avoid the use of trick questions
- Be sure the problem posed is clear and unambiguous
True-False Items
- Ensure that the item is unequivocally true or false
- Avoid the use of specific determiners or qualified statements
- Avoid ambiguous and indefinite terms of degree or amount
- Avoid the use of negative statements, and particularly double negatives
- Limit true-false statements to a single idea
- Make true and false statements approximately equal in length
Multiple-Choice Items
- Be sure the stem of the item clearly formulates a problem
- Include as much of the item as possible in the stem, and keep options as short as possible
- Use the negative only sparingly in an item
- Use novel material in formulating problems that measure understanding or ability to apply principles
- Be sure that there is one and only one correct or clearly best answer
- Be sure wrong answers are plausible
- Be sure no unintentional clues to the correct answer are given
- Use the option “none of these” or “none of the above” only when the keyed answer can be classified as unequivocally correct or incorrect
- Avoid the use of “all of the above” in the multiple-choice item
Matching Items
- Keep the set of statements in a single matching exercise homogeneous
- Keep the set of items relatively short
- Have the student choose answers from the column with the shorter statements
- Use a heading for each column that accurately describes its content
- Have more answer choices than the number of entries to be matched
- Arrange the answer choices in a logical order (if one exists)
- Specify in the directions both the basis for matching and whether answer choices can be used more than once
Preparing the Test for Use
- First, it can be a good idea to prepare more items than you need. That way, you can choose the best items – not every question will be a “good one.” In addition, it is important to proofread the items carefully. I always ask a colleague to review my work.
- Once you have selected your test items, arrange them so they are easy to read. Try not to split an item over two pages, and don’t crowd them on the page. In addition, it is a good idea to plan the layout of the test so that a separate answer sheet can be used to record answers. Using a separate answer sheet will make your test easier to score. Furthermore, it is helpful to group items of the same format together. Thus, put the true/false items together, the multiple-choice items together, etc.
- Along these lines, it is important to write a set of specific directions for each item type. This way, the test-taker will know what they are supposed to do with each type of item. It is also important to be sure that one item does not provide clues to the answer of another item. Finally, be sure that the correct responses form an essentially random pattern: you don’t want four “A” answers in a row, answers that cycle in order from “A” through “D”, or all of the answers falling on the same letter (see the short sketch after this list for one way to generate such a key).
- There are also specific considerations for writing items for scales and research. Here are a few references you might benefit from exploring:
- http://psych.wfu.edu/furr/362/FURR%20SC&P%20Ch%203.pdf
- http://www.slideshare.net/jtneill/survey-design-ii
- http://www.gerardkeegan.co.uk/resource/surveymeth1.htm
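To illustrate the point above about answer-key patterns, here is a minimal Python sketch of one way to generate a key in which no letter repeats more than a few times in a row. The function and parameter names are hypothetical, and this is only an illustration of the idea, not a required procedure.

```python
import random

def make_answer_key(n_items, options=("A", "B", "C", "D"), max_run=3):
    """Randomly assign keyed answers, re-drawing whenever adding a letter
    would create a run longer than max_run, so the key shows no obvious pattern."""
    key = []
    for _ in range(n_items):
        choice = random.choice(options)
        # Re-draw while this letter would extend a run past max_run
        while len(key) >= max_run and all(k == choice for k in key[-max_run:]):
            choice = random.choice(options)
        key.append(choice)
    return key

print(make_answer_key(20))  # e.g., ['C', 'A', 'D', 'B', ...]
```

Even with a routine like this, it is worth scanning the final key by eye, since a random draw can still occasionally produce sequences that look patterned (e.g., “A”, “B”, “C”, “D” in order).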
Chapter 7: Item Analysis
- Item difficulty = number of correct responses/number of examinees
- Item discrimination: how well an item can accurately distinguish between test takers who differ on the construct being measured
- Distracter analysis: effective distracters should attract more examinees in the bottom group than in the top group
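These three statistics are easy to compute by hand or in a spreadsheet. As a hedged illustration, the Python sketch below assumes items scored 0/1 and uses the common (but not mandated) convention of forming top and bottom groups from roughly the highest- and lowest-scoring 27% of examinees; the data and function names are made up for this example.

```python
from collections import Counter

def item_difficulty(responses):
    """Item difficulty (p) = number of correct responses / number of examinees."""
    return sum(responses) / len(responses)

def item_discrimination(item_scores, total_scores, group_frac=0.27):
    """Discrimination index D = p(top group) - p(bottom group),
    where the groups are formed from total test scores."""
    n = len(total_scores)
    k = max(1, round(n * group_frac))
    order = sorted(range(n), key=lambda i: total_scores[i])
    bottom, top = order[:k], order[-k:]
    return (sum(item_scores[i] for i in top) / k
            - sum(item_scores[i] for i in bottom) / k)

def distractor_counts(option_chosen, group):
    """Tally which option each examinee in a group selected; effective
    distracters should draw more bottom-group than top-group examinees."""
    return Counter(option_chosen[i] for i in group)

# Fabricated example: one item, 10 examinees
item = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]            # 1 = correct, 0 = incorrect
totals = [38, 12, 30, 41, 15, 33, 36, 10, 18, 40]  # total test scores
print(item_difficulty(item))                       # 0.6
print(item_discrimination(item, totals))           # positive D = item discriminates well
```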
Chapter 15: The Problem of Bias in Psychological Assessment
Culture and Test Administration
- As you’re likely aware, the US is becoming more diverse as time goes on. In fact, it is predicted that individuals who are currently minority members of the population will become the majority of the US population by 2050 (Leung & Barnett, n.d.). Recent data revealed that culturally diverse students comprised more than 43% of the US public school population (Leung & Barnett, n.d.).
- To account for the increased cultural diversification of the US, the APA Ethics Code “makes clear each psychologist’s obligations for providing ethical and competent services in our work with individuals of diverse backgrounds” (Leung & Barnett, n.d.). In other words, it is necessary that all individuals, regardless of race, ethnicity, native language, etc., are able to receive adequate professional services.
- In general, individuals who are not native English speakers may not perform as well on standardized tests, which in turn may result in inaccurate test results. Thus, a student may be placed in a remedial class when it is not necessary, may be diagnosed with a learning disorder when none exists, etc. Research conducted in this area has found biases in several different areas of testing:
Bias in construct validity
- When a test is shown to measure different hypothetical constructs for one group than for another (Sattler, 1992)
Bias in content validity
- When an item or subscale is more difficult for members of one group than members of another group, after the general ability level of the two groups is controlled for (Reynolds, 1998)
Bias in item selection
- When items/tasks selected are based on the learning experiences/language of the dominant group
Bias in predictive/criterion-related validity
- When the inference drawn from the test score is not made with the smallest feasible random error (Gregory, 2004)
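One common way to examine predictive (criterion-related) bias, not spelled out in the chapter but consistent with the regression-based (Cleary-style) approach used in this literature, is to regress the criterion on the test score separately for each group and compare the slopes and intercepts. The sketch below uses fabricated data and hypothetical names purely for illustration.

```python
def simple_regression(x, y):
    """Least-squares line y = slope * x + intercept for one group."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Fabricated (test score, criterion) data for two groups
group_a_scores, group_a_criterion = [50, 60, 70, 80, 90], [2.0, 2.4, 2.9, 3.3, 3.8]
group_b_scores, group_b_criterion = [50, 60, 70, 80, 90], [2.2, 2.5, 2.8, 3.1, 3.4]

print(simple_regression(group_a_scores, group_a_criterion))
print(simple_regression(group_b_scores, group_b_criterion))
# Markedly different slopes or intercepts would suggest the test predicts
# the criterion differently for the two groups (possible predictive bias).
```

In practice, this comparison is usually done with a moderated multiple regression (test score, group membership, and their interaction) and formal significance tests rather than an eyeball comparison of two fitted lines.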
Additional Resources
- American Psychological Association (2013). Rights and responsibilities of test takers: Guidelines and expectations. Retrieved from http://www.apa.org/ethics/code/
- Encyclopedia of Children’s Health (2004). Psychological tests. Retrieved from http://www.healthofchildren.com/P/Psychological-Tests.html
- Furr, M. R. (2011). Scale construction and psychometrics for social and personality psychology. London, UK: Sage Publications. Retrieved from http://psych.wfu.edu/furr/362/FURR%20SC&P%20Ch%203.pdf
- Gregory, R. J. (2004). Psychological testing: History, principles, and applications. Boston: Allyn & Bacon.
- Keegan, G. (n.d.). The survey method. Retrieved from http://www.gerardkeegan.co.uk/resource/surveymeth1.htm
- Leung, C. V. V., & Barnett, J. E. (n.d.). Multicultural assessment and ethical practice. Retrieved from http://www.dr-charlton.com/MulticulturalAssessmentandEthicalPractice.pdf
- McGrath, J. (2009). Ethics in psychological testing. Retrieved from http://voices.yahoo.com/ethics-psychological-testing-3344130.html
- McIntire, S. A., & Miller, L. A. (2007). Foundations of psychological testing: A practical approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
- Neill, J. (2012). Survey design. Retrieved from http://www.slideshare.net/jtneill/survey-design-ii
- Reynolds, C. R. (1998). Cultural bias in testing of intelligence and personality. In A. Bellack & M. Hersen (Series Eds.) & C. Belar (Vol. Ed.), Comprehensive clinical psychology: Sociocultural and individual differences. New York: Elsevier Science.
- Sattler, J. M. (1992). Assessment of children (3rd ed.). San Diego: Jerome M. Sattler.
- Thorndike, R. M. & Thorndike-Christ, T. M. (2009). Measurement and evaluation in psychology and education (8th ed.). Upper Saddle River, NJ: Prentice Hall.
- Whiting, G., & Ford, D. (n.d.). Cultural bias in testing. Retrieved from http://www.education.com/reference/article/cultural-bias-in-testing/