Linear-on-the-fly testing

Linear-on-the-fly testing, often referred to as LOFT, is a method of delivering educational or professional examinations. Competing methods include traditional linear fixed-form delivery and computerized adaptive testing. LOFT is a compromise between the two: it seeks to preserve the equivalence, found in fixed-form delivery, of the set of items each examinee sees, while reducing item exposure and enhancing test security.

Fixed-form delivery, which most people are familiar with, entails the testing organization assembling one or several fixed sets of items to be delivered together. For example, suppose the test contains 100 items and the organization wishes to offer two forms. Two forms are published with a fixed set of 100 items each, some of which overlap between forms to enable equating. All examinees who take the test are given one of the two forms.
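A minimal sketch of this arrangement, using a hypothetical 180-item bank and an anchor set shared by both forms (all identifiers and sizes here are illustrative, not taken from any real testing program):

```python
# Fixed-form assembly with an overlapping anchor block for equating.
ITEM_BANK = [f"item_{i:03d}" for i in range(180)]  # hypothetical bank

ANCHOR = ITEM_BANK[:20]        # 20 items shared by both forms
UNIQUE_A = ITEM_BANK[20:100]   # 80 items unique to Form A
UNIQUE_B = ITEM_BANK[100:180]  # 80 items unique to Form B

form_a = ANCHOR + UNIQUE_A     # 100 items
form_b = ANCHOR + UNIQUE_B     # 100 items

# The shared anchor items are what make statistical equating of
# scores across the two forms possible.
assert len(form_a) == len(form_b) == 100
assert set(form_a) & set(form_b) == set(ANCHOR)
```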

If this exam is high volume, meaning that a large number of examinees take it, the security of the examination could be in jeopardy: many of the test items would become well known in the population of examinees. To offset this, more forms would be needed; if there were eight forms, far fewer examinees would see each item.
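The effect of adding forms on exposure can be seen with a back-of-the-envelope calculation; the volume figure below is hypothetical, and the forms are assumed to be non-overlapping and assigned uniformly at random:

```python
# Rough item exposure: with non-overlapping forms assigned uniformly at
# random, each item is seen by about (examinees / number of forms) people.
EXAMINEES = 10_000  # hypothetical testing volume

for n_forms in (1, 2, 8):
    print(f"{n_forms} form(s): each item seen by ~{EXAMINEES // n_forms:,} examinees")
```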

LOFT takes this approach to its extreme and attempts to construct a unique exam for each candidate, within the given constraints of the testing program. Rather than publishing a fixed set of items, the organization delivers a large pool of items to the computer on which the examinee is taking the exam, along with a program that pseudo-randomly selects items so that every examinee receives a test that is equivalent with respect to content and statistical characteristics,[1] although composed of a different set of items. This is usually done with item response theory.
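One way such an assembler can work, shown here as an illustrative sketch rather than any specific operational system, is rejection sampling: draw a pseudo-random form that satisfies a fixed content blueprint, then accept it only if its mean item response theory difficulty falls near a target value. The pool, blueprint, and tolerance below are all hypothetical:

```python
import random
from statistics import mean

# Hypothetical pool of (item_id, content_area, IRT difficulty b).
POOL = [(f"item_{i:03d}", random.choice("ABC"), random.gauss(0.0, 1.0))
        for i in range(300)]

BLUEPRINT = {"A": 10, "B": 10, "C": 10}  # items required per content area
TARGET_B, TOLERANCE = 0.0, 0.15          # target mean difficulty and band

def assemble_loft_form(seed, max_tries=500):
    rng = random.Random(seed)  # a per-examinee seed yields a unique form
    for _ in range(max_tries):
        form = []
        for area, count in BLUEPRINT.items():
            candidates = [item for item in POOL if item[1] == area]
            form.extend(rng.sample(candidates, count))
        # Accept only forms whose mean difficulty matches the target, so
        # each examinee's form is statistically as well as content parallel.
        if abs(mean(b for _, _, b in form) - TARGET_B) <= TOLERANCE:
            return form
    raise RuntimeError("no acceptable form found; widen the tolerance")

form = assemble_loft_form(seed=42)
print(len(form), "items, mean b =", round(mean(b for _, _, b in form), 3))
```

Operational assemblers would typically enforce richer constraints (multiple statistical targets, item enemy rules, exposure controls), but the accept/reject structure illustrates the basic idea.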

References

  1. ^ Luecht, R. M. (2005). "Some Useful Cost-Benefit Criteria for Evaluating Computer-based Test Delivery Models and Systems". Journal of Applied Testing Technology, 7(2). Archived from the original (PDF) on 2006-09-27. Retrieved 2006-12-01.