**Item analysis refers to statistical methods used for selecting items for inclusion in a psychological test. The concept goes back at least to Guilford (1936). The process of item analysis varies with the psychometric model adopted; for example, classical test theory and item response theory call for different procedures. Item analysis provides a way of measuring the quality of questions: how appropriate they were for the respondents and how well they measured their ability.**

**Classical test theory is based on a set of assumptions regarding the properties of test scores. Although different models of CTT are based on slightly different sets of assumptions, all models share a fundamental premise postulating that the observed score of a person on a test is the sum of two unobservable components, true score and measurement error. True score is generally defined as the expected value of a person’s observed score if the person were tested an infinite number of times on an infinite number of equivalent tests. Therefore, the true score reflects the stable characteristic of the object of measurement (i.e., the person). Measurement error is defined as a random “noise” that causes the observed score to deviate from the true score.**


**Reliability and Parallel Tests**

**True score and measurement error, by definition, are unobservable. However, researchers often need to know how well observed test scores reflect the true scores of interest. In CTT, this is achieved by estimating the reliability of the test, defined as the ratio of true score variance to observed score variance. Alternatively, reliability is sometimes defined as the square of the correlation between the true score and the observed score. Although they are expressed differently, these two definitions are equivalent and can be derived from assumptions underlying CTT.**
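The equivalence of the two definitions can be checked by simulation. The sketch below is illustrative only: the score distributions, sample size, and variances are invented for the example, not taken from the text.

```python
# Hypothetical simulation of CTT reliability; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_score = rng.normal(loc=50, scale=10, size=n)  # T: stable person characteristic
error = rng.normal(loc=0, scale=5, size=n)         # E: random measurement noise
observed = true_score + error                      # X = T + E

# Definition 1: reliability as the ratio of true to observed score variance.
rel_ratio = true_score.var() / observed.var()

# Definition 2: reliability as the squared true-observed correlation.
rel_corr2 = np.corrcoef(true_score, observed)[0, 1] ** 2

# Both estimates should fall near 10**2 / (10**2 + 5**2) = 0.8.
print(round(rel_ratio, 3), round(rel_corr2, 3))
```

With independent true scores and errors, both computations converge on the same value, which is why the two definitions can be used interchangeably.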

**Classical test theory assumes linearity, that is, the regression of the observed score on the true score is linear. In addition, the following assumptions are often made by classical test theory:**

● The expected value of measurement error within a person is zero.

● The expected value of measurement error across persons in the population is zero.

● True score is uncorrelated with measurement error in the population of persons.

● The variance of observed scores across persons is equal to the sum of the variances of true score and measurement error.

● Measurement errors of different tests are not correlated.
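The assumptions above can be illustrated with a quick simulation. This is a minimal sketch under invented distributional choices (normal true scores and errors); it is not a procedure prescribed by the text.

```python
# Illustrative check of the CTT assumptions by simulation; values are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

T = rng.normal(50, 10, n)  # true scores across persons
E = rng.normal(0, 5, n)    # measurement errors, drawn independently of T
X = T + E                  # observed scores

mean_error = E.mean()              # expected error across persons: near zero
corr_TE = np.corrcoef(T, E)[0, 1]  # true score uncorrelated with error: near zero
var_gap = X.var() - (T.var() + E.var())  # variance decomposition: near zero
```

When true scores and errors are generated independently with mean-zero error, the sample quantities land close to the values the assumptions require, and the observed-score variance splits into true-score and error variance.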

**In psychometrics, item response theory (IRT), also known as latent trait theory, strong true score theory, or modern mental test theory, is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals’ performances on a test item and the test takers’ levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. Unlike simpler alternatives for creating scales and evaluating questionnaire responses, it does not assume that each item is equally difficult.**


**IRT is based on the idea that the probability of a correct/keyed response to an item is a mathematical function of person and item parameters. The person parameter is construed as (usually) a single latent trait or dimension. Examples include general intelligence or the strength of an attitude. Parameters on which items are characterized include their difficulty (known as "location" for their location on the difficulty range), discrimination (slope or correlation), representing how steeply the rate of success of individuals varies with their ability, and a pseudoguessing parameter, characterizing the (lower) asymptote at which even the least able persons will score due to guessing (for instance, 25% for pure chance on a multiple-choice item with four possible responses).**
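The three item parameters described above combine in the three-parameter logistic (3PL) item response function. The sketch below uses arbitrary example parameter values; only the functional form follows the text.

```python
# Sketch of the three-parameter logistic (3PL) item response function.
import math

def p_correct(theta, a, b, c):
    """Probability of a keyed response given ability theta, with item
    discrimination a, difficulty (location) b, and pseudoguessing c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Example four-option multiple-choice item: pure-chance lower asymptote c = 0.25.
item = dict(a=1.2, b=0.0, c=0.25)

low = p_correct(theta=-3.0, **item)   # ability far below the item's difficulty
high = p_correct(theta=3.0, **item)   # ability far above the item's difficulty
```

For very low abilities the probability approaches the guessing floor `c` rather than zero, while for high abilities it approaches 1; the discrimination `a` controls how steeply the curve rises in between.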

**The 1 parameter logistic model (1PL)**, also known as the Rasch model, uses only item difficulty as a parameter for calculating a person's ability. **The 2 parameter logistic model (2PL)** uses both item difficulty and item discrimination (the extent to which the item measures the underlying psychological construct) as parameters.
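The contrast between the two models can be sketched in code: the 1PL fixes every item's discrimination at a common value, while the 2PL lets it vary per item. The item parameters below are invented for illustration.

```python
# Minimal sketch contrasting the 1PL (Rasch) and 2PL models.
import math

def logistic_2pl(theta, a, b):
    """2PL: both difficulty b and discrimination a vary by item."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def logistic_1pl(theta, b):
    """1PL/Rasch: discrimination fixed (here a = 1); only difficulty varies."""
    return logistic_2pl(theta, a=1.0, b=b)

# At the item's difficulty (theta == b) both models give probability 0.5 ...
at_difficulty = logistic_1pl(0.5, b=0.5)

# ... but a more discriminating 2PL item separates nearby abilities more sharply.
flat = logistic_2pl(1.0, a=0.5, b=0.0)
steep = logistic_2pl(1.0, a=2.0, b=0.0)
```

Setting all discriminations equal reduces the 2PL to the 1PL, which is why the Rasch model is the more restrictive of the two.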