I/O Personnel Psych
LA Tech, Psych. 516, Test 1, chapter 7
Question | Answer |
---|---|
Methods of validation | (1) what a test or other procedure measures (i.e., the hypothesized underlying trait or construct) and (2) how well it measures it (i.e., the relationship between scores from the procedure and some external criterion measure) |
Content-Related Evidence | Inferences about validity based on content-related evidence are concerned with whether a measurement procedure contains a fair sample of the universe of situations it is supposed to represent. |
Three assumptions underlie the use of content-related evidence: | 1. The area of concern to the user can be conceived as a meaningful, definable universe of responses. 2. A sample is drawn from that universe in a purposeful, meaningful way. 3. The sample and the sampling process can be defined with enough precision to let the user judge how well the sample typifies performance in the universe. |
Content-Validity Index (CVI) | Equal numbers of incumbents and supervisors are presented with a set of test items and asked to indicate whether each item is (1) essential for the job, (2) useful but not essential, or (3) not relevant to the job. The average of these ratings across judges comprises the ___ (a scoring sketch follows this table). |
Positive contributions of content-related evidence of validity | 1. improved domain sampling and job-analysis procedures 2. better behavior measurement 3. the role of expert judgment in confirming the fairness of sampling and scoring procedures and in determining the degree of overlap between separately derived content domains |
Criterion validity | an indication of future behavior (i.e., predicting future behavior or performance) |
Two types of criterion studies: | predictive study, concurrent study |
predictive study | oriented toward the future and involves a time interval during which events take place (e.g., using an ACT score to predict GPA or “is it likely that Laura will be able to do the job?”) |
concurrent study | oriented toward the present and reflects only the status quo at a particular time (e.g., using existing managers’ scores to see if you match them, or “can Laura do the job now?”) |
steps to a Predictive study | 1. Measure candidates for the job. 2. Select without using the results of that measurement (hire everyone, then see whether scores discriminate between high and low performers). 3. Obtain measurements of criterion performance at a later date. 4. Assess the strength of the relationship between the predictor and the criterion (see the correlation sketch after this table). |
Measures of criterion need to be | relevant to the job (e.g., even if you have a good predictor, such as a general intelligence test, validating it against an irrelevant criterion like how many cups of water an employee drinks tells you nothing about whether the test is good). |
Range Restriction | If the predictor is administered only to employees who survived a screening step (or criterion data exist only for those who scored high), the predictor-criterion correlation computed on that sample will be weakened: the sample is restricted and potentially biased, so it understates the true relationship (illustrated in a sketch after this table). |
Construct-Related Evidence | Do the test questions assess the construct in question (e.g., do intelligence questions on the ACT or SAT really measure intelligence or are they measuring academic achievement instead?) |
Convergent validation | Does this new test of intelligence that I just made correlate well with the SAT and ACT (e.g., Does someone who scores a 25 on the ACT also score the equivalent on the SAT and my new test?) |
Discriminant validation | Does this new test of depression NOT correlate with measures of constructs it should be distinct from? (e.g., as people’s scores increase on my new depression measure, their scores on a measure of an unrelated construct should show no systematic change; see the convergent/discriminant sketch after this table) |
Meta-analyses | The process of combining multiple studies of the same construct or measure to estimate the overall effect. In any single study, results may be higher or lower than the true effect, but across an aggregate sample the highs and lows should cancel out and a more accurate estimate of the effect should emerge (see the weighted-average sketch after this table). |
validation | investigative process of gathering or evaluating the necessary data. |
criterion-related evidence | testing the hypothesis that test scores are related to performance on some criterion measure. The criterion is a score or a rating that is either available at the time of predictor measurement or will become available at a later time. (predictive/concurrent) |
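
Scoring sketch for the CVI card above. The card says judges' ratings are averaged; the scoring rule below (score each item by the proportion of judges calling it "essential," then average across items) is one common convention and an assumption on my part, not something stated in the card. Items and ratings are invented.

```python
# Content-Validity Index (CVI) sketch -- hypothetical ratings, assumed scoring rule.
# Each judge (incumbent or supervisor) rates every item as "essential",
# "useful", or "not relevant"; an item's score is the proportion of judges
# calling it essential, and the CVI is the average of those item scores.

ratings = {
    "item_1": ["essential", "essential", "useful", "essential"],
    "item_2": ["useful", "not relevant", "useful", "essential"],
    "item_3": ["essential", "essential", "essential", "essential"],
}

def item_score(item_ratings):
    """Proportion of judges who rated the item 'essential'."""
    return sum(r == "essential" for r in item_ratings) / len(item_ratings)

per_item = {item: item_score(r) for item, r in ratings.items()}
cvi = sum(per_item.values()) / len(per_item)

print(per_item)            # per-item proportions, e.g. item_1 -> 0.75
print(f"CVI = {cvi:.2f}")  # average across items
```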
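
Correlation sketch for step 4 of the predictive study. The validity coefficient is simply the Pearson correlation between predictor scores collected at hire and criterion measures gathered later; the numbers below are made up for illustration.

```python
from statistics import correlation  # Pearson r; available in Python 3.10+

# Hypothetical predictive-study data: predictor scores collected at hire
# (steps 1-2) and job-performance ratings gathered months later (step 3).
predictor = [12, 18, 9, 22, 15, 20, 11, 17]
criterion = [3.1, 4.0, 2.8, 4.5, 3.4, 4.2, 3.0, 3.9]

# Step 4: strength of the predictor-criterion relationship (validity coefficient).
r = correlation(predictor, criterion)
print(f"validity coefficient r = {r:.2f}")
```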
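
Simulated illustration of the range-restriction card: when criterion data exist only for applicants who passed the screen, the correlation computed in that restricted group understates the relationship that holds in the full applicant pool. The data are randomly generated, not from the course.

```python
import random
from statistics import correlation  # Pearson r; Python 3.10+

random.seed(0)

# Simulate an applicant pool in which the predictor genuinely relates to the criterion.
predictor = [random.gauss(50, 10) for _ in range(2000)]
criterion = [0.6 * p + random.gauss(0, 8) for p in predictor]

# Unrestricted case: everyone hired, criterion observed for all applicants.
r_full = correlation(predictor, criterion)

# Restricted case: criterion observed only for applicants above the screening cutoff.
cutoff = 55
kept = [(p, c) for p, c in zip(predictor, criterion) if p >= cutoff]
restricted_p = [p for p, _ in kept]
restricted_c = [c for _, c in kept]
r_restricted = correlation(restricted_p, restricted_c)

print(f"full-range r       = {r_full:.2f}")        # close to the true relationship
print(f"restricted-range r = {r_restricted:.2f}")  # attenuated by selection on the predictor
```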
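
Convergent/discriminant sketch, expressed as two correlations on simulated data (my construction, not the course's): a new measure should correlate strongly with an established measure of the same construct and near zero with a measure of an unrelated construct.

```python
import random
from statistics import correlation  # Pearson r; Python 3.10+

random.seed(1)

# Simulated scores for 200 examinees on three measures (illustrative only).
new_test  = [random.gauss(25, 5) for _ in range(200)]   # the new intelligence test
act       = [x + random.gauss(0, 2) for x in new_test]  # established measure of the same construct
unrelated = [random.gauss(10, 2) for _ in range(200)]   # measure of a different construct

print(f"convergent   r(new test, ACT)       = {correlation(new_test, act):.2f}")        # high
print(f"discriminant r(new test, unrelated) = {correlation(new_test, unrelated):.2f}")  # near zero
```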
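
Weighted-average sketch for the meta-analysis card: weight each study's observed validity coefficient by its sample size so that sampling highs and lows cancel out. The studies and numbers are hypothetical, and real meta-analyses add corrections (e.g., for unreliability and range restriction) that are omitted here.

```python
# Minimal meta-analytic aggregation: sample-size-weighted mean correlation.
# Hypothetical studies of the same predictor-criterion relationship.
studies = [
    # (observed validity coefficient r, sample size N)
    (0.45, 60),
    (0.18, 40),
    (0.33, 150),
    (0.29, 90),
]

total_n = sum(n for _, n in studies)
weighted_mean_r = sum(r * n for r, n in studies) / total_n

print(f"individual studies: r from {min(r for r, _ in studies)} to {max(r for r, _ in studies)}")
print(f"sample-size-weighted mean r = {weighted_mean_r:.2f}")
```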