Clinical Psychology Word Scramble
| Question | Answer |
| Kazdin (1982) | Symptom substitution and Generalization |
| Symptom Substitution | Treatments that focus on symptom reduction without targeting the "underlying causes" of dysfunction risk having the reduced symptoms merely replaced by non-targeted symptoms |
| Generalization | Changes in one behavior during treatment relate to changes in other, similar domains of behavior |
| Empirically testing Symptom Substitution | Operationally define the construct, identify its constituent parts, posit alternative explanations, review studies to determine evidence of support |
| New Model: Response Covariance | Alternative to symptom substitution. Two or more correlated behavioral responses to treatment. Response to treatment affects other responses. Responses can go in either direction (one behavior improves while another improves or worsens) |
| Kazdin (1982) Take-Home Message | Treatment produces changes in sets of behaviors that relate to changes (both positive and negative) in related or distinct sets of other behaviors. |
| Examples of poor decision making? | False Dilemma and Appeal to Ignorance |
| False Dilemma | Situation in which only two alternatives are considered |
| Appeal to Ignorance | Something is true because it has not yet been proven false. |
| What can lead to poor decision-making? | "gut" feelings |
| Correlation | Relation between two variables, no necessary direction of the relation (variable X leads to variable Y, or vice versa) |
| Causation | Levels of a variable directly/indirectly influence a second variable's levels |
| Mediation | Two variables are related in some way, and a third variable explains WHY the relation exists. |
| Moderation | Two variables are related under some circumstances, and a third variable influences the DIRECTION or MAGNITUDE of this relation. |
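A minimal Python sketch of the correlation card above, using made-up data: a correlation coefficient quantifies the strength of a relation between two variables but implies no direction of causation. The `pearson_r` helper and the therapy-hours numbers are hypothetical illustrations.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up scores: hours of therapy (X) and symptom counts (Y).
# r near -1 says the variables covary, not that X caused Y or Y caused X.
hours = [1, 2, 3, 4, 5, 6]
symptoms = [9, 8, 6, 5, 4, 2]
r = pearson_r(hours, symptoms)   # strong negative relation
```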
| Prevention | Decreasing likelihood that an outcome occurs |
| Intervention | Decreasing an outcome that has already occurred |
| Stages of Ethics | Institutional approval, consent, compensation, debriefing, reporting, publication, sharing data |
| Psychologists cannot address research Q's strictly using: | Experimental Approaches |
| How do designs vary? | Ability to draw inferences from "cause" to "effect"; designs also vary by number of participants. |
| Two factors involved in understanding outcomes of research. | Internal Validity & External Validity |
| Internal Validity | Does the data tell you what you think it tells you? |
| Impact 1 on Internal Validity | History: what participants went through during the study but had nothing to do with variable of interest |
| Impact 2 on Internal Validity | Maturation: participants changed over course of study in ways that had nothing to do with study |
| Impact 3 on Internal Validity | Testing: completing measures might change things |
| Impact 4 on Internal Validity | Instrumentation: changing measures over course of study, esp. as participants aged. |
| Impact 5 on Internal Validity | Statistical Regression: extreme scores tend to drift toward the mean when participants complete the measures repeatedly. |
| Impact 6 on Internal Validity | Selection Bias: recruiting participants or assigning them to conditions |
| Attrition | Form of selection bias due to many participants dropping out before the end of the study, which biases data interpretation. |
| External Validity | Will the study's outcomes apply to people who were not in the sample; generalization |
| Impact 1 on External Validity | Sample Characteristics: matching between sample and rest of people who are targets. |
| Impact 2 on External Validity | Stimulus Characteristics/Settings: what if study was conducted somewhere else, would there be similar outcomes? |
| Impact 3 on External Validity | Reactivity: change in participants' behavior b/c they are in a study. |
| Impact 4 on External Validity | Timing: assessments at other periods would have same results? |
| Case Studies | are detailed descriptions about someone, usually with a new treatment. |
| Case Studies are great for generating what? | Research hypotheses |
| Case studies cannot rule what out? | Threats to internal validity |
| Single-Case studies can partially rule out? | Internal Validity issues |
| Single-Case studies | Have multiple measurements of the outcome (before, during, and after). Have an exactly manipulated variable. Need to introduce and then remove the treatment. |
| Single-Case studies detect what kind of patterns? | Patterns between manipulated variable and measurement outcomes |
| Correlational designs are not the same as? | Correlational Analyses |
| Correlational Designs have no: | Experimental manipulation or random assignment |
| Quasi-Experimental Designs | Researcher-based manipulation of a variable, such as treatment condition |
| Quasi-Experimental Designs have no: | Random assignment to experimental conditions |
| Quasi-Experimental Designs cannot rule out: | Extraneous influences (variation among participants in different conditions) |
| Experimental Designs do have: | Random assignment and manipulation |
| Experimental Designs allow for: | unambiguous interpretations of effects of manipulation on outcome |
| Which study design provides the best protection against threats to internal validity? | Experimental Design |
| Randomized controlled trials | Treatment studies |
| Which is a quantitative review | a Meta-Analysis |
| Meta-Analysis is a: | Quantitative synthesis of a group of studies addressing the same topic; summarizes main findings. |
| Effect Size | The average results across studies using a common scale |
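A common-scale effect size of the kind a meta-analysis averages is Cohen's d, the standardized mean difference. A hypothetical sketch with made-up treatment and control scores (the `cohens_d` helper and data are illustrative, not from the cards):

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1)
                  + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

# Made-up outcome scores; d expresses the difference in SD units,
# so results from studies using different scales become comparable.
treatment = [12, 14, 11, 15, 13]
control = [9, 10, 8, 11, 9]
d = cohens_d(treatment, control)
```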
| Probability Sampling | Interest in ensuring that research sample represents population |
| Non-probability Sampling | No specific interest in representing a population |
| Sample size needs to be large enough to: | Ensure statistical power to detect hypothesized effects |
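The sample-size card can be illustrated with a small simulation, assuming (hypothetically) a medium true effect and a simple two-group z-test on data generated with known SD = 1; all parameter choices here are illustrative:

```python
import random
random.seed(1)

def significant(n, effect, trials=2000):
    """Fraction of simulated two-group studies with |z| > 1.96."""
    hits = 0
    for _ in range(trials):
        g1 = [random.gauss(effect, 1) for _ in range(n)]  # treatment group
        g2 = [random.gauss(0, 1) for _ in range(n)]       # control group
        diff = sum(g1) / n - sum(g2) / n
        z = diff / (2 / n) ** 0.5   # SE of a mean difference when sd = 1
        if abs(z) > 1.96:
            hits += 1
    return hits / trials

# Same true effect, different sample sizes: small n rarely detects it.
power_small = significant(n=10, effect=0.5)   # underpowered
power_large = significant(n=100, effect=0.5)  # adequately powered
```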
| Measurement Reliability | The degree of consistency in measurement |
| 3 Examples of Measurement Reliability | Internal Consistency, Test-retest reliability, and interrater reliability. |
| Internal Consistency | Items on test relate highly with each other |
| Test-retest Reliability | Measures are stable over time |
| Interrater Reliability | Different observers provide similar scores about the same person's behaviors |
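Internal consistency, the first of the three reliability types above, is commonly indexed by Cronbach's alpha. A hypothetical sketch with made-up item scores (rows are respondents, columns are items); the helper and data are illustrative:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))                       # one tuple per item
    item_var = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Five respondents answering three items that relate highly with
# each other, so alpha comes out near 1.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 3],
    [1, 2, 1],
]
alpha = cronbach_alpha(scores)
```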
| Measurement Validity | Degree to which the construct of interest is accurately measured |
| 3 Examples of Measurement Validity | Face Validity, Predictive Validity, Convergent Validity |
| Face Validity | Does it look like a measure of the construct? |
| Predictive Validity | Predicting the development of construct from child to adult; does childhood diagnosis predict adult diagnosis? |
| Convergent Validity | Does the measure relate to other measures of the same and similar constructs? |
| Common Kinds of Measures | Self-Report, Informant report, trained rater, observation, psychophysiological, archives, performance-based. |
| Statistical Conclusion Validity | Aspects of data analysis that impact validity of conclusions drawn |
| Threats to Statistical Conclusion Validity | Low Statistical Power, Multiple Comparisons, Measurement Unreliability. |
| Low Statistical Power | No significant effects b/c the sample size was too small |
| Multiple Comparisons | Significant effects due to chance, given the number of tests conducted |
| Measurement Unreliability | No significant effects b/c the measures were unreliable |
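The multiple-comparisons threat above is just arithmetic: with m independent tests each at alpha = .05, the chance of at least one false positive grows quickly. A minimal sketch, including the standard Bonferroni fix (dividing alpha by m), with illustrative numbers:

```python
alpha = 0.05

def familywise_error(m, a=alpha):
    """P(at least one false positive) across m independent null tests."""
    return 1 - (1 - a) ** m

fwe_20 = familywise_error(20)                       # ~0.64 with 20 tests
fwe_bonferroni = familywise_error(20, alpha / 20)   # back near .05
```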
| Statistical Significance | p < .05 |
| An effect that is statistically significant: | Reveals little about how meaningful a finding is |
| Psychological measure scores | Measures yield scores that do not have a direct relation to the real world |
| Definition of Clinical Significance | The degree an effect had a meaningful impact on the "real world" functioning of participants |
Created by:
roxandsocks