Statistics
Interpreting and Utilizing Clinical Statistics
Question | Answer |
---|---|
T:F, A poorly designed study cannot be statistically evaluated | False. It can be evaluated |
T:F, A poorly applied statistical test can never be evaluated | True. |
What type of statistics makes inferences from a sample to a population | Inferential Statistics |
What type of stats summarizes the essential characteristics of a data set | Descriptive |
P-value and Confidence interval are used in what type of statistics | Inferential |
What are 2 examples of measurements that descriptive stats evaluates | central tendency and variability |
Average for a set of numbers | mean |
middlemost value in a set of ranked data | median |
most frequently occurring number in a data set | mode |
What are the 3 measures of variability? (Be able to describe all 3; page 16 of notes) | range, standard deviation, standard error of the mean (SEM) |
What measure of variability relates the sample to the population | SEM |
Does hypothesis testing establish "causality"? | yes |
Does hypothesis testing establish certainty? | No |
T:F, Causality and certainty are synonymous | False |
What does the null hypothesis state? | That the treatment being evaluated has no effect on the outcome of interest |
What type of error is described by "A false positive" | Type 1 |
What is a type 2 error | when you falsely conclude that there is no difference when in fact there really is one; a false negative |
What type of error is seen when you reject the null when you really should have accepted it | type 1 |
What is alpha | Risk of experiencing a type 1 error; it is the risk we are willing to take that we will find a chance result to be significant |
What is usually the accepted alpha value | 5% |
T:F, alpha needs to be established a priori, however beta does not have to be established a priori | False. Both need to be established a priori |
What is the risk we take at making a type 1 error | alpha |
What is the risk of making a type 2 error | Beta |
How would you calculate Beta | 1 - power |
What is usually the normal designation for a value of beta | 20% |
T:F, Hypothesis testing establishes whether the outcome of interest is due to chance alone or another factor | true |
What is another name for hypothesis of no difference | null hypothesis |
What are two ways to increase power | increase alpha (significance level) or increase sample size; alternatively, use a one-sided test or reduce variability |
Which type of tailed test accounts for the possibility that the drug will be inferior to the control | two-sided test |
What's one example of a downfall of a one-sided test | it only tests for benefit in one direction and gives no information about the other direction. E.g., a beneficial drug effect will be detected, but a harmful one will not |
LDL cholesterol is an example of what kind of measurement | continuous |
In a normal distribution what are the percentages of information found within 1, 2, and 3 standard deviations (see the sketch after this table) | 1 SD - 68.3%, 2 SD - 95.5%, 3 SD - 99.7% |
What are the mean and SD in a standard normal distribution? | mean = 0, SD = 1 |
T:F, the central limit theorem states that as the sample size decreases and becomes sufficiently large, the distribution of the sample means tends toward a normal distribution | False. This will happen as the sample size INCREASES |
If a study were conducted on whether placebo or drug A was better, and a sample of 20 people were randomly assigned, with results of p<.005 and a 95% confidence interval, could you conclude that the values were normally distributed? | No. Normal distribution is determined by sample size; there has to be a sample size of at least 30 to assume a normal distribution |
What is the difference between parametric and nonparametric? | parametric = normally distributed, continuous data. Nonparametric = not normally distributed, nominal or ordinal data |
What is the probability of the observed result or a more extreme result occurring by chance alone | p-value |
the p-value is the risk of experiencing what type of error, a false positive or a false negative? | false positive |
How would you interpret p<.05? What does that mean? How would you describe that value in words? | It means that there is a less than 5% chance that what we observed was due to chance alone. |
If the p-value is too high, what should we do to the null hypothesis | you have insufficient evidence to reject the null = accept the null hypothesis |
T:F, The p-value is an important tool in determining clinical significance | FALSE! The p-value infers nothing about clinical significance! It relates to statistical significance! |
Is the p-value determined post-hoc or a priori | post hoc |
T:F, the smaller the sample size the more narrow the CI | False, the larger the sample size the more narrow the confidence interval |
Do you want the CI to be small or large | small |
What value gives you statistical significance | p-value |
what value gives you clinical significance | CI |
What is the equation used to determine relative risk reduction? (see the worked example after this table) | (high - low) / high |
What is the equation for absolute risk reduction? | high - low |
what is the equation for number needed to treat | 1 / absolute risk reduction |
What is the relative risk reduction? | the reduction in risk of one therapy relative to another |
what is the absolute risk reduction | the absolute difference between the probabilities of the treatment event rate and control event rate |
How would you define NNT | the number of subjects needed to be treated over a period of time in order to see the benefit of a therapy in 1 subject |
Which one of the following is used to make clinical decisions, NNT, ARR, RRR. | NNT |
What does the number needed to harm mean | the number needed to treat before you experience one adverse reaction |
What is the equation for NNH | 1 / absolute risk increase (ARI) |
What is the relationship between sample size and the ability to detect a difference? | Small sample sizes can detect only large differences; large sample sizes can detect small differences |
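
A minimal sketch to check the 1/2/3 SD coverage percentages cited above (68.3%, 95.5%, 99.7%): for a standard normal variable Z, P(|Z| < k) = erf(k/√2), which Python's built-in math module can compute.

```python
# Check the normal-distribution coverage figures from the cards above.
# For a standard normal Z, P(|Z| < k) = erf(k / sqrt(2)).
from math import erf, sqrt

for k in (1, 2, 3):
    print(f"within {k} SD: {erf(k / sqrt(2)):.2%}")

# Prints:
# within 1 SD: 68.27%
# within 2 SD: 95.45%
# within 3 SD: 99.73%
```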
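
To make the RRR, ARR, NNT, and NNH equations above concrete, here is a minimal worked sketch using invented event rates (a hypothetical 20% control event rate vs. 15% treatment event rate, and hypothetical adverse-event rates of 8% vs. 4%); the numbers are for illustration only, not from the notes.

```python
# Worked example of the risk-reduction equations, using hypothetical
# (invented) event rates purely for illustration.
control_event_rate = 0.20    # "high" rate (events in the control group)
treatment_event_rate = 0.15  # "low" rate (events in the treated group)

arr = control_event_rate - treatment_event_rate  # absolute risk reduction
rrr = arr / control_event_rate                   # relative risk reduction = (high - low) / high
nnt = 1 / arr                                    # number needed to treat

print(f"ARR = {arr:.2f}")   # 0.05 -> a 5 percentage-point reduction
print(f"RRR = {rrr:.2f}")   # 0.25 -> a 25% relative reduction
print(f"NNT = {nnt:.0f}")   # 20 -> treat 20 patients to benefit 1

# NNH is the analogous calculation for harm, built on the absolute
# risk increase (ARI) of an adverse event with treatment.
treatment_adverse_rate = 0.08  # hypothetical
control_adverse_rate = 0.04    # hypothetical

ari = treatment_adverse_rate - control_adverse_rate
nnh = 1 / ari
print(f"NNH = {nnh:.0f}")   # 25 -> 1 extra adverse event per 25 patients treated
```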