How are Resources Evaluated?
Appraisal is the third step in the Evidence-Based Medicine process.
It requires that the evidence found be evaluated for its validity and clinical usefulness.
What is validity?
Internal validity is the extent to which the experiment demonstrated a cause-effect relationship between the independent and dependent variables.
External validity is the extent to which one may safely generalize from the sample studied to the defined target population and to other populations.
What is reliability?
Reliability is the extent to which the results of the experiment are replicable. The research methodology should be described in detail so that the experiment could be repeated with similar results.
Critically Appraised Topics (CATs)
CATs are concise, standardized critical summaries of individual research articles, each providing an appraisal of the study's evidence.
If a CAT already exists for an article, it can be read quickly and the clinical bottom line can be put to use as the clinician sees fit. If a CAT does not exist, the CAT format provides a template to appraise the article of interest.
Sample questions for evaluating a study:
♦Has the study's aim been clearly stated?
♦Does the sample accurately reflect the population?
♦Have the sampling method and sample size been described and justified?
♦Have exclusions been stated?
♦Is the control group easily identified?
♦Is the loss to follow-up detailed?
♦Can the results be replicated?
♦Are there confounding factors?
♦Are the conclusions logical?
♦Can the results be extrapolated to other populations?
Key terms:
♦Hypothesis - a statement that is believed to be true but has not yet been tested.
♦Independent variable - the component of an experiment that is controlled by the researcher (for example - a new therapy).
♦Dependent variable - the component of an experiment that changes, or not, as a result of the independent variable (for example - the existence of a disease).
♦Bias - prejudice or a lack of neutrality; a systematic deviation from the truth that arises in the design or conduct of the research and affects its conclusions.
♦Confounding - a mixing of the effects within an experiment because the variables have not been sufficiently separated. Possible confounding variables should be discussed in the report of the research.
♦See also Study Design Terminology from the Levels of Evidence tab.