
NREPP Glossary

The following definitions have been drawn from numerous sources and are tailored specifically for content on the NREPP Web site. The terms defined here may have slightly different meanings in other settings.


Adaptation
A modest to significant modification of an intervention to meet the needs of different people, situations, or settings.
Adverse effect
Any harmful or unwanted change in a study group resulting from the use of an intervention.
Attrition
The loss of study participants during the course of the study due to voluntary dropout or other reasons. Higher rates of attrition can potentially threaten the validity of studies. Attrition is one of the six NREPP criteria used to rate Quality of Research.
Baseline
The initial time point in a study just before the intervention or treatment begins. The information gathered at baseline is used to measure change in targeted outcomes over the course of the study.
Co-occurring disorders
In the context of NREPP, substance abuse and mental disorders that often occur in the same individual at the same time (e.g., alcohol dependence and depression); also known as comorbid disorders.
Comparative effectiveness research
Research that directly compares two or more interventions or approaches to care to determine which works best, for whom, and under what circumstances.
Comparison group
A group of individuals that serves as the basis for comparison when assessing the effects of an intervention on a treatment group. A comparison group typically receives some treatment other than what participants would normally receive and is therefore distinguished from a control group, which often receives no treatment or "usual" treatment. To make the comparison valid, the composition and characteristics of the comparison group should resemble those of the treatment group as closely as possible. Some studies use a control group in addition to a comparison group.
Confidence interval
In scientific studies, a range that conveys the precision of a measurement. Sometimes abbreviated as CI, a confidence interval is a range of values above and below the average measurement for a sample. For example, the average measurement of a depression score for a sample of people might be reported as 75 ± 5. The confidence interval therefore ranges from 70 to 80. This value allows researchers to have confidence that the actual average measurement of a depression score for all possible people is from 70 to 80, even though the average measurement for the sample is 75. Although a variety of confidence intervals can be calculated, the 95% confidence interval is most commonly used. A 95% confidence interval indicates that researchers could repeat the sampling procedure many times, and the calculated confidence interval would include the true average measurement for all possible people 95% of the time.
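To make the arithmetic in the example above concrete, the following is a minimal sketch (not part of the NREPP definition) of how a 95% confidence interval for a sample mean might be computed in Python. The depression scores and the use of SciPy's t distribution are illustrative assumptions.

```python
# Illustrative sketch (not from NREPP): a 95% confidence interval
# for the mean of a hypothetical sample of depression scores.
import numpy as np
from scipy import stats

scores = np.array([70, 72, 75, 78, 80, 74, 76, 73, 77, 75])  # hypothetical sample

mean = scores.mean()
sem = stats.sem(scores)        # standard error of the mean
df = len(scores) - 1           # degrees of freedom

# 95% CI using the t distribution: mean +/- t_crit * SEM
lower, upper = stats.t.interval(0.95, df, loc=mean, scale=sem)
print(f"sample mean = {mean:.1f}, 95% CI = ({lower:.1f}, {upper:.1f})")
```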
Confounding variables
In an experiment, any characteristic that differs between the experimental group and the comparison group and is not the independent variable under study. These characteristics or variables "confound" the ability to explain the experimental results because they provide an alternative explanation for any observed differences in outcome. In assessing a classroom curriculum, for example, a confounding variable would exist if some students were taught by a highly experienced instructor while other students were taught by a less experienced instructor. The difference in the instructors' experience level makes it harder to determine if the differences in student outcomes (e.g., grades) were caused by the effects of the curriculum or by the variation in instructors. The likelihood that confounding variables might have affected the outcomes of a study is one of the six NREPP criteria used to rate Quality of Research.
Control group
A group of individuals that serves as the basis of comparison when assessing the effects of an intervention on a treatment group. Depending upon the study design, a control group may receive no treatment, a "usual" or "standard" treatment, or a placebo. The composition and characteristics of the control group should resemble those of the treatment group as closely as possible to make the comparison valid.
Core components
The most essential and indispensable components of an intervention (core intervention components) or the most essential and indispensable components of an implementation program (core implementation components).
Cultural appropriateness
In the context of public health, sensitivity to the differences among ethnic, racial, and/or linguistic groups and awareness of how people's cultural background, beliefs, traditions, socioeconomic status, history, and other factors affect their needs and how they respond to services. Generally used to describe interventions or practices.
Cultural competence
In the context of public health, the knowledge and sensitivity necessary to tailor interventions and services to reflect the norms and culture of the target population and avoid styles of behavior and communication that are inappropriate, marginalizing, or offensive to that population. Generally used to describe people or institutions. Because of the changing nature of people and cultures, cultural competence is seen as a continual and evolving process of adaptation and refinement.
Dissemination
The targeted distribution of program information and materials to a specific audience. The intent is to spread knowledge about the program and encourage its use.
DSM (Diagnostic and Statistical Manual of Mental Disorders)
The Diagnostic and Statistical Manual of Mental Disorders, or DSM, is the standard reference handbook used by mental health professionals in the United States to classify mental disorders. There have been multiple revisions of the DSM since it was first published by the American Psychiatric Association. The most recent version is the DSM-5, or Fifth Edition, published in 2013. Other versions include the DSM-IV, or Fourth Edition, and a text revision of the DSM-IV (DSM-IV-TR). Earlier editions that may be referenced in NREPP include the DSM-III and the DSM-III-R.
Effect size
In scientific studies, a number used to express, on a common scale, the difference between two groups or the change within the same group over time. It refers to the magnitude of the result. Calculating an effect size is a way to determine the effectiveness of an intervention or to compare the effectiveness of different interventions.
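As an illustration only, the sketch below computes Cohen's d, one widely used effect size for the difference between two group means. The score values and the pooled-standard-deviation formula are assumptions for the example, not part of the NREPP definition.

```python
# Illustrative sketch (not from NREPP): Cohen's d, a standardized
# mean difference between two independent groups using a pooled SD.
import numpy as np

def cohens_d(treatment, comparison):
    """Standardized mean difference between two independent groups."""
    t, c = np.asarray(treatment, float), np.asarray(comparison, float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical post-test scores for a treatment and a comparison group
print(f"d = {cohens_d([12, 15, 14, 16, 13], [10, 11, 12, 9, 10]):.2f}")
```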
Evidence-based
Demonstrating effectiveness in empirical research that meets a standard of scientific rigor. NREPP's criteria for effectiveness and scientific rigor are embodied in its minimum review requirements, which include the stipulation that an intervention must have demonstrated one or more positive behavioral outcomes (p ≤ .05) in substance abuse and/or mental health in at least one study using an experimental or quasi-experimental design. The standard of rigor used by NREPP has changed over time; past minimum requirements are available in NREPP's Federal Register notices, and current minimum requirements are available on the Submissions page.
Experimental
A study design in which (1) the intervention is compared with one or more control or comparison conditions, (2) subjects are randomly assigned to study conditions, and (3) data are collected at both pretest and posttest or at posttest only. The experimental study design is considered the most rigorous of the three types of designs (experimental, quasi-experimental, and preexperimental).
Externalizing behaviors
Social behaviors and other external cues that reflect an individual's internal emotional or psychological conflicts. Examples include spontaneous weeping, "acting out," and uncharacteristic aggression. Reduction of externalizing behaviors is a frequently used measure of the success of treatment or intervention for mental or emotional disorders.
Fidelity
Fidelity of implementation occurs when implementers of a research-based program or intervention (e.g., teachers, clinicians, counselors) closely follow or adhere to the protocols and techniques that are defined as part of the intervention. For example, for a school-based prevention curriculum, fidelity could involve using the program for the proper grade levels and age groups, following the developer's recommendations for the number of sessions per week, sequencing multiple program components correctly, and conducting assessments and evaluations using the recommended or provided tools.
Generalizability
The extent to which a study's results can be expected to occur with other people, settings, or conditions beyond those represented in the study. Threats to generalizability include lack of randomization, effects of testing, multiple-treatment interference, selection-treatment interference, effects of experimental arrangements, experimenter effects, and specificity of variables.
Implementation
The use of a prevention or treatment intervention in a specific community-based or clinical practice setting with a particular target audience.
Implementation team
A core set of individuals charged with providing guidance through full implementation of the intervention. This team helps ensure engagement of the stakeholders, increases readiness for implementation, ensures fidelity to the intervention, monitors outcomes, and addresses barriers to implementation.
Indicated
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Indicated prevention strategies focus on preventing the onset or development of problems in individuals who may be showing early signs but are not yet meeting diagnostic levels of a particular disorder.
Internal validity
The degree to which the intervention or experimental manipulation was the cause of any observed differences or changes in behavior.
Internalizing behaviors
Behaviors that reflect an individual's transfer of external social or situational stresses to emotional, psychological, or physical symptoms. One well-known internalizing behavior is a child's development of stomach cramps when the parents argue; another is insomnia during a high-stress situation at work. Reduction of internalizing behaviors is a frequently used measure of the success of treatment or intervention for mental or emotional disorders.
Intervention
A strategy or approach intended to prevent an undesirable outcome (preventive intervention), promote a desirable outcome (promotion intervention) or alter the course of an existing condition (treatment intervention).
Legacy Programs
The label used by SAMHSA for all former Effective and Promising Programs, which were reviewed as part of the Center for Substance Abuse Prevention's Model Programs Initiative. Summaries for these Legacy Programs are listed in the Legacy Programs section of the NREPP Web site.
Logic model
A tool that allows key stakeholders to develop a strategic plan to address an identified community problem.
Mental health promotion
Attempts to (a) encourage and increase protective factors and healthy behaviors that can help prevent the onset of a diagnosable mental disorder and (b) reduce risk factors that can lead to the development of a mental disorder.
Mental health treatment
Assistance to individuals for existing mental health conditions or disorders.
Meta-analysis
A statistical procedure for combining the results of two or more studies on the same topic.
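For illustration, here is a minimal sketch of one common way individual study results are combined: a fixed-effect (inverse-variance) average of effect sizes. The effect sizes and variances are hypothetical, and the method shown is only one of several used in practice.

```python
# Illustrative sketch (not from NREPP): fixed-effect (inverse-variance)
# pooling of effect sizes from several hypothetical studies.
import numpy as np

effects = np.array([0.30, 0.45, 0.20])     # hypothetical study effect sizes
variances = np.array([0.02, 0.05, 0.03])   # their sampling variances

weights = 1.0 / variances                  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")
```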
Missing data
Data or information that researchers intended to collect during a study but that were not actually collected or were collected incompletely. Missing data may occur, for example, when survey respondents do not answer all questions in a survey, or when the researchers "throw out" or exclude survey questions because the responses do not meet validation checks. Missing data can threaten the validity and reliability of a study if steps are not taken to compensate for or "impute" (replace with calculated data) the missing information. Missing data are one of the six NREPP criteria used to rate Quality of Research.
Outcome
A change in behavior, physiology, attitudes, or knowledge that can be quantified using standardized scales or assessment tools. In the context of NREPP, outcomes refer to measurable changes in the health of an individual or group of people that are attributable to the intervention.
Outcome evaluation
An evaluation to determine the extent to which an intervention affects its participants and the surrounding environments. Several important design issues must be considered, including how to best determine the results and how to best contrast what happens as a result of the intervention with what happens without the program.
P-value
In scientific studies, a number representing the statistical probability that an outcome is due to chance alone rather than to the intervention. Typically, if the p-value is less than or equal to 1 in 20 (i.e., ≤ 5 in 100, or p ≤ .05), researchers conclude that the outcome of the study is unlikely to be due to chance alone.
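The following is a minimal sketch of how a p-value might be obtained and compared against the p ≤ .05 convention, using an independent-samples t test. The outcome scores are hypothetical, and the choice of test is an assumption for the example.

```python
# Illustrative sketch (not from NREPP): p-value from an independent-
# samples t test, interpreted against the p <= .05 convention.
from scipy import stats

treatment = [12, 15, 14, 16, 13, 17, 15]    # hypothetical outcome scores
comparison = [10, 11, 12, 9, 10, 13, 11]

t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value <= 0.05:
    print("Difference unlikely to be due to chance alone (p <= .05).")
```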
Preexperimental
A study design in which (1) there are no control or comparison conditions and (2) data are collected at pretest or posttest only; includes simple observational or case studies. The preexperimental study design provides the most limited scientific rigor of the three types of designs (experimental, quasi-experimental, and preexperimental).
Process evaluation
An evaluation to determine whether an intervention has been implemented as intended.
Program drift
A threat to fidelity due to compromises made during implementation.
Program fit
The degree to which a program matches a community’s needs, resources, and implementation capacity.
Psychometrics
The construction of instruments and procedures for measurement.
Quality assurance
Activities and processes used to check fidelity and the quality of implementation.
Quality of Research
One of the two main categories of NREPP ratings. Quality of Research (QOR) is how NREPP quantifies the strength of evidence supporting the results or outcomes of the intervention. Each outcome is rated separately. This is because interventions may target multiple outcomes, and the evidence supporting the different outcomes may vary. These QOR ratings are followed by brief "Strengths and Weaknesses" statements where reviewers comment on the studies and materials they reviewed and explain what factors may have contributed to high or low ratings. For more information on the scientific reviewers who rate QOR and how ratings are derived, see the NREPP page on Review Process Quality of Research.
Quasi-experimental
A study design in which (1) the intervention is compared with one or more control or comparison conditions, (2) subjects are not randomly assigned to study conditions, and (3) data are collected at pretest and posttest or at posttest only; includes time series studies, which have three pretest and three posttest data collection points. The quasi-experimental study design provides strong but more limited scientific rigor relative to an experimental design.
Ratings
NREPP provides two types of ratings for each intervention reviewed: Quality of Research and Readiness for Dissemination. Each intervention has multiple Quality of Research ratings (one per outcome) and one overall Readiness for Dissemination rating. QOR and RFD ratings are followed by brief "Strengths and Weaknesses" statements where reviewers comment on the studies and materials they reviewed and explain what factors may have contributed to high or low ratings.
Readiness for Dissemination
One of the two main categories of NREPP ratings. Readiness for Dissemination (RFD) is how NREPP quantifies and describes the quality and availability of an intervention's training and implementation materials. More generally, it describes how easily the intervention can be implemented with fidelity in a real-world application using the materials and services that are currently available to the public. For more information on the reviewers who rate RFD and how ratings are derived, see the NREPP page on Review Process Readiness for Dissemination.
Reliability of measure
The degree of variation attributable to inconsistencies and errors involved in measures or measurements. Key types include test-retest, interrater, and interitem. Reliability of measures is one of the six NREPP criteria used to rate Quality of Research.
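As an illustration of interitem reliability, the sketch below computes Cronbach's alpha for a hypothetical item-response matrix. The data and the choice of alpha (rather than a test-retest or interrater index) are assumptions for the example, not part of the NREPP rating criteria.

```python
# Illustrative sketch (not from NREPP): Cronbach's alpha, a common
# index of interitem reliability, for a hypothetical response matrix.
import numpy as np

# Rows = respondents, columns = items of a hypothetical scale
responses = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])

k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1).sum()      # sum of item variances
total_var = responses.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```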
Replication
Repetition of a study in which the original investigator(s) or an independent party uses the same protocol with an identical or similar target population, or a slightly modified protocol with a slightly different population, and obtains results consistent with the positive findings of the original evaluation.
Selective
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Selective prevention strategies focus on specific groups viewed as being at higher risk for mental health disorders or substance abuse because of highly correlated factors (e.g., children of parents with substance abuse problems).
Substance abuse prevention
Attempts to stop substance abuse before it starts, either by increasing protective factors or by minimizing risk factors.
Substance abuse treatment
Assistance to individuals for existing substance abuse disorders.
Sustainability
The long-term survival and continued effectiveness of an intervention.
Symptomatology
The combined symptoms or signs of a disorder or disease.
Systematic review
A literature review that attempts to collect, summarize, and present results of individual studies, and then synthesizes findings on a specific topic.
Universal
One of the three categories (Universal, Selective, Indicated) developed by the Institute of Medicine to classify preventive interventions. Universal prevention strategies address the entire population (national, local community, school, neighborhood), with messages and programs to prevent or delay the use/abuse of alcohol, tobacco, and other drugs.
Validity of measure
The degree to which a measure accurately captures the meaning of a concept or construct. Key types include pragmatic/predictive, face, concurrent/criterion, and construct. Validity of measures is one of the six NREPP criteria used to rate Quality of Research.
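As an illustration of concurrent/criterion validity, the sketch below estimates a validity coefficient as the Pearson correlation between a hypothetical measure and an external criterion. Both data series are invented for the example, and correlation is only one way validity evidence is gathered.

```python
# Illustrative sketch (not from NREPP): a criterion validity coefficient
# estimated as the correlation between a measure and an external criterion.
from scipy import stats

measure =   [10, 14, 12, 18, 16, 11, 15]   # hypothetical scale scores
criterion = [22, 30, 26, 38, 33, 24, 31]   # hypothetical criterion values

r, p = stats.pearsonr(measure, criterion)
print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")
```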