Review Process
Pre-screening and Approval for Review
NREPP identifies programs for review in three ways:
- Environmental scans: SAMHSA and NREPP contractor staff conduct environmental scans (including literature searches, focus groups, public input, and interviews) to identify potential interventions for review.
- Agency nomination: SAMHSA identifies programs and practices addressing specific agency priorities.
- Open submission: Programs are submitted directly for review. Programs identified through the open submission process are prioritized for review.
Re-Review Process
The changes to the review process will apply to all re-reviews of programs currently posted on NREPP. Approximately 110 programs posted on NREPP will be reviewed each year over the next 3 years. Program contacts/developers will be notified at least 60 days prior to the re-review date that their program has been selected for re-review. The re-review will follow the procedures presented on this page.
Literature Search and Screening
NREPP contractor staff contact the developer to request any additional evaluation studies and information about resources for dissemination and implementation (RFDI). Applicants will be asked to complete the RFDI checklist. Although SAMHSA has determined that programs no longer need RFDI materials to be reviewed, programs with such materials will be prioritized for review.
To ensure a comprehensive report, a literature search is conducted to identify other relevant evaluation studies. All evaluation materials are screened and NREPP contractor staff determine which studies and outcomes are eligible for review. SAMHSA has determined that all eligible outcomes will be reviewed, but that programs with positive impacts on outcomes and populations of interest will be prioritized over programs without positive impacts.
All evaluation studies that meet minimum criteria are eligible for review; these criteria include being published within the past 25 years and falling within a 10-year time frame, as defined by the most recent eligible article of a study.
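As an illustration only, the two date-based criteria could be checked with a sketch like the one below. The function name and inputs are hypothetical, and the remaining minimum criteria are not shown.

```python
from datetime import date

def study_dates_eligible(pub_years, today=None):
    """Check the two date-based eligibility criteria for one study.

    pub_years: publication years of the study's eligible articles.
    (Hypothetical helper; not NREPP tooling.)
    """
    today = today or date.today()
    most_recent = max(pub_years)
    # Published within the past 25 years.
    within_25_years = (today.year - most_recent) <= 25
    # All articles fall within a 10-year time frame ending at the
    # most recent eligible article.
    within_10_year_frame = (most_recent - min(pub_years)) <= 10
    return within_25_years and within_10_year_frame

# Articles from 2006, 2009, and 2012 fall within a 10-year frame,
# and the 2012 article is within the past 25 years.
print(study_dates_eligible([2006, 2009, 2012]))  # True
```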
Expert Review
NREPP contractor staff identify two certified reviewers to conduct the review (Note: re-reviews of programs posted on NREPP may be completed by one reviewer). Reviewers must complete a Conflict of Interest form to confirm no conflict exists that would require recusal.
Review packets are sent to reviewers to assess the rigor of the study and the magnitude and direction of the program's impact on eligible outcomes. Studies identified during the submission process or by the literature search may be included in the review, but not all evaluation studies are necessarily included in the review packet. For instance, studies that do not meet the minimum criteria, or that assess only outcomes not included in the NREPP outcome taxonomy, will not be reviewed. The outcome taxonomy currently includes 16 mental health, 12 substance abuse, and 27 wellness outcomes relevant to SAMHSA's mission. An iterative process was applied to condense an extensive list of over 1,000 outcome tags (extracted from studies reviewed on NREPP) into aggregate constructs. All outcomes were reviewed by scientists with expertise in the field of behavioral health and by SAMHSA staff.
Reviewers independently review the studies provided and calculate ratings using the NREPP Outcome Rating Instrument.
Outcomes are assessed on four dimensions. Study reviewers assign numerical values to each dimension in the NREPP Outcome Rating Instrument (with the exception of effect size). To support consistency across reviews, the dimensions include definitions, and the NREPP Outcome Rating Instrument provides other guidance that reviewers consider when rating findings. Reviewers also make note of any other information that should be highlighted as being of particular importance.
The study reviewer is responsible for making a reasonable determination as to the strength of the methodology, fidelity, and program effect, based on the provided documentation and their specialized knowledge of program evaluation and the subject matter. If the reviewers' ratings differ by a significant margin, a consensus conference may be held to discuss and resolve the differences.
In addition to this review by certified reviewers, NREPP staff also assess programs' conceptual frameworks.
NREPP Methodology
To assess the strength of evidence, each eligible reported effect or measure is rated on four dimensions; each dimension includes multiple elements (for instance, the strength-of-methodology dimension includes design/assignment and attrition). Not all reported effects or measures are assessed: those measured by a single item that is either subjective or not well established are excluded, those for which no effect size can be calculated are excluded, and findings not related to an NREPP outcome of interest are excluded. Once all eligible measures or effects are rated, the scores for all measures or effects that fall under the same outcome are combined, including those reported across studies.
The effect size calculations are based on standard statistical methods. NREPP staff calculate Hedges' g effect sizes for both continuous and dichotomous (yes/no) outcomes. Whenever possible, NREPP staff calculate an effect size that is adjusted for baseline differences.
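For reference, the sketch below shows the standard textbook formulas for Hedges' g: the standardized mean difference with a small-sample bias correction for continuous outcomes, and one common logit-based conversion for dichotomous outcomes. It is an illustration under those assumptions, not NREPP's actual implementation, and the baseline adjustment is not shown.

```python
import math

def hedges_g_continuous(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g for two independent groups on a continuous outcome."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample bias correction
    return j * d

def g_from_odds_ratio(odds_ratio):
    """One common way to place a dichotomous (yes/no) outcome on the
    standardized-mean-difference scale: d = ln(OR) * sqrt(3) / pi."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

print(round(hedges_g_continuous(10.0, 4.0, 50, 8.0, 4.0, 50), 3))  # 0.496
```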
Evidence Classes and Outcome Ratings
The outcome rating is based on the evidence class and the strength of the conceptual framework. The graphic below summarizes all of the components of the outcome rating.
[Graphic: Components of the Final Outcome Rating]
The evidence class for each reported effect is based on a combination of evidence score and effect class.
- Evidence score is based on the rigor and fidelity dimensions and is rated as strong, sufficient, or insufficient.
- Effect class is based on the confidence interval of the effect size (see the sketch after this list):
- Favorable: Confidence interval lies completely within the favorable range
- Probably favorable: Confidence interval spans both the favorable and trivial ranges
- Trivial: Confidence interval lies completely within the trivial range
- Possibly harmful: Confidence interval spans both the harmful and trivial ranges
- Harmful: Confidence interval lies completely within the harmful range
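A minimal sketch of this classification, assuming a symmetric trivial range such as (-0.1, 0.1) around zero; the actual NREPP thresholds and the handling of outcome direction are not stated on this page:

```python
def effect_class(ci_low, ci_high, trivial=0.1):
    """Classify an effect-size confidence interval, assuming larger
    values are favorable and the trivial range is (-trivial, +trivial)."""
    if ci_low >= trivial:
        return "Favorable"            # CI entirely in the favorable range
    if ci_low >= -trivial and ci_high > trivial:
        return "Probably favorable"   # spans the trivial and favorable ranges
    if ci_low >= -trivial and ci_high <= trivial:
        return "Trivial"              # CI entirely in the trivial range
    if ci_high <= -trivial:
        return "Harmful"              # CI entirely in the harmful range
    if ci_high <= trivial:
        return "Possibly harmful"     # spans the harmful and trivial ranges
    return "Wide-ranging"             # spans harmful through favorable

print(effect_class(0.15, 0.60))   # Favorable
print(effect_class(-0.05, 0.30))  # Probably favorable
print(effect_class(-0.30, 0.05))  # Possibly harmful
```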
The conceptual framework is based on whether a program has clear goals, activities, and a theory of change.
The evidence score and effect class are then combined to categorize each reported effect into one of the evidence classes depicted below.
DESCRIPTION OF EVIDENCE CLASSES

| Evidence Class | Evidence Description |
| --- | --- |
| Class A | Highest quality evidence with confidence interval completely within the favorable range |
| Class B | At least sufficient quality evidence with confidence interval completely within the favorable range |
| Class C | At least sufficient quality evidence with confidence interval spanning both the favorable and trivial ranges |
| Class D | At least sufficient quality evidence with confidence interval completely within the trivial range |
| Class E | At least sufficient quality evidence with confidence interval spanning both the harmful and trivial ranges |
| Class F | At least sufficient quality evidence with confidence interval completely within the harmful range |
| Class G | Limitations in the study design preclude further reporting on the outcome |
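Read as a lookup, the table pairs the evidence score (from the rigor and fidelity dimensions) with the effect class. A sketch of that pairing follows; the exact decision rules, including how a wide-ranging effect is handled, are assumptions here:

```python
def evidence_class(evidence_score, effect_cls):
    """Map (evidence score, effect class) to a letter per the table above."""
    if evidence_score == "insufficient":
        return "G"  # design limitations preclude further reporting
    by_effect = {
        "Favorable": "B",
        "Probably favorable": "C",
        "Trivial": "D",
        "Possibly harmful": "E",
        "Harmful": "F",
    }
    # Effect classes not covered by the table (e.g., wide-ranging)
    # default to G in this sketch; the page does not specify them.
    cls = by_effect.get(effect_cls, "G")
    # Strong evidence of a fully favorable effect earns the top class.
    if cls == "B" and evidence_score == "strong":
        return "A"
    return cls

print(evidence_class("strong", "Favorable"))    # A
print(evidence_class("sufficient", "Trivial"))  # D
```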
The evidence classes for each reported effect within an overall outcome are then pooled into an overall outcome score. Next, the overall outcome score is linked with the conceptual framework score to determine the final outcome rating for each outcome (see table below). To be rated effective for an outcome, a program must have a strong conceptual framework and strong evidence of a favorable program effect.
OUTCOME RATING

| Outcome Evidence Rating | Icon | Definition |
| --- | --- | --- |
| Effective | [icon] | The evidence base produced strong evidence of a favorable effect. |
| Promising | [icon] | The evidence base produced sufficient evidence of a favorable effect. |
| Ineffective | [icon] | The evidence base produced sufficient evidence of a trivial, possibly harmful, or wide-ranging effect. |
| Inconclusive | * | Limitations in the study design or a lack of effect size information preclude further reporting on the effect. |

*Outcomes rated as "inconclusive" are not depicted with an icon, and program profiles are not prepared for programs with "inconclusive" ratings for every outcome.
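Taken together, the stated rules might be encoded as in the sketch below. Only the statements on this page are represented; the pooling rule across reported effects and the placement of Classes C and F are assumptions.

```python
def outcome_rating(pooled_evidence_class, strong_framework):
    """Final rating for one outcome from its pooled evidence class
    and the conceptual framework score."""
    if pooled_evidence_class == "A" and strong_framework:
        return "Effective"    # strong favorable evidence + strong framework
    if pooled_evidence_class in ("A", "B", "C"):
        return "Promising"    # sufficient evidence of a favorable effect
    if pooled_evidence_class in ("D", "E", "F"):
        return "Ineffective"  # trivial or (possibly) harmful effect
    return "Inconclusive"     # Class G: cannot report further

print(outcome_rating("A", strong_framework=True))   # Effective
print(outcome_rating("B", strong_framework=True))   # Promising
print(outcome_rating("D", strong_framework=False))  # Ineffective
```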
Reporting
The ratings and descriptive information are compiled into a program profile.
A courtesy copy of the program profile is shared with the program's developer or submitter, who may suggest revisions to the profile. All completed profiles will be published on the NREPP website except those for which outcome ratings could not be determined due to inconclusive evidence.*
The final program profile is submitted to SAMHSA for review, approval, and posting on the NREPP website.
* Programs that are reviewed but for which there is only inconclusive evidence for every outcome will be listed by name on the NREPP website.