To be included in the meta-analysis, each study had to (1) focus on juvenile offenders, (2) examine VOM outcomes and recidivism, and (3) use a VOM group and a comparison group. The search identified twenty-two studies in the area of VOM and recidivism (including Carr and Nelson, 2000; Cosden, Casas, and Wolfe, 1999; Dick, 1999; Hitao, 1999; Lee, 1999; Niemeyer and Schichor, 1996; Nugent and Paddock, 1996; Roy, 1993; Schichor, Sechrest, and Matthew, 2000; Schneider, 1990; Stone, 2000; Stone, Helms, and Edgeworth, 1998; Umbreit, 1994; Umbreit and Coates, 1993; URSA Institute, 1993; and Winamaki, 1997). Of these, fifteen studies met the selection criteria. Studies were excluded when they did not have a comparison group, used a pretest-posttest design, or included both adults and juveniles in the sample. The sample for this study included 9,172 juveniles from twenty-one service sites.
Seven variables were identified that might moderate the impact of VOM on reoffense: (1) quality of research design, (2) type of comparison group, (3) type of offense, (4) definition of reoffense, (5) source of the study, (6) sample, and (7) length of follow-up. Differences in reoffense rates between groups could be caused by a lack of equivalence in the initial formation of the groups. Therefore, it is important that studies use methodological procedures such as random assignment, matching, or statistical evaluation of equivalence between groups. Quality of research design was coded 1 = methodological procedures were used to ensure equivalent groups, and 2 = no such methods were used.
Some comparison groups consisted of youth who refused to participate in VOM, while others were assigned to alternative treatment programs. Differences in reoffense rates could be influenced by the nature of the comparison group. Comparison groups of youth who refused participation were coded 0, and groups that were assigned to other treatments were coded 1. Type of offense, that is, property offense versus person-related offense, could also moderate rates of reoffense. Samples that had only property offenses were coded 0, and those with both property and person offenses were coded 1. Definition of reoffense was coded 1 if the offender was adjudicated guilty during the follow-up period and coded 0 if reoffense was defined as any official contact with law enforcement, court, or arrest. Research that reports negative or nonsignificant results, program evaluation reports, master's theses, and dissertations are frequently not published, and their exclusion from the literature can bias results. Published articles were coded 1, and unpublished research was coded 0. Samples in some studies consisted of only VOM-referred clients. This could bias reoffense rates if persons with a more serious history of offenses were excluded. If a study contained all
The primary analyses consisted of computation of effect sizes. Effect sizes were computed as the mean of the criterion group, VOM, minus the mean of the contrast group, divided by the pooled standard deviation of the treatment and contrast groups. For studies that reported percentages, the effect sizes were computed using the probit transformation of differences in proportions to effect size calculations (Glass, McGraw, and Smith, 1981). For those studies that reported nonsignificant differences but did not report means and standard deviations, a zero was entered for the effect size of that outcome measure. Effect sizes were adjusted to correct for bias attributable to studies with small sample sizes and to ensure that these studies did not inflate overall effect sizes. Weighting procedures described by Hedges and Olkin (1985) were used to combine effect sizes from different studies and to give greater weight to studies with larger sample sizes, which provide greater reliability.
The effect size reflects the distance the average VOM client was from the average contrast client, expressed in standard deviation units. An effect size of 1.00 would indicate that the average VOM client would have been one standard deviation higher than the contrast group. Effect sizes approximating zero would indicate no differential advantage for either treatment. Negative effect sizes would suggest that VOM treatment was less effective than the contrast treatment.
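To make these computations concrete, the following is a minimal sketch in Python (using SciPy's normal distribution routines) of the steps described above. The function names are illustrative, and the small-sample correction and inverse-variance weights follow the standard Hedges and Olkin (1985) formulas; this is not the code used in the original study.

```python
# Illustrative sketch of the effect-size computations described in the text;
# assumes the standard Hedges and Olkin (1985) correction and weights.
from math import sqrt
from scipy.stats import norm

def cohen_d(mean_vom, mean_contrast, sd_vom, sd_contrast, n_vom, n_contrast):
    """Standardized mean difference: (VOM mean - contrast mean) / pooled SD."""
    pooled_var = ((n_vom - 1) * sd_vom ** 2 + (n_contrast - 1) * sd_contrast ** 2) / (
        n_vom + n_contrast - 2)
    return (mean_vom - mean_contrast) / sqrt(pooled_var)

def d_from_proportions(p_vom, p_contrast):
    """Probit transformation of a difference in proportions (Glass, McGraw, and
    Smith, 1981). Sign convention assumed here: a lower VOM recidivism proportion
    yields a positive effect size. Proportions must lie strictly between 0 and 1."""
    return norm.ppf(p_contrast) - norm.ppf(p_vom)

def small_sample_correction(d, n_vom, n_contrast):
    """Correct d for the upward bias of small samples."""
    total_n = n_vom + n_contrast
    return d * (1 - 3 / (4 * total_n - 9))

def inverse_variance_weight(d, n_vom, n_contrast):
    """Weight each study by the inverse of its sampling variance, so larger
    (more reliable) studies count more when effect sizes are combined."""
    total_n = n_vom + n_contrast
    variance = total_n / (n_vom * n_contrast) + d ** 2 / (2 * total_n)
    return 1 / variance

def combined_effect(effects, weights):
    """Weighted mean effect size across studies."""
    return sum(w * d for d, w in zip(effects, weights)) / sum(weights)
```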
Figure 1. VOM Recidivism Effect Sizes
The mean effect size for all studies was M = .34, SD = .46. Eleven studies had positive effect sizes, two studies had no treatment effect, and two studies had negative effect sizes. See Figure 1 for a graphic depiction of the effect sizes.
Following the primary analysis of effect sizes, the next step was to test the homogeneity of effect sizes combined across all of the studies. As Durlak (1995) has noted, the Q statistic assesses whether the effects in the meta-analysis vary primarily due to sampling error or to systematic differences among the studies and sampling error. If the effects of the group of studies are homogeneous, it suggests that they come from the same population, and analysis of group means and correlations is warranted. The Q statistic for all studies was Q = 16.26, p > .05, ns, indicating homogeneity of the reviewed studies.
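A sketch of this homogeneity test, assuming the usual fixed-effect Q statistic with the inverse-variance weights from the earlier sketch, might look like the following; it is illustrative only, not the study's own computation.

```python
# Minimal sketch of the homogeneity test: Q is referred to a chi-square
# distribution with k - 1 degrees of freedom, where k is the number of studies.
from scipy.stats import chi2

def homogeneity_q(effects, weights):
    """Q = weighted sum of squared deviations from the weighted mean effect."""
    mean_effect = sum(w * d for d, w in zip(effects, weights)) / sum(weights)
    return sum(w * (d - mean_effect) ** 2 for d, w in zip(effects, weights))

def homogeneity_test(effects, weights):
    """Return (Q, p). A nonsignificant p is consistent with the studies
    sharing a single population effect size."""
    q = homogeneity_q(effects, weights)
    p_value = chi2.sf(q, df=len(effects) - 1)
    return q, p_value
```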
Results from meta-analyses can be positively biased in the estimation of treatment effects because journals rarely publish papers that report nonsignificant or negative results. This increases the possibility of Type I error, that is, of finding more positive results than would be the case if all existing studies were included in the review. Fail-Safe Ns for each group of effect sizes were calculated using the formula outlined by Rosenthal (1979). The Fail-Safe N represents the number of additional studies in a meta-analysis that would be necessary to reduce the mean effect size to .20, a small effect size (Wolf, 1986). The Fail-Safe Ns reported in this study indicate that confidence can be placed in the findings of the effect sizes (see Table 1).
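The criterion just described, the number of additional zero-effect studies needed to pull the mean effect size down to .20, can be sketched as follows. This is one common formulation of such a criterion; the study cites Rosenthal's (1979) formula, so the sketch should not be expected to reproduce the exact Fail-Safe Ns in Table 1.

```python
# Hedged sketch of the "reduce the mean effect to .20" criterion described above.
# The exact formula used in the study (attributed to Rosenthal, 1979) may differ.

def fail_safe_n(mean_effect, num_studies, criterion=0.20):
    """Number of additional zero-effect studies needed to pull the mean
    effect size down to `criterion`."""
    if mean_effect <= criterion:
        return 0.0
    return num_studies * (mean_effect - criterion) / criterion

# Illustration with the overall values reported in the text
# (M = .34 across fifteen study effect sizes):
# fail_safe_n(0.34, 15) -> 10.5, i.e., roughly eleven additional null studies.
```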
Discussion
| Variable: Recidivism | N | M | 95% Confidence Interval |
|---|---|---|---|
| Overall Average | 14 | 0.65 | |
By the end of year two the effect size was .000. Further research on the longitudinal effects with various subpopulations of juvenile offenders is needed.
The quality of the research methods used to create initially equivalent groups was a significant moderator of effect size. The seven studies that used random assignment, matching, or statistical methods to create equivalent groups had significantly lower effect sizes than studies that did not account for initial group equivalence. These studies, for the most part, also used a narrowly defined measure of reoffense, adjudicated guilty. Use of this more conservative measure may decrease the number of false positives in reoffenses.
Third, use of standardized measures such as the Child Behavior Checklist (Achenbach, 1991) to measure juvenile behavior is important to broaden the scope of measuring delinquent behavior beyond reoffense. Fourth, while there were no significant moderating effects by type of comparison group, type of offense, definition of reoffense, source of the study, and sample effect size, further research using other types of designs is needed to identify potential moderating variables and begin to describe what works for whom in VOM. Fifth, criminal justice programs and psychologically based treatment interventions are the common responses to delinquent behavior. This is an unfortunate dichotomy, given that many juveniles adjudicated guilty who participate in VOM also have comorbid psychiatric conditions that need treatment. For juveniles who are receiving treatment and participating in VOM, the presence of multiple treatments is a potential confound in interpreting research results. Conversely, it may be an important area of research to evaluate the effectiveness of a combination of restorative justice approaches and empirically supported psychological treatments.
Braithwaite, J. “Restorative Justice Is Republican Justice.” In G. Bazemore and L. Walgrave (eds.), Restorative Juvenile Justice: Repairing the Harm of Youth Crime. Monsey, N.Y.: Criminal Justice Press, 1999.
Carr, C., and Nelson, P. Centinela Valley’s Victim Offender Restitution Services: A Report for the Administrative Office of the Courts, Judicial Council of California. Los Angeles, 2000.
Durlak, J. “Understanding Meta-Analysis.” In P. Grimm and R. Yarnold (eds.), Reading and Understanding Multivariate Statistics. Washington, D.C.: American Psychiatric Press, 1995.
Latimer, J., Dowden, C., and Musie, D. The Effectiveness of Restorative Justice Practices: A Meta-Analysis. Ottawa: Research and Statistics Division, Department of Justice, 2001.
Latimer, J., and Klienknecht, S. The Effects of Restorative Justice Programming: A Review of the Empirical. Ottawa: Research and Statistics Division, Department of Justice, 2000.
Niemeyer, M., and Schichor, D. “A Preliminary Study of a Large Victim Offender Reconciliation Program.” Federal Probation, 1996, 60, 30–34.
Nugent, W., and Paddock, J. "Evaluating the Effects of a Victim Offender Reconciliation Program on Reoffense." Research on Social Work Practice, 1996, 6, 155–178.
Roy, S. “Two Types of Juvenile Restitution Programs in Two Midwestern Counties: A Comparative Study.” Federal Probation, 1993, 57, 48–53.
Stone, S., Helms, W., and Edgeworth, P. Cobb County Juvenile Court Mediation Program Evaluation. Cobb County, Ga.: Cobb County Juvenile Program, 1998.
Umbreit, M. S. “Crime Victims Seeking Fairness, Not Revenge.” Federal Probation, 1989, 53 (3), 52–57.
Wolf, F. Meta-Analysis: Quantitative Methods for Research Synthesis. Thousand Oaks, Calif.: Sage, 1986.
William Bradshaw is an associate professor at the College of Social Work, University of Tennessee-Knoxville.