(image by Lea Suzuki, courtesy the Chronicle)
Two years after a bitter fight over its creation, more than 3,200 people have come before the court, which is now lauded by its early critics. Unlike jail, supporters say, the program gives San Francisco’s underserved residents the support they need to clean up and avoid trouble.
“Those are things we can do faster than most and do more effectively than most,” said Commissioner Everett Hewlett Jr., who has overseen the court since January.
Each day, Hewlett hears from as many as 75 people such as Hicks who have agreed to enter the center’s program. Seventy percent of program participants have abused drugs or alcohol, and 36 percent are homeless. An additional 38 percent live in residential hotels.
Two questions remain unanswered in the article. The first is whether the Public Defender’s Office, once a staunch antagonist of the CJC (and of problem-solving courts in general), has come around (I hope it has). The other one, which will occupy us today, is whether any research team is following the defendants after their involvement with the court ends, so that recidivism can be measured.
A few words about recidivism studies: Merely following the defendants and calculating their recidivism rates is not enough. Unless the rates are zero, the first question any recidivism figure raises is "compared to what?" Different offenses and offenders have different baseline recidivism rates, so if a program is to be measured for its success, it has to be compared to the alternative: in this case, an ordinary criminal process.
The ideal time to have started a project like this would have been when the court was operating under a pilot model, because then many defendants committing comparable offenses were still being sent to the Hall of Justice. Of course, the best design would be random allocation of defendants to the CJC and the Hall of Justice, but I hardly need to explain why that would be extremely problematic from an ethics perspective; while random allocation creates a true controlled experiment, which is good for science, it is ethically questionable to condition people's fate upon their random allocation to one proceeding or the other (which is not to say that there aren't research teams doing it out there).

But even if such random allocation is impossible or undesirable, you still want to match the experiment group, CJC clients, to a control group of criminally-processed people. As Mark Mitchell and Janina Jolley explain so well in their book Research Design Explained, the matched-pairs technique requires rigorous adherence to detail, because each member of the experiment group needs to be matched to a member of the control group on all the important variables. In our case, the fact that people voluntarily decide whether to go to the CJC or to trial is a problem. The matching should take into account not only information about the offense (severity, circumstances, type) and the offender (demographics, criminal history) but also the choice of proceeding. Are people who opt for a jury trial at the Hall of Justice different in important ways from people who opt for the CJC? They probably are.

Matching well is very difficult, and the usual critique is that it is imperfect because one variable or another was not taken into account. The statistical test for a matched-pairs comparison assumes that the pairing was effective and rigorously done. There are ways to control for imperfect matching, but they are not perfect either.
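To make the matched-pairs comparison concrete, here is a minimal sketch with entirely hypothetical numbers (not CJC data). One standard test for paired binary outcomes, such as "reoffended / did not reoffend" within each matched pair, is McNemar's exact test, which looks only at the discordant pairs, i.e., the pairs in which exactly one member reoffended:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test for paired binary outcomes.

    b: pairs where the program client reoffended but the matched control did not
    c: pairs where the matched control reoffended but the client did not
    Concordant pairs (both or neither reoffended) carry no information
    about the difference between the groups, so they are not needed.
    """
    n = b + c
    if n == 0:
        return 1.0
    # Under the null hypothesis the discordant pairs split 50/50,
    # so the smaller count follows a Binomial(n, 0.5) tail.
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical illustration: among the discordant matched pairs,
# 8 had the program client reoffend and 20 had the control reoffend.
p = mcnemar_exact(8, 20)
print(round(p, 3))  # prints 0.036: a nominally significant difference
```

The caveat above applies with full force: the p-value is only as trustworthy as the matching, because the test takes for granted that each pair really is comparable on all the important variables.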
Nevertheless, following up on the recidivism rates of CJC graduates would still provide helpful information, because it would at least allow some comparison to general recidivism rates, or to similar studies done in comparable cities. If such a study is already underway and the CCC blog simply does not know about it, I invite the CJC personnel or the research team to tell us about their design and progress.