Comment: Making valid claims in social science research: A comment on Jenness and Calavita


By Tom Tyler, Yale Law School

I am writing to comment on several methodological issues raised by the article by Valerie Jenness and Kitty Calavita, entitled “‘It depends on the outcome’: Prisoners, grievances, and perceptions of justice.” I am pleased that the methodology blog for Law and Society Review has been created and provides a forum to discuss research design issues. I will address three aspects of the study: operationalization of the variables, statistical analysis, and inclusiveness of the literature review.

The Jenness/Calavita paper studies California prisons using data collected through interviews with prisoners. The paper says that it tests the perceptual procedural justice model (in particular, there are frequent references to the Tyler model) in a prison setting. The study concludes that “prisoners privilege the actual outcome of disputes as their barometer of justice,” showing “the dominance of substantive outcomes” (from the abstract).

I agree with Jenness and Calavita that prisons are an important and relatively neglected arena of perceived procedural justice research. I also agree that there are settings in which perceived procedural justice will be less important and perhaps not important at all. However, I want to raise a set of questions about whether this study actually shows that its particular prison context is one of those settings.

Appropriate operationalization of ideas

No person can own an idea, irrespective of whether that idea is procedural justice or legitimacy, nor can anyone say how ideas should be operationalized. So it is perfectly reasonable to study procedural justice/legitimacy in ways that are not related to my own theories or operationalizations. The key point is to be clear that that is what you are doing.

It is important to distinguish between creating a new theoretical model or a new way of operationalizing ideas and testing an existing model. As an example, Tankebe (2013) critiques the Tyler model of procedural justice and legitimacy on conceptual grounds. However, when Reisig, Tankebe, and Wang (2016) test the Tyler model, they take pains to measure the ideas as they were operationalized in the original model. If you do not do this, then you cannot know whether a failure to replicate is due to the original theory being wrong or due to differences in how the theory is operationalized across research projects.

While the Jenness/Calavita paper talks at considerable length about the Tyler model and the research it has inspired, the study itself does not use Tyler-like measures to assess the independent variable (perceived procedural justice). The paper criticizes perceived procedural justice theory and cites me at many points. My theory says that perceived procedural fairness matters. The authors then claim that their study demonstrates that perceptions of procedural justice do not matter in the prison context.

The problem is that the authors do not measure and use perceived procedural justice, as conceptualized in these Tyler-like models, in their analysis. In the paper the authors tested perceptual procedural justice theory in prisons using previously collected data. That study contained no Tyler-like measures of procedural justice. The measures that represent procedural justice in actual studies of procedural justice are well known and widely used. They ask people things like “how fairly they were treated” and “whether decisions were made fairly.” Despite the claim that they are testing procedural justice theory, not one single item from any procedural justice scale was used in this study. In fact, the items used did not even contain the words fairness or justice. Here is what Jenness/Calavita consider to be an item that measures procedural justice: “How satisfied were you with the way [your grievance] was managed?” (p. 54).

The authors test procedural justice theory without asking about procedural justice in the way it is widely studied in the literature. This means that their study does not provide evidence about what would be found about the impact of procedural justice in prisons if the more traditional approach were used. Further, the authors provide no evidence that these unrelated items (which contain no reference to fairness or justice) have anything to do with procedural justice, such as showing that they are correlated with traditional measures. They provide no evidence whatsoever about how their operationalizations are related to traditional approaches.
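To make concrete what such evidence could look like, here is a minimal sketch, not the authors’ analysis, of the kind of convergent-validity check described above: correlating the paper’s satisfaction item with a traditional procedural justice item such as “how fairly were you treated.” The column names and values below are hypothetical placeholders.

```python
# A minimal sketch of a convergent-validity check: do responses to the paper's
# satisfaction item track responses to a traditional procedural justice item?
# The data and column names are hypothetical placeholders, not the authors' data.
import pandas as pd

df = pd.DataFrame({
    "satisfaction_with_handling": [4, 2, 5, 1, 3],  # illustrative values only
    "treated_fairly":             [5, 2, 4, 1, 3],
})

# A simple Pearson correlation is the most basic evidence that the two items
# tap the same construct; real scale work would go further (e.g., reliability
# analysis or factor analysis).
r = df["satisfaction_with_handling"].corr(df["treated_fairly"])
print(f"r = {r:.2f}")
```

Something of this sort, reported alongside the items, would let readers judge how closely the authors’ operationalization tracks the traditional one.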

I think it is perfectly appropriate for the authors to say that they think a different way of operationalizing procedural justice is better. But they need to be clear that they are not testing the models of procedural justice they outline in their introduction. Whether those widely articulated and studied models are valid in a prison context is unexamined in this study.

The authors also use several procedural features and again infer, without any evidence, that those features influence whether prisoners feel fairly treated. On page 14 they use a set of indicators, such as whether the prisoner “had an official hearing,” to measure whether prisoners felt fairly treated. Again, there is no evidence that this indicator is related to perceptions of procedural justice. This methodological approach is especially puzzling since the authors specifically critique Bierie (2013) because he does not “measure prisoners’ perceptions of procedural justice” (p. 10). Instead, Bierie relies upon exactly the type of objective data (e.g., speed of response) that these authors critique and then go on to use themselves.

The authors cite my work and specifically say that their findings show it is wrong. It is important to emphasize that my work studies perceived procedural fairness in a particular way, and it is a manner they do not use or study. Because the authors do not measure perceived procedural justice in a way that is similar to that in most prior studies, their conclusions are not relevant either to testing my theory or to determining whether that form of perceived procedural justice matters in prisons.

The dangers of substituting intuition for statistical analysis

Ironically, even if we were to accept the authors’ novel definitions, we need to qualify their conclusion. Look at Table 3 on page 66. In this table, satisfaction with how the dispute is managed (procedural justice) influences satisfaction with the grievance outcome. Overall, the association reported is beta = 3.21, p < .001 (p. 66). Procedural justice does matter. The authors’ argument is just that it is less important than the actual outcomes of the dispute. The title of the paper is that it “depends upon the outcome.” Actually, it depends upon both procedural justice and the outcome.

Is outcome more important than procedure? On page 65 the authors say that the effect of outcome is “considerably larger.” This is, of course, not a statistical test. Both procedure and outcome are highly significant (p < .001) in Model 3. No evidence is presented to support the argument that one is statistically larger than the other. Visually comparing the magnitude of unstandardized coefficients is not a valid way to determine which one matters more, because the unstandardized coefficient reflects the underlying metric of the measures. The appropriate correction is to standardize the variables.
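For readers who want the mechanics, the standard correction rescales each raw coefficient by the standard deviations of the predictor and of the dependent variable. Those standard deviations are not reported in the paper, so the expression below is a standard formula offered only to illustrate the principle, not a reanalysis:

$$\beta_j^{*} = b_j \,\frac{s_{x_j}}{s_y}$$

where $b_j$ is the unstandardized coefficient for predictor $j$, $s_{x_j}$ is the standard deviation of that predictor, and $s_y$ is the standard deviation of the dependent variable.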

It is true that 5.38 is bigger than 3.90, but it also has a larger standard error (0.91 vs. 0.61). In statistics we compare strength by considering the ratio of the coefficient to its standard error. For procedures that ratio is 3.90/0.61 = 6.39, and for outcomes it is 5.38/0.91 = 5.91. This would suggest that, in these terms, procedure matters more than outcome. But, as noted, the “look, it’s bigger” test is not very valid. Statistically, it is likely that these two coefficients are not significantly different from each other. So procedure matters as much as outcome.
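As a rough illustration of how this comparison could be made explicit, the calculation below uses the coefficients and standard errors reported in the text. A proper test of whether the two coefficients differ also requires the covariance between the two estimates, which is not reported, so the difference statistic here assumes that covariance is zero; it is an approximation, not a reanalysis of the authors’ data.

```python
# Approximate comparison of the two Model 3 coefficients reported in the text.
# The covariance between the estimates is not reported, so the difference test
# below omits it (treats it as zero), which makes this an approximation only.
import math

b_procedure, se_procedure = 3.90, 0.61
b_outcome, se_outcome = 5.38, 0.91

# Ratio of each coefficient to its standard error (the usual test statistic).
t_procedure = b_procedure / se_procedure   # roughly 6.39
t_outcome = b_outcome / se_outcome         # roughly 5.91

# Approximate z statistic for the difference between the two coefficients.
z_diff = (b_outcome - b_procedure) / math.sqrt(se_outcome**2 + se_procedure**2)

print(f"t (procedure)  = {t_procedure:.2f}")
print(f"t (outcome)    = {t_outcome:.2f}")
print(f"z (difference) = {z_diff:.2f}")   # roughly 1.35, below the 1.96 cutoff
```

On these figures the difference statistic is roughly 1.35, well below the conventional 1.96 threshold, which is consistent with the point that the two coefficients cannot be shown to differ from each other.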

If the argument is that both procedures and outcomes matter approximately equally, then that conclusion does not distinguish this prison setting from many others. It is typically the case that both matter. Certainly this does not justify saying that outcomes matter more.

A broader literature review

Another aspect of my concern is with the authors’ presentation of the existing literature. The authors question the importance of procedural justice theories in a prison context based upon the results of their study. Others have similarly raised questions about various aspects of the theory of perceptual procedural justice in a recent exchange about its application to the arena of policing (Hagan & Hans, 2017; Nagin & Telep, 2017; Tyler, 2017). In that exchange, Hagan and Hans note that there are over 30,000 references to my work on procedural justice in the last five years.

Given the number and diversity of studies in this area, there is no simple way to characterize any single dependent variable as being central to procedural justice studies. But one particularly relevant point noted by Tyler (2017) is that recent research has focused upon dependent variables that are legally relevant, including decision acceptance, rule adherence, cooperation, and legitimacy. These dependent variables are not considered in the Jenness and Calavita study. It is true that some early procedural justice studies (Heinz, 1985; Casper, Tyler & Fisher, 1988) use procedural satisfaction as a dependent variable. However, more recent studies have tended not to use this measure, for the reasons Jenness and Calavita note: procedural satisfaction is highly correlated with outcome satisfaction. The paper ignores most of the recent studies in this area.

As noted, more recent studies focus on other relevant issues besides procedural satisfaction. An issue that I think is particularly relevant here is prisoner violence, both in prison and upon release. Our concern in understanding prison conditions is with their impact upon behaviors that we care about, such as violence. If procedural justice influences behaviors such as rule following in prison (e.g., violence) and recidivism upon release, it matters.

Interestingly, the literature on procedural justice and violence in prisons has recently been reviewed (Bierie & Mann, 2017). These authors give a more complete presentation of the findings. They identify a set of complicated longitudinal studies of prisoners, as well as randomized controlled trials, and conclude that “these studies show that when prisons are run in procedurally just ways, the result is reduced prison violence, increased prosocial change, and lowered recidivism” (p. 482).

In other words, disagreements about whether procedural satisfaction is an appropriate dependent variable aside, it seems clear that the authors’ limited focus on procedural satisfaction is too narrow to allow them to conclude that procedural justice does not matter in this or any prison setting. If procedural justice does not influence procedural satisfaction but does affect violence in and after prison, as research suggests, and if it shapes similar behaviors in non-prison settings, then procedural justice is important.

Research summary

The key question is what the goal of this study is. If it is to conceptualize procedural justice in a new way, the authors should be clear about that. If it is to test my own (and many others’) model, flowing from Thibaut and Walker (1975), then the authors ought to use the items employed in that body of work to test it.

The irony of the Jenness and Calavita study is that it characterizes my own procedural justice theory very well in the discussion, but then does not use a measurement strategy that fits that model. There is a big gap between the theory and what is tested. This makes the test an inappropriate one for evaluating the theory. It is important to recognize that this paper does not test perceptual procedural justice as it is widely operationalized in the existing literature. It tests a different theory, created by the authors and presented for the first time in this paper.

This study also misuses statistical analysis and concludes, contrary to the authors’ own evidence, that outcomes matter more than procedures (see Table 3). There is a reason that people use statistics: through intuition, people see what they want to see. The authors’ own statistical analysis does not support their statements.

Finally, the study is problematic for picking one of many potential consequences of procedural justice and ignoring a set of outcomes that have received support in the broader procedural justice literature. The authors’ argument leaves the mistaken impression that there is no other literature studying this issue, when in fact a number of studies show that the procedural justice of prisons has important influences on attitudes and behaviors.

For these reasons, I would argue that the authors have not made a strong case for their claim that perceived procedural justice is not important in this prison setting.



References
Bierie, D.M. (2013). Procedural justice and prison violence: Examining complaints among Federal Inmates (2000-2007). Psychology, Public Policy and Law, 19, 15-29.

Bierie, D.M. & Mann, R.E. (2017). The history and future of prison psychology. Psychology, Public Policy and Law, 23, 478-489.

Casper, J.D., Tyler, T., & Fisher, B. (1988). Procedural justice in felony cases. Law & Society Review, 22, 483-507.

Hagan, J. & Hans, V. (2017). Procedural justice theory and public policy. Annual Review of Law and Social Science, 13, 1-3.

Heinz, A.M. (1985). Procedure v. consequences. In S. Talarico (Ed.), Courts and Criminal Justice (pp. 13-34). Beverly Hills: Sage.

Nagin, D.S. & Telep, C.W. (2017). Procedural justice and legal compliance. Annual Review of Law and Social Science, 13, 5-28.

Tankebe, J. (2013). Viewing things differently: The dimensions of public perceptions of legitimacy. Criminology, 51, 103-135.

Tankebe, J., Reisig, M.D. & Wang, X. (2016).  A multidimensional model of police legitimacy.  Law and Human Behavior, 40(1), 11-22.

Thibaut, J. & Walker, L. (1975). Procedural Justice: A Psychological Analysis. Hillsdale, NJ: Erlbaum.

Tyler, T.R. (2017). Procedural justice and policing: A rush to judgment? Annual Review of Law and Social Science, 13, 29-53.



