January 29, 2020
By: David L. Weimer
Sometimes conferences actually do stimulate research that otherwise might not be done. Six years ago, I was enjoying a breakfast conversation at the SBCA annual research conference with our immediate past president, Clark Nardinelli. At that time, he was overseeing the Regulatory Impact Analysis (RIA) of Food and Drug Administration (FDA) rules implementing the Food Safety Modernization Act, which included provisions to reduce the risks posed to people and pets from adulterated pet food. He complained that, unlike the case of human mortality risks, he had no sound basis for monetizing changes in mortality risks for dogs and other pets. As dogs do not freely make tradeoffs between risks and wages in labor markets, I suggested that finding a value of statistical dog life would likely require a contingent valuation study. Would his unit provide the approximately $50 thousand needed to do the contingent valuation survey? Unfortunately, he had nothing but praise for the suggestion.
I spent the next few years unsuccessfully trying to find funding for the survey. Eventually, I decided that it would be a good use of the research funds remaining in my professorship account. As I had enjoyably learned much from my work in the past with my former students, Hank Jenkins-Smith and Carol Silva, I asked them to join me in the project. They brought their extensive experience in survey research generally, and contingent valuation specifically, to the project. They in turn recruited their colleagues Deven Carlson and Joseph Ripberger at the University of Oklahoma and I recruited my former student Simon Haeder, with whom I had many discussions about the project while I was searching for funding. We contracted with Qualtrics for survey implementation.
After considerable discussion, we decided to use the dichotomous choice/referendum method to elicit willingness to pay (WTP) for a vaccine that would reduce mortality risk from a hypothetical canine influenza threat. That is, respondents are given a random bid (price) for the good being valued (the vaccine) and asked if they would be willing to pay this amount. Variation in the rates of acceptance for different bids allows for a mean WTP to be extracted from the sample. If the sample is representative of the general population, then this method produces the mean WTP we seek.
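As a concrete illustration of how a mean WTP can be extracted from accept/reject responses at varying bids, here is a minimal Python sketch of the nonparametric Turnbull lower-bound estimator. The bids and acceptance rates below are invented for illustration, and this is one common estimator for dichotomous-choice data rather than necessarily the one used in our study:

```python
def turnbull_lower_bound(bids, accept_rates):
    """Conservative (lower-bound) mean WTP from acceptance rates at random bids.

    The acceptance rate at bid b estimates the survival function
    S(b) = share of respondents whose WTP is at least b. The estimator
    credits the probability mass in each bid interval at the interval's
    lower endpoint, which makes it a lower bound on mean WTP.
    """
    # Sort by bid; enforce a non-increasing survival function by clipping
    # (a simple stand-in for full pooling of adjacent violators).
    pairs = sorted(zip(bids, accept_rates))
    surv = []
    prev = 1.0
    for b, s in pairs:
        s = min(s, prev)
        surv.append((b, s))
        prev = s
    # Sum bid * probability mass that WTP falls in [this bid, next bid).
    total = 0.0
    for j, (b, s) in enumerate(surv):
        s_next = surv[j + 1][1] if j + 1 < len(surv) else 0.0
        total += b * (s - s_next)
    return total

# Invented example: four bids with declining acceptance.
print(turnbull_lower_bound([10, 25, 50, 100], [0.8, 0.6, 0.4, 0.2]))  # → 37.0
```

With a representative sample, the same logic applied to individual responses (rather than these made-up rates) yields the population mean WTP the method is after.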
In our base case scenario, the vaccine would reduce the risk from 12 to 2 percent. We also did sample splits for a quantitative scope test (reduction of risk from 12 to 6 percent), a qualitative scope test (suffering rather than no suffering before death), an externality test (vaccination would reduce death risk to other dogs), and a discretionary income prompt. If respondents were answering the elicitation as if it were an economic choice, then the quantitative scope test offering the less effective vaccine would yield a smaller mean WTP, and the qualitative scope test, in which the outcome without the vaccine is worse, would yield a larger mean WTP. We expected the externality test to produce a higher mean WTP if people are at all altruistic, and we expected the discretionary income prompt to help respondents keep in mind that paying for the vaccine would require giving up something else, perhaps leading to a lower mean WTP.
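The arithmetic connecting mean WTP to a value of statistical dog life divides WTP by the size of the mortality-risk reduction. A small sketch under the base-case numbers (the mean WTP figure here is purely illustrative, not a result from the study):

```python
def vsdl(mean_wtp, baseline_risk, vaccinated_risk):
    """Value of statistical dog life: WTP per unit of mortality-risk reduction."""
    return mean_wtp / (baseline_risk - vaccinated_risk)

# Base case: risk falls from 12 to 2 percent. An illustrative mean WTP of
# $1,000 for that 10-point reduction would imply a VSDL of $10,000.
print(round(vsdl(1000, 0.12, 0.02)))  # → 10000
```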
We sought to apply good contingent valuation craft. For example, as people often have difficulty assessing changes in risks, we provided graphics to help convey the magnitude of mortality risks with and without the vaccine. We included a follow-up question to those who accepted bids asking about the certainty of their acceptance so we could adjust the responses to reduce non-commitment bias in our statistical analyses. We also included follow-up questions asking about the plausibility of canine influenza threatening their dogs and of a vaccine reducing the threat.
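One standard way to use such a certainty follow-up is to recode uncertain acceptances as rejections before estimation. A minimal sketch of that recoding step; the 1-10 scale and the cutoff are assumptions for illustration, not necessarily the study's:

```python
def recode_acceptances(responses, cutoff=8):
    """Reduce non-commitment bias: count a 'yes' only if stated certainty meets the cutoff.

    responses: list of (accepted_bid, certainty) pairs, where certainty is a
    1-10 self-rating for acceptors and None for respondents who said no.
    """
    return [bool(accepted and certainty >= cutoff)
            for accepted, certainty in responses]

# Two confident yeses, one uncertain yes recoded to no, and one no.
print(recode_acceptances([(True, 9), (True, 10), (True, 4), (False, None)]))
# → [True, True, False, False]
```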
Beyond common practice, we innovated in two ways to help increase the engagement of respondents with the survey. First, somewhat surprisingly, our Institutional Review Boards allowed us to ask respondents for the names of the dogs on whose behalf they were responding. We anticipated that this would help avoid confusion for respondents who have more than one dog. Further, we could use dogs’ names in questions to help respondents relate the hypotheticals to their own situations. In the subsequent analysis, we were able to compare the distributions of the most common male and female dog names in our sample with those from firms that annually publish such lists to help assess external validity.
Second, we wanted to know how long respondents expected their dogs to live. Expected longevity could directly affect willingness to pay for reductions in mortality risks. It would also be relevant in converting the value of statistical dog life to the value of a dog life year. However, as we were concerned that respondents might have difficulty making such an estimate, we decided to give them a reference point. To do so, we asked for the dog's weight (smaller dogs tend to live longer) and its current age. We then used this information to recover the expected remaining longevity from a life table for dogs and provided it to the respondent in the question asking about expected longevity.
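The lookup described above can be sketched as a small table keyed on weight class and age. All breakpoints and remaining-years values below are invented placeholders; the study drew on an actual canine life table:

```python
# Invented illustrative table: expected remaining years by size class and
# tabulated age (NOT real actuarial values).
REMAINING_YEARS = {
    "small":  {0: 14, 4: 10, 8: 6, 12: 3},   # under ~20 lbs
    "medium": {0: 12, 4: 8,  8: 5, 12: 2},   # ~20-60 lbs
    "large":  {0: 10, 4: 7,  8: 3, 12: 1},   # over ~60 lbs
}

def size_class(weight_lbs):
    """Map reported weight to a size class (cutoffs are assumptions)."""
    if weight_lbs < 20:
        return "small"
    if weight_lbs < 60:
        return "medium"
    return "large"

def expected_remaining_years(weight_lbs, age_years):
    """Look up remaining longevity for use as the respondent's reference point."""
    table = REMAINING_YEARS[size_class(weight_lbs)]
    # Use the largest tabulated age not exceeding the dog's current age.
    key = max(a for a in table if a <= age_years)
    return table[key]

# A 15-lb, 9-year-old dog falls in the small/age-8 cell of this toy table.
print(expected_remaining_years(15, 9))  # → 6
```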
Taking account of all aspects of our statistical results, we concluded that, on average, U.S. residents who viewed their dogs as pets were making decisions affecting their dogs' mortality risks as if they were valuing their dogs' lives at $10,000. That is, for regulatory and other analytical purposes, we recommend this amount as the value of statistical dog life (VSDL). We see this as a starting point for further research. We have hundreds of studies estimating the value of statistical life (VSL); at least a few more focusing on man's best friend would seem to be a good analytical investment. And of course, we shouldn't ignore cats!
We were pleased that the Journal of Benefit-Cost Analysis expeditiously secured constructive referee reports that allowed us to move quickly toward publication. Our research will appear in the next issue as “Monetizing Bowser: A Contingent Valuation of the Value of Statistical Dog Life.” See The Conversation for a non-technical discussion of the study and its implications.
Dave Weimer is the Edwin E. Witte Professor of Political Economy at the University of Wisconsin–Madison. He has served as president of the Society for Benefit-Cost Analysis and the Association for Public Policy Analysis and Management. He is the author of Behavioral Economics for Cost-Benefit Analysis (Cambridge University Press, 2017).