What mortality or injury rate is acceptable under the safety standards for our roadway infrastructure? For our cars? For the products we use every day? For the homes and buildings we occupy? For the toys we let our children play with?
With the recent introduction of Data.gov and Recovery.gov under the Barack Obama Administration, and of the Program Assessment Rating Tool (PART) on ExpectMore.gov under the George W. Bush Administration, paired with advances in our technological capacity to quantify and measure results in just about every facet of our lives, I would like to challenge the public and nonprofit sectors to do more.
To many, it seems, measurement is equivalent to raw numbers: deriving meaning from the cases we can count through quantitative analysis. How many widgets did the factory make in a given time frame? Or, put another way, how many unwed mothers completed the program, raised themselves out of poverty, and no longer need welfare? To most, the answer to this production-line type of question is all they need in order to make a policy decision or to vote, up or down, on sustaining the program’s existence.
I question whether the program is achieving its intended outcomes. Do the women who have “successfully completed” the program remain self-sufficient? For how long? How has the program changed the lives of its participants? Is society better because the program exists? Answering these questions requires a series of qualitative inquiries that are time consuming and often too unwieldy to aid the policy maker in forming and making a given decision. As such, many shy away from these types of studies.
As an employee in the nonprofit sector, I find this a daily occurrence and a frequent topic of conversation, whether the subject is new media, return on investment (ROI), or any of a myriad of other things. Many nonprofit executives, and often those in a nonprofit’s development department, are interested in the “bottom line.” However, there is more to a nonprofit’s existence than pure survival. Nonprofits seek to make a change in the world: to mitigate some problem in society, such as poverty, disability, homelessness, the lack of clean drinking water, or the need for homes for needy families. Perhaps this is best discussed using a theory.
Burt Weisbrod’s heterogeneity theory of public goods is a purely economic model (Anheier, pp. 120-121). He distinguishes among goods that are public, private, or a combination of the two known as quasi-public goods (Anheier, pp. 117-121). Within this discussion, what identifies a good as public or private is whether it has the characteristics of excludability and rivalry (Anheier, p. 117). A pure public good is non-excludable (no person can be excluded from using it; think of swimming in a public river) and non-rival (one person’s use does not consume another’s share or make the good any less available; think of breathing air). A pure private good is exactly the opposite: excludable and rival. Nonprofits fall somewhere in the middle, becoming “gap-fillers” when a good is not purely excludable or rival (Anheier, p. 121). A quasi-public good can be non-excludable and rival, or excludable and non-rival (Anheier, p. 121).
Nonprofits seek out and attack the difficult-to-measure problems in society, and that is one of the main reasons I am so attracted to them. Sure, the government or a for-profit can attack a given problem, but a “market failure” occurs where neither steps in. Market failure occurs due to a “lack of perfect competition” for a good, where efficiencies are lost in the process of delivery (Anheier, p. 119). Either the problem does not occur in numbers significant enough for the government to tackle it, or acting on the problem cannot generate a profit, and so the private sector disregards it. At any rate, what is most problematic for me is the way in which we measure outcomes and outputs.
I agree with Weisbrod’s statement that “The danger is that easily measured outputs or outcomes will be measured while others remain unmeasured and, in effect, valued at zero. Resources will then be misallocated, too few going into the provision of such subtle outputs as tender loving care in a nursing home, appreciation of art and music, [and] education in cultural values.”
Don’t let this quote fool you, however. Nonprofits are just as susceptible as the private and public sectors to seeking out what is most readily and easily measurable. For instance, using new media: how many Facebook fan page or Cause supporters do we have? How many hours do we spend monitoring each network daily? How many times is a given e-mail message forwarded, a given link clicked, a given message opened? Do these figures truly measure the intended outcomes of our activities, or are they merely outputs, the products of production? Do for-profits only measure the number of widgets, or do they look to the end game of the profit they derive from making the product? What about the context of the conversation and the engagement with fan page and Cause supporters? What constitutes “tender loving care,” to use the quote above, and what level of TLC will a CEO or nonprofit board consider acceptable without risking the future survival of the organization?
The answers to these questions are, and always will be, complex, but they are what we should pursue when we measure our results, so that we can learn from them and derive meaning from them. Rather than reaching for our measures only when a decision presents itself, we should be learning from them continuously so that we can improve a given program, modify it, or perhaps merge or discontinue it so that others might survive. The point is this: we should seek more than numbers when presented with data.
Consider this a treatise, if you will, that measurement is more than numbers, facts, and figures. Can we derive something meaningful from the data collected? The answer is in the eye of the beholder, and for me it is something worth questioning whenever I am presented with data, or with a report of any kind that tries to establish causality by claiming a given intervention had an impact.
Works Cited
Anheier, Helmut K. Nonprofit Organizations: Theory, Management, Policy. New York: Routledge, 2005.
A fraternity brother of mine, a member of the SigEp Feds, brought a recent article to my attention regarding new steps the Office of Management and Budget is pursuing in the realm of performance measurement and program evaluation. You can read it here: http://bit.ly/25NIdK
What do you think?
I personally like the idea of pragmatism when it comes to evaluation and measurement. Programs are not all alike, and they should not all be measured the same way. Filling in a check box yields nothing if it does not also render something meaningful from the data.