This is my third (and probably last) post about strategic funder initiatives. Specifically, I’m focused on foundations as the funding entity, since that is what I am most familiar with at this point in my career.
The last two posts on this topic were about grantees’ need for technical assistance and ideas about how to pull off a great technical assistance project. I had a few remaining thoughts about outcomes that I’d like to put out there now.
Funding Outcomes
First, I’d like to say that I’m not a huge fan of the word Outcomes when used in the context of grant reporting or funding accountability. I think the original meaning of “outcomes,” at least from a funder’s perspective, was simply a product:
What was accomplished with that money?
How did it further our mission?
Who are the populations served with our money?
Are those populations better off than they would have been without our money?
Update, September 2015: For an entertaining (and slightly jaded) take on what a foundation means when it encourages “outcomes” tracking, check out the short post “Do Foundations Really Care About Outcomes and Evaluation?” by Prentice Zinn, Director of GMA Foundations.
I do not believe that the term Outcomes is most commonly used to indicate a simple product anymore. It has morphed into something larger and more difficult to define as grant makers, advocates, and the media throw the word around willy-nilly. Outcomes sometimes means a set of demographic numbers (we served 2,000 men and 1,500 women last year). Sometimes it means a need identified from information gathered over a certain time period (we know we need extra shelter beds for single men whenever the temperature drops below 40 degrees). There are long-term outcomes (we’re eliminating chronic homelessness), short-term outcomes (we handled 27 foreclosure cases last week), and some nearly branded types of outcomes: United Way likes to use logic models and “shared outcomes.” Whenever I start working with a new client and the term Outcomes pops up during information management discussions, I ask them to explain what an outcome is. Sometimes this makes me seem inexperienced, but that word could mean almost anything.
But I am charging ahead and discussing Outcomes in this post anyway. There really isn’t a better word I can think of for a standardized set of data points that can be compiled and analyzed for the purpose of judging success.
I got the idea for this post from another article published by the Center for Effective Philanthropy, “Room for Improvement: Foundations’ Support of Nonprofit Performance Assessment.” The article argues that foundation-funded grantees would like higher quality, more meaningful performance assessments from their funders. It makes the point well, even though it lacks the detail I always crave and boasts a few too many random pictures of rocks.

(a pretty good picture of my favorite rock)
What caught my eye was the survey finding that, “the majority of nonprofits [surveyed] believe that their foundation funders primarily care about how nonprofit performance information can be helpful to funders, as opposed to how it can also be useful to the nonprofits they are funding.”
Duh
Of COURSE we should structure our year-end grant outcomes report to include measurements that have meaning and can help the grantee organization managers self-assess. It just makes sense. The grantee organizations obviously share our mission and focus. If they did not, we would not be funding them. Therefore it shouldn’t be that difficult to align grant outcomes data with data that the executive director might find useful for internal program monitoring. Some of the year-end grant reports I’ve seen are pretty far off the mark in this regard, which is probably one big reason why the grantee organizations dread year-end reporting. They generate a lot of information that is meaningless to them, just to fulfill a funding obligation.
I don’t object one bit to requiring strict outcomes and holding grantees to high standards. I understand that there will be occasions when a funder needs information from a group of grantees that those organizations would not otherwise track. The reality, however, is that many funders collect a lot of information, both narrative and statistical, that they don’t use and don’t truly need. It’s a double-whammy-knuckle-sandwich move to: a) require outcomes reporting that you’ll do virtually nothing with, and b) collect data that means nothing to the organizations that have to produce it.
Knuckle sandwich talk aside, I don’t believe that funders do this out of spite or a desire to make grantees prove their dedication.
Outcomes Should Evolve
Year-end grant outcomes reports are typically solidified and standardized in the early years of a grant, then never again adequately questioned for continued validity over future grant years. As a funder, why not revisit the outcomes requirements every few years? While it’s nice to have certain foundational outcome measurements that can be compared from one year to the next over long time periods, it is also wise to ask, “why are we collecting this specific piece of information?” If no one knows why or the answer is something like, “we’ve always done it that way”, consider eliminating it.
I Know You’re Busy, But…
this is important. It takes more time and a lot more thought to compose a year-end funding report that produces meaningful outcomes for the grantees. Some surveying and meetings might be necessary, along with advance planning, coordination, and Skype sessions or conference calls. The funder is going to have to do the work to discover which outcome measurements have value to the executive directors and how those measurements line up with its own needs. They won’t match up 100% of the time. In some instances the grantee organizations may not have any internal outcomes measurements and could use collaborative assistance in developing some, rather than having them blindly prescribed. But even if the measurements only match up 50% of the time, the process of collaboration adds a great deal of transparency and understanding to the grant reporting process. Both the funder’s and the grantee organizations’ measurements will inevitably improve. And, with goals and intent aligned through meaningful end products, the byproduct of all of this is a stronger delivery system.
Share
My final brief thought on this, which I know I’ve typed before but am driven to type again, is that funders should distribute their compiled raw data back to the networks that produced it in the first place. I’m not talking about polished annual publications geared toward fundraising. I mean share the aggregated raw data files. Make them available for download by the grantee organizations that want them. (This assumes that you are not collecting any sort of personal identifiers – WHICH YOU SHOULDN’T BE.) Share the outcomes data and encourage others to think about how they might analyze and compare it for advocacy purposes, or to strengthen grant applications to new funders. You might even be inspired to use the data in new and different ways for your own purposes.