Program Evaluation for Fundraisers: An Interview with Maggie Miller
Maggie Miller is a Denver-based consultant who has worked in the field of program evaluation for nearly 20 years. She is known for her creativity and her ability to maintain positive relationships with groups from different cultures during longitudinal studies. She is an activist for neighborhood schools and has a special interest in helping schools tune into the voices of students and families as they make programmatic decisions.
In this interview you’ll learn how to think about reporting to different types of stakeholders across your fundraising strategies. You’ll also hear how to put together your quarterly impact report and what to do with unfavorable data. Ultimately, you'll learn about the relationship between program evaluation and revenue growth.
What is program evaluation? What are some of the different ways nonprofits evaluate their programs?
There's really only one way to find out whether an organization is achieving its goals through its programs, and that is to do program evaluation. The alternative is to sit back and say, "I think we're doing a good job," without any evidence to prove it.
Different ways of doing program evaluation include surveys with participants in the program, interviews with them, or focus groups. What they all have in common is the purpose: to collect data on a program's progress toward achieving its goals.
At Seed, we divide the fundraising strategies into five main categories: grant writing, major giving, annual giving, events, and corporate sponsorships. Can you talk a little about how program evaluation supports these strategies separately? What information would you report to a grant maker that might be different from what you would report to a major donor?
When an organization writes a grant, it is usually writing about a specific program, because most foundations don't necessarily want to support overhead, operating costs, or staff time. So the grant usually comes in the form of "please support this program."
So obviously, if I were a grant-maker, I would say, "well, what's the point of the program, what are you trying to achieve? Then prove to me that you're achieving this through the program."
I don't want to just see you handing out bag lunches and telling me that you've handed out a lot of bag lunches. Why are you handing out the bag lunches? Is it so that the kids at the school can concentrate better in their morning classes? I would want to hear more about that. So doing program evaluation to demonstrate outcomes is really important for grant writing.
My sense about major donors is that they have a loyalty to the organization itself, and that the organization itself has credibility with them. So for major giving, they want to hear what the organization is up to, but there's enough belief that the organization is achieving its outcomes that the report may not need to be program specific.
Annual giving, I would imagine, works the same way. There's that loyalty to the organization. I'm personally an annual giver to a few organizations, on a very small basis, just because I like and believe in them, and I don't necessarily need to see a report.
Events are funny, because as a person who goes to events, what I notice is that I get some data and then I get some real heartwarming stories. In pure program evaluation, we don't cherry-pick. We report the good, the bad, and the ugly, because frankly it's the bad and the ugly that can help us improve our programs. When you're talking about a major event, it's OK to cherry-pick and say, "here's what we've been up to in terms of the number of people we served or the number of participants who came to our programs," and then to tell a heartwarming story, or even have a speaker talk about how the program affected them. It's OK to cherry-pick in the delivery, but it's good to have program evaluation so you have something to pick from.
Corporate sponsorships, I don't know too much about that, but I imagine it's a combination. I imagine that some corporations are really all about supporting a particular program and others might be about supporting the organization in general.
From your perspective, what type of reports should organizations generate for their stakeholders outside of that annual report? How would you counsel an organization to put together a quarterly impact report?
I think of evaluation as an ongoing process that starts when a program is a twinkle in the eye of the program staff. Then, part of planning the program is thinking about what the program is supposed to accomplish. Then, measuring those accomplishments and progress toward those accomplishments every step of the way. Then, at key critical points like at the end of a programmatic year, harvesting data about outcomes.
An impact report is interesting, because how can you make impact in three months? What I would recommend is to use the data you've been harvesting all along the way: the number of participants in the program or goods delivered, that is, the "outputs" of the program, or even the little short-term outcomes. Every three months, you're really talking about little milestones along the way toward ultimate impact.
I'll give an example from a client of mine who is working with staff and teachers at a school, who are in turn working with students on mindfulness, social-emotional learning, and resilience. Obviously a kid's life isn't going to be transformed month to month, but I can tell you that every couple of months, we've been able to harvest some of the data from interviewing teachers and see amazing milestones along the way. So if a teacher says, "when I'm rushing into the classroom, having just come off some other experience, I've got my mind going a million miles an hour, and I've learned in this program how to calm my mind. I can tell that what I bring to the kids then is more calmness, and it shows up in the way they show up with me." That's only progress toward the bucket of gold at the end of the rainbow, but it's impactful. So sharing those stories of progress can be very powerful.
Have you ever had clients report on struggles that they're having? Or report on metrics that aren't going so well?
Yeah, it depends on the audience. You really want that transparency. My favorite had to do with a program for homeless people, and one of the activities was handing out bus tokens. They were learning that the women in the program were selling the bus tokens for money. (They didn't know what the women were using the money for.) That was very necessary for them to know. They really needed to think about how that related to what they were ultimately trying to achieve. Did they want to change their daily activities to be better aligned with what they wanted to achieve? So in terms of reporting to themselves for learning, that was really important.
Did they have to talk about that at their big event, or to a major donor or a grantor?
Absolutely. Because it demonstrates that they're a learning organization and willing to learn from data.
Have you seen organizations raise more money after they've gone through a process of clarifying their program outcomes or really doing any of this evaluation work with you?
Good news: I was actually just talking to a former client this morning. It was about a year ago that we really got clear on their outcomes, how they were communicating them, and which things were not their outcomes. We looked carefully at their program activities to make sure they were aligned (this is called a logic model), then we made sure that the data they were gathering related to their outcomes. Then, just very functionally, did they have their spreadsheets set up so they could dump their data in on a regular basis?
Then, even though I may have seen them socially since, I didn't know how things had turned out; I just hoped it had all worked out. This morning I learned that they just received a ginormous multi-year grant from a major foundation in Denver. They let me know that they felt it was directly related to the fact that they could make a clear case to the foundation about what their outputs, outcomes, and impact were.
By the way, I'm slipping into "evaluese" here, so let me define those terms:
Outputs are the immediate results of the activities: for example, how many kids received a free lunch.
Outcomes are what's so great about receiving a free lunch: for example, being better able to concentrate in class.
And impact is the effect on the world of so many kids having received a free lunch and being better able to concentrate in class.
How does an organization get in touch with you?
I always say to anybody I meet, including the students in my evaluation classes, that I'll have coffee for free with anyone. My belief is that no matter where you are in your lifecycle as an organization, there are things you can do evaluatively that can be helpful. Sometimes the belief is that an organization isn't ready for evaluation activities because, say, it doesn't have a strategic plan. However, sometimes it can collect data that can inform that strategic plan.
Or let's say they have a strategic plan, but they're not really sure the program is even going in the right direction. There's an approach called developmental evaluation, where the evaluator works as a thought partner with the key staff and stakeholders. It involves frequent collection of data and very informal reporting. And the reporting isn't reporting out; it's a conversation about what we're going to do next with the data we've collected so far.
So yes, people can find me at maggiemiller.org and I would love to have coffee, or even just a phone call!