7. Charity Research Guide – Part 2



This is the second chapter of my Charity Research Guide series, designed to help you make informed donation decisions. In Part 1, I introduced six key criteria for evaluating charities and provided a downloadable table for tracking your research. Feel free to revisit that chapter for an overview.

In Part 2, we’ll dive into criteria 3 and 4 — examining how to assess a charity’s effectiveness and efficiency.

As we discussed in Chapter 5, determining whether an organization is effective and efficient is essential for identifying which ones will do the most good with our donations. This involves assessing whether the organization is achieving its mission, how impactful its activities are relative to its spending, and how much waste is present in its operations.

3. Effectiveness

According to Canada Helps’ 2024 Giving Report, which highlights giving trends in Canada, 70% of Canadians say they have a lot or some trust in charities. However, about 30% hesitate to donate more due to concerns about whether the funds will be used effectively and efficiently. Overall, this is quite a high level of trust and support for charities. For comparison, only 53% of Canadians trust journalists, and just 34% trust governments. Additionally, 89% of Canadians believe that charities generally improve the quality of life for people.

Given this high level of trust, Canadians may not scrutinize charities closely enough to ensure they are selecting the most effective ones for their donations. We should foster a culture of critical engagement with charities, grounded in data and measurable impact, so that our trust isn't squandered through poor performance.

To do so, we must start with effectiveness – specifically, whether the organization is achieving its mission. Begin by investigating whether the charity publishes the results of its activities. Such information is typically found in Annual Reports, Impact Reports, or similar publications available on the organization's website. These documents should provide statistics and insights into the progress of its programs. If these reports are unavailable, that's an immediate red flag – after all, why would a charity doing good work not want to share its accomplishments?

We may also come across organizations that publish their results but lack key details. This could suggest the charity is not measuring its results effectively. As Peter Drucker said, “If you can’t measure it, you can’t manage it.” Without measuring its impact, how can an organization know if it’s making a difference? Alternatively, the charity may be measuring the data but not transparently sharing it – another serious red flag.

If the results are available and seem robust, the next step is to evaluate whether they report meaningful outcomes or simply outputs. Outputs describe what was done, while outcomes describe what changed – making outcomes far more significant. If you can’t find any outcomes, that’s a red flag. For example, an education charity might report as an output that they provided 500 kids with schoolbooks. While this sounds impressive at first, it doesn’t reflect a meaningful outcome – did the books actually help the kids learn? What if the books were in a different language and the children couldn’t read them?

To evaluate outcomes, charities must take specific actions, such as conducting post-intervention monitoring and evaluation to determine whether the work truly met the needs. This often involves surveys with the beneficiaries to gather feedback. Charities can also establish ongoing feedback mechanisms, allowing recipients to provide continuous input. Additionally, they may reference previous studies that demonstrate the effectiveness of similar interventions.

There is also a gold standard for evaluating impact, though I’ll be honest, you’ll rarely find it: an independent impact evaluation. Most of the results reporting you see on an organization’s website is not independent—it’s typically collected by the organization itself, which introduces bias and the potential for manipulation to highlight the best outcomes. An independent evaluator, however, can be trusted to provide an honest assessment of the program’s real impact. The best evaluations use scientific methods, such as Randomized Controlled Trials (RCTs), to establish causation between the intervention and the outcome. While RCTs are expensive and often impractical, other evaluation methods may be used as alternatives.

Although this gold standard is rarely achieved, we can still look for efforts to establish causation between interventions and outcomes. If a charity provides before-and-after comparisons that highlight how a person’s life changed as a result of the intervention, this is a strong indicator that good methods are being applied. Similarly, comparing the beneficiary to a similar person who did not receive support is also an effective approach.

Returning to our example from Part 1, let’s take a look at the charity True North Aid to assess whether they are effectively achieving their mission. I was able to easily find their 2024 Impact Report on their homepage, suggesting they share these results transparently. After reviewing the report, I found several outputs listed, including:

  • engaged with over 165 communities
  • supported more than 25,000 people
  • provided over 100,000 pounds of supplies
  • completed 24 community initiative projects
  • distributed 201 beds and bedding sets

Overall, these figures give me confidence that the organization is accomplishing its mission. However, they mainly describe what was done, not the actual change created. I’m curious how the organization knows their work has had the intended effect. Digging deeper into the report, I find a few examples of outcomes, such as some select quotes from beneficiaries. For instance, one participant in a hockey exchange program said, “The program not only helped me grow as a player but also as a person. I developed a deeper understanding of different perspectives and made lifelong friendships.” While this is a positive outcome for one person, what about the others? Could the organization have shared this one positive response while overlooking less favorable ones?

Overall, True North Aid provides very limited outcomes reporting. There’s little evidence of surveys or beneficiary feedback, aside from a few selected quotes. I also see no references to previous studies demonstrating the effectiveness of their methods, nor is there an independent evaluation to review.

Let’s input this information into our table.

TRUE NORTH AID

3. Effectiveness
  • Does it measure its impact and publish results?
    → Yes, publishes results in its annual Impact Report
  • Do they report meaningful outcomes, not only outputs?
    → Outputs are impressive and meet mission goals, but outcomes reporting is minimal
  • Are there independent evaluations or studies?
    → No independent evaluations found; minimal evidence of outcomes, especially long-term

4a. Efficiency – impact per dollar

Peter Drucker once said, “Efficiency is doing things right, effectiveness is doing the right things.” Lucky for you—two Drucker quotes in one chapter! In the last section, we covered the “doing the right things” aspect, ensuring an organization’s activities align with its goals. Now, let’s turn to “doing things right.” This means understanding how well tasks are being carried out and how much waste exists.

In Chapter 5, we discussed the concept of impact per dollar and why it’s more meaningful than focusing solely on administrative overheads. Impact per dollar can be measured at either the output or outcome level. For example, if a food bank spends $10,000 to distribute 5,000 kg of food to 1,000 people, the cost is 0.5 kg of food per $1 donated. This is an output-based calculation, as it reflects the activity done, but not the change it created. If, however, 80% of recipients (800 people) reported improved health, the impact per dollar is 800/$10,000, or 0.08 people with improved health per dollar donated – a much more informative metric.
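If you want to run these numbers yourself, here is a minimal Python sketch of the calculation, using the hypothetical food bank figures from the example above (the variable names are my own illustration, not from any charity's reporting):

```python
# Impact-per-dollar sketch using the hypothetical food bank figures above.

spent = 10_000             # total program spending ($)
food_delivered_kg = 5_000  # output: kilograms of food distributed
people_reached = 1_000     # people served
improved_share = 0.80      # share of recipients reporting better health (outcome)

# Output-based: what was done per dollar spent
output_per_dollar = food_delivered_kg / spent      # 0.5 kg per $1

# Outcome-based: what changed per dollar spent
people_improved = people_reached * improved_share  # 800 people
outcome_per_dollar = people_improved / spent       # 0.08 people per $1

print(f"{output_per_dollar:.2f} kg of food per dollar")
print(f"{outcome_per_dollar:.2f} people with improved health per dollar")
```

The same two-line pattern works for any program: divide an output or an outcome count by the spending that produced it.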

Most organizations don’t perform impact per dollar calculations, so you won’t often find them on a charity’s website. We can try to do these calculations ourselves, but we must rely on the organization’s output and outcome reporting and its financial data, which are often incomplete. For small organizations, these calculations are relatively straightforward; for larger ones, it’s much harder, especially if they don’t break down the costs of different programs.

It would be ideal if we could rely on a charity evaluator to do this work for us. One commonly used resource is Charity Intelligence Canada, which analyzes Canadian charities. Unfortunately, this resource falls short when it comes to understanding impact. While they provide a “Demonstrated Impact” rating, it’s a simple 5-level scale based on a methodology that assigns monetary values to outcomes. In my view, this method is subjective and unreliable, requiring too many assumptions. For example, how exactly will they assign a monetary value to educating a child, saving a dog’s life, or saving a human’s life? Different evaluators might assign very different values, and there’s limited research to support these decisions.

While Charity Intelligence is a valuable resource for financial information on charities, I cannot recommend it for impact evaluation, although you may find some interesting output statistics there.

Charity Navigator, an American resource, does a much better job of quantifying impact. They actually conduct some impact per dollar analysis, as described above. For example, on their American Red Cross evaluation page, they note, “This program was found to avert a DALY (Disability-Adjusted Life Year) for less than 0.01% of U.S. GDP per capita in 2019.” A DALY represents one year of healthy life, so this means one year of healthy life can be saved for 0.01% of U.S. economic output per person – about $7 USD. While Charity Navigator is a great tool for researching U.S. charities, it doesn’t evaluate Canadian charities, so we’re largely on our own.
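As a quick sanity check on that conversion, here is the arithmetic in Python. The GDP figure is my own approximation (2019 U.S. GDP per capita was roughly $65,000), not a number from Charity Navigator:

```python
# Rough check of the Charity Navigator figure quoted above.
gdp_per_capita_2019 = 65_000  # USD, approximate assumption
threshold = 0.0001            # 0.01% expressed as a fraction

cost_per_daly = threshold * gdp_per_capita_2019
print(f"Cost to avert one DALY: under ${cost_per_daly:.2f}")  # ~$6.50, i.e. about $7
```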

Let’s return to the True North Aid example and calculate impact per dollar. In their impact report, I see they spent $391,268 to ship 100,000 pounds of supplies to remote communities. While this is an output, not an outcome, it’s still useful to calculate impact per dollar. This equals 0.26 pounds of supplies delivered per dollar donated.

While this number alone doesn’t provide much insight, I can compare it to other organizations doing similar work or track the charity’s efficiency over time. For example, if another organization delivers supplies at a rate of 0.5 pounds per dollar, it seems more efficient. However, if feedback reveals that most items were non-functional, their efficiency at delivering supplies doesn’t reflect the true outcome of the intervention. This is why calculating impact per dollar based on outcomes is so important.

I can also compare True North Aid’s activities to the previous year. In 2023, they delivered 0.13 pounds of supplies per dollar, so they improved to 0.26 in 2024, doubling their efficiency in just one year. However, we must be cautious with these comparisons. If they received free transportation donations in 2024, this would reduce costs and skew the efficiency calculation. Such accounting details are often not transparently shared or easy to find in financial reports.
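Here is a small sketch of that year-over-year comparison. The 2024 inputs come from the report figures quoted above; the 2023 inputs aren't given in the text, so I use the 0.13 figure directly, and the function name is my own:

```python
# Year-over-year output efficiency for a supplies program.

def pounds_per_dollar(pounds_delivered: float, spending: float) -> float:
    """Output-based efficiency: pounds of supplies delivered per dollar spent."""
    return pounds_delivered / spending

eff_2024 = pounds_per_dollar(100_000, 391_268)  # ~0.26 lb per $
eff_2023 = 0.13  # as stated in the text; underlying inputs not shown there

print(f"2023: {eff_2023:.2f} lb/$  2024: {eff_2024:.2f} lb/$")
print(f"Change: {eff_2024 / eff_2023:.1f}x")
# Caveat from the text: donated (free) transportation in one year would
# lower reported costs and inflate this ratio without any real gain.
```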

TRUE NORTH AID

4a. Efficiency: Impact per dollar
  • Do they report impact per dollar figures?
    → No, impact per dollar is not provided
  • If not, can we calculate the estimated cost per outcome or output?
    → For the supplies program, they delivered 0.26 pounds per dollar, up from 0.13 the previous year
  • How do they compare to similar organizations?
    → Could not identify a similar program at another organization to compare against

4b. Efficiency – overheads

The final criterion I’d like to consider in this chapter is overheads. While I believe impact per dollar is a more accurate measure of an organization’s efficiency, overheads can still offer some insights.

As introduced in Chapter 5, overheads refer to the percentage of funds spent on administrative costs (salaries, office expenses, fundraising, etc.) rather than direct charitable activities. While overheads can indicate waste, they’re not inherently bad. If overheads contribute to increasing impact, they can be a worthwhile investment. Typically, charitable overheads range from 10-35%, or less for volunteer-run organizations. Anything outside this range could be a red flag.

The challenge with overheads is that charities use varying accounting practices, making it easy to bury administrative costs in other categories. As Mark Blumberg explains, such figures “usually hide a lot more than they reveal,” so accurately determining the true overheads can be difficult.

That said, there are a few things we can watch for. Firstly, we can check if the charity transparently provides overhead information, often found in their annual report or financial statements (we’ll dive deeper into financial statements in Part 3!). These costs may be labeled as “overheads,” “administrative fees,” or broken down into categories like “staffing” or “fundraising.” If no overheads are disclosed, it’s a red flag, as transparency is essential in charitable work.

We can also use Charity Intelligence, which provides good financial data, including an “overhead” figure. However, I don’t rely on their overhead figure because it’s based on a revenue ratio, showing overheads as a percentage of revenue. For example, if a charity simply stops its program spending and holds onto the donations without spending them, the revenue ratio would remain unchanged, despite the charity’s reduced effectiveness. A more meaningful approach is using an expense ratio, which shows overheads as a percentage of expenses. Charity Navigator uses this method, and it better reflects how funds are being used.
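To make the difference concrete, here is a minimal sketch with hypothetical numbers of my own, showing how a charity that halts its programs and sits on donations looks fine under a revenue ratio but alarming under an expense ratio:

```python
# Revenue ratio vs. expense ratio, with hypothetical figures.

revenue = 1_000_000     # donations received this year
overhead = 150_000      # administrative spending
program_spending = 0    # charity halted programs and is holding the funds

total_expenses = overhead + program_spending

revenue_ratio = overhead / revenue         # 15% -- looks unremarkable
expense_ratio = overhead / total_expenses  # 100% -- every dollar spent was overhead

print(f"Overhead / revenue:  {revenue_ratio:.0%}")
print(f"Overhead / expenses: {expense_ratio:.0%}")
```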

Once we know the overhead, we can check if the charity justifies these costs. For example, if staffing overheads are high, do they explain why so many staff are needed to deliver the service? Or, if there are significant travel costs, do they explain the necessity of this travel? On the flip side, I would also be skeptical of very low staffing costs, as this could indicate that most of the work is being done by volunteers. While this may be fine for simple activities, like a bake sale, it could be a concern for more technical work that requires professional services.

One other aspect I recommend checking is fundraising costs. These can become excessive, leading to significant waste if they don’t generate a good return on investment. For instance, in 2013, reporting by the Tampa Bay Times showed that hundreds of American charities were mismanaging funds by grossly overspending on fundraising solicitation. The 50 worst offenders spent less than 11% of donated funds on direct aid to beneficiaries; the worst, the Kids Wish Network, spent a staggering 86% on fundraising costs and only 2.5% on direct aid.

To avoid donating to such organizations, we should check the fundraising expense ratio, a metric that shows how much is spent on fundraising as a percentage of the revenue it generates. The CRA has guidelines that set thresholds for these fundraising ratios – a ratio under 35% is acceptable, 35% to 70% may trigger further review, and anything over 70% raises serious concerns requiring justification. At the 35% threshold, an organization spends $0.35 to raise $1.00 in donations. When this number is excessively high, most donors would be disappointed to learn how much of their contribution goes to fundraising rather than directly supporting the cause.
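A small helper like the one below can classify a charity against those bands. The thresholds come from the CRA guidance as summarized above; the function itself is my own sketch:

```python
# Classify a fundraising ratio against the CRA guideline bands cited above.

def cra_fundraising_flag(fundraising_costs: float, funds_raised: float) -> str:
    ratio = fundraising_costs / funds_raised
    if ratio < 0.35:
        return f"{ratio:.0%}: generally acceptable"
    elif ratio <= 0.70:
        return f"{ratio:.0%}: may trigger further review"
    else:
        return f"{ratio:.0%}: serious concern, requires justification"

print(cra_fundraising_flag(0.30, 1.00))  # spending $0.30 to raise $1.00
print(cra_fundraising_flag(0.86, 1.00))  # roughly the Kids Wish Network level
```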

Turning back to our True North Aid example, their financial statements show $259,159 in administrative expenses out of $2,436,855 in total spending, resulting in an overhead expense ratio of 11%. However, it’s tricky to get a clear picture. They also report $535,158 in other “administration” costs under program expenses, likely related to staffing costs tied directly to programs. Shouldn’t these also be considered part of the overhead? There are also travel costs for staff—should these be included? If I factor both of these in, the overhead ratio jumps to 39%.
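Here is how those bounds fall out of the figures quoted above. The staff travel costs aren't itemized in the text, so only the 11% floor and the intermediate step can be reproduced here; adding travel is what pushes my estimate to roughly 39%:

```python
# Reproducing the True North Aid overhead range from the figures above.

total_spending = 2_436_855
admin_expenses = 259_159
program_admin = 535_158  # "administration" reported under program expenses

narrow = admin_expenses / total_spending                              # ~11%
with_program_admin = (admin_expenses + program_admin) / total_spending  # ~33%

print(f"Narrow overhead ratio: {narrow:.0%}")
print(f"Including program-level admin: {with_program_admin:.0%}")
# Staff travel costs (not itemized here) raise the estimate to roughly 39%.
```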

Let’s input this into our table.

TRUE NORTH AID

4b. Efficiency: Overheads
  • Are they transparent about administrative costs?
    → Overheads are not stated directly, but financial statements are published, allowing the public to calculate them
  • Do they justify their overhead spending?
    → No explanation or justification is provided
  • Are they reasonable, and not extremely high or low?
    → Overhead falls in the 11-39% range, which is within reason

In this chapter, we’ve outlined the various aspects I would recommend researching about a charity’s effectiveness and efficiency. As you can see, it’s not always a straightforward task, and it’s perfectly fine if we can’t gather complete information – this is actually quite common. However, even with limited data, we’ll be in a much better position to make informed decisions.

In the next chapter, we’ll focus on the final two criteria: finances and reputation/reviews.
