Volunteer Impact Report
IndustryView | 2014
“Measuring volunteer impact” refers to any attempt to quantify how much the work performed by volunteers creates economic or social change. To find out what metrics, indicators and data collection methods nonprofits use to measure volunteers’ impact on their organizations’ outcomes, we partnered with VolunteerMatch to survey 2,735 nonprofit professionals around the globe. Here’s what we found.
Fifty-five percent of respondents said that their organization collects data—quantitative, qualitative and financial—to measure the overall impact volunteers have on its programs. In most cases, measurement occurs monthly (32 percent) or annually (24 percent).
Some survey respondents say they have a formal process for collecting and analyzing volunteer data. Insights are used to optimize programs, volunteer recruitment strategies and the delivery of services to clients.
Others employ a more casual approach, collecting numbers for only one or two data points—such as the number of hours worked by a volunteer over a period of time—to get a general sense of their volunteers’ impact.
Of the 45 percent of respondents who reported that their organizations don’t measure volunteer impact, 34 percent attributed this to a lack of resources and tools, 29 percent cited a lack of skills or knowledge and 25 percent said their organization’s staff lacks the time to do so.
Only 6 percent of respondents said they didn’t measure impact because the data was not important—whether because their nonprofit engages too few volunteers to make it worthwhile, because its managers already understand their volunteers’ impact without a formal assessment or because leadership doesn’t care to find out.
Diane Knoepke, managing director of client leadership for IEG Consulting, says that “measurement is an increasingly hot topic, so it’s a great time to gather tools and resources about measuring impact. The key is to discern the most important outcomes an organization wants to drive, and then build simple, realistic solutions to measure them.”
The majority of respondents whose organizations measure volunteer impact say the data is useful—giving leaders insights that inform decisions to improve the quality, reach and outcomes of volunteer programs and to increase funding.
Thirty-four percent found volunteer impact data to be “very useful,” while 43 percent called it “somewhat useful.” Twenty-two percent of respondents were either neutral or unsure of the data’s usefulness. Only 2 percent said impact data is not useful at all.
Most respondents reported gaining at least one benefit from collecting and using volunteer impact data. Nineteen percent said that their organization used the data to make adjustments to the volunteer program itself, which resulted in higher volunteer recruitment and retention rates.
One respondent explains that when their organization began measuring impact, it found ways to adjust how prospective volunteers’ skills were evaluated, which led to more relevant task assignments—thereby increasing volunteer engagement and retention.
Another respondent says that a formalized process for collecting volunteer data resulted in the organization finding “previously unidentified volunteer hours,” improving its overall reporting of activities.
Additionally, 18 percent of respondents say that their organizations used volunteer impact data to improve program outcomes. For instance, Pfizer’s Global Health Fellows (GHF) program recruits employees to use their expertise to improve health conditions in developing countries by volunteering through partner non-governmental organizations (NGOs). Pfizer conducted surveys and interviews and collected reports from its volunteers—known as “Fellows”—to measure their impact on the NGOs’ programs.
In one case, GHF found a Fellow’s work “increased the number of beneficiaries [served] from approximately 275 to 600[, a 118 percent gain].”
Meanwhile, 17 percent of respondents reported that their organizations obtained more funding because the impact numbers motivated funders to give. Indeed, a 2014 Software Advice survey found that 60 percent of individual donors want proof that a nonprofit is making a positive impact before making a second donation. Because volunteer work factors into the ultimate impact equation, showing individual and major donors how volunteers themselves contribute to mission advancement can influence donation decisions.
In another example, a nonprofit professional explained that their organization “wanted to measure the impact volunteers had made in people’s lives; [in the lives of] the volunteers themselves; in the [lives of the] clients that we had worked with and helped … [Measuring this impact] helped us get funding from the Big Lottery Fund, because quantitative figures demonstrated to them who was actually benefiting.”
Finally, a less common but noteworthy benefit: 6 percent of respondents said volunteer data led to the creation of paid positions.
One respondent used volunteer impact data “to convince leadership to hire a full-time volunteer coordinator to directly manage the volunteer program, [which has] over 1,000 active volunteers across a 600-mile range.” The data validated the need to approve the added expense.
In another case, volunteer data persuaded a nonprofit’s leaders to hire an additional staff member, which increased its capacity to handle more projects.
Forty-five percent of respondents used at least one indicator to measure how volunteers impact programs. Fewer than 1 percent used seven or more.
Respondents identified the “dollar value” of a volunteer’s time as the most popular metric, yet said it was only the fourth most effective for measuring impact.
The most effective metrics, according to 98 percent of respondents, were project outputs—such as the number of meals served, books distributed, children who were aided in finding stable homes or medical records entered into a database—as well as testimonials from service recipients (97 percent), which contain qualitative data that can indicate impact.
Another popular indicator, according to 83 percent of respondents, was proof of progress towards mission goals—for example, the amount of water saved due to conservation programs in which volunteers played a key role, or an increase in the number of residents using parks that were improved by volunteers.
Knoepke says “indicators of actual results—problems solved, opportunities created, needles moved—should be the highest priority. I would also add ‘hours of staff time created’ and ‘hours of staff time equivalencies earned’ to the list, as those are often intermediate steps to impact.
“For example,” she explains, “if a nonprofit program staff member spends 10 hours out of her 50-hour workweek on scheduling follow-ups with beneficiaries, [ask,] ‘Is there an opportunity to free up that time by leveraging volunteer support?’ If so, that makes it possible for that staff member to spend 20 percent more of her time on higher-impact activities.”
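Knoepke’s staff-time arithmetic above can be expressed as a simple ratio. A minimal sketch (the function name is ours, and the numbers are just her example’s):

```python
def staff_time_freed(task_hours_per_week, total_hours_per_week):
    """Fraction of a staff member's workweek freed if volunteers absorb a task."""
    return task_hours_per_week / total_hours_per_week

# Knoepke's example: 10 hours of follow-up scheduling in a 50-hour week
share = staff_time_freed(10, 50)  # 0.2, i.e., 20 percent of the week freed
```

The same ratio works for any task a volunteer could absorb, which makes it easy to compare candidate tasks by how much higher-impact staff time each would free up.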
Twenty-four percent of respondents in our sample used a single method of data collection to obtain impact data, while 5 percent employed six or more methods.
Of the various data collection methods, direct observation was the most popular—used by 84 percent of respondents—and was also considered the most effective method by 97 percent of respondents.
“We ask staff to evaluate the impact of volunteers: to [gauge] the success of the volunteer program itself [and the impact on] each program where volunteers are one of the ‘inputs,’” says one respondent.
Volunteer surveys—which gather feedback directly from volunteers regarding their experiences and activities—were the second most popular data collection method, with 84 percent of respondents reporting they used them. Beneficiary surveys, which collect data from the recipients of the nonprofit’s services about their experience interacting with volunteers, were next, at 70 percent.
Surveys were not only popular but also highly rated: 95 percent of respondents said volunteer surveys were “very effective” or “somewhat effective,” and 94 percent said the same of beneficiary surveys.
Ninety-four percent of respondents also reported that using software applications to automate data collection and analysis was valuable for organizing and sharing information.
“Volunteers submit documentation after each patient visit … into our electronic medical records system so we can easily run reports,” one respondent says.
Another respondent explains that their coordinators use a custom-built volunteer data system for tracking information. Yet another says data and notes about volunteer activity are stored in their constituent relationship management (CRM) system, where it can easily be retrieved for reporting purposes.
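The reporting these respondents describe boils down to aggregating logged activity records. A toy sketch of the kind of report such a system might run (the record fields are hypothetical, not any specific CRM’s schema):

```python
from collections import defaultdict

# Hypothetical activity log, as a volunteer-tracking system might store it.
activity_log = [
    {"volunteer": "A", "program": "meal service", "hours": 4.0},
    {"volunteer": "B", "program": "meal service", "hours": 2.5},
    {"volunteer": "A", "program": "tutoring", "hours": 3.0},
]

def hours_by_program(log):
    """Aggregate logged volunteer hours per program for reporting."""
    totals = defaultdict(float)
    for entry in log:
        totals[entry["program"]] += entry["hours"]
    return dict(totals)

report = hours_by_program(activity_log)
```

Once records are captured consistently, the same log supports the other reports mentioned earlier, such as hours per volunteer or totals over a funding period.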
Our sample was composed primarily of respondents representing small, U.S.-based nonprofit organizations. A combined 39 percent engage 100 or fewer volunteers each year; 30 percent engage between 101 and 500; 12 percent engage between 501 and 1,000; and a combined 15 percent engage 1,001 or more.
Thirty-nine percent of respondents represented nonprofits that earn less than $1 million in annual revenue, and a combined 72 percent reported having 100 or fewer paid employees.
Respondents from human services nonprofits accounted for 35 percent of our sample, with the next largest segments attributed to health care (15 percent) and education (15 percent).
The British mathematical physicist and engineer Lord Kelvin said, “If you cannot measure it, you cannot improve it.” This is true not only for math and science, but also for nonprofit management.
Knoepke confirms this, saying, “Few nonprofit leaders want to spend their time measuring impact; they want to spend their time creating and facilitating impact. But we well know that what goes unmeasured often goes undermanaged.”
It’s common practice to add up volunteer hours and multiply that total by the Independent Sector’s estimated hourly volunteer rate to calculate the monetary value of volunteer activities. But this figure only scratches the surface of volunteers’ true impact.
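As a minimal sketch of that common calculation (the hourly rate below is a placeholder for illustration; the Independent Sector publishes an updated estimate each year):

```python
# Standard volunteer-value calculation: total logged hours times an
# estimated hourly rate. The rate here is assumed, not the official figure.
ESTIMATED_HOURLY_RATE = 23.07  # placeholder rate in USD

def volunteer_dollar_value(hour_logs, rate=ESTIMATED_HOURLY_RATE):
    """Return the estimated monetary value of the logged volunteer hours."""
    return sum(hour_logs) * rate

# Example: one month of hour logs from three volunteers
total_value = volunteer_dollar_value([40, 12.5, 7])
```

As the report argues, this number is a useful baseline for funders, but it says nothing about outcomes; it should complement, not replace, outcome-based indicators.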
“[What’s] key here is not to think about ‘measuring volunteer impact,’ but rather, to think about ‘measuring social or business impact’—and then [to track] the extent to which volunteers are a factor in the impact created, and [use] that information to design solutions that help their contribution grow,” says Knoepke.
A lack of tools and a lack of knowledge are the greatest barriers to measuring impact. Fortunately, both can be overcome.
In an article on implementing a beneficiary feedback program, we noted that there are many free and low-cost survey tools intuitive enough for even the least tech-savvy employees. These same tools could be used to collect feedback from volunteers (and anyone who interacts with them).
Additionally, managers can collect feedback simply by asking for it in person, over the phone or via email, and can then add context to the feedback using quantitative and financial data.
Finally, nonprofits that are short on time can engage volunteers to implement and run a measurement program themselves—or, if the budget allows, can hire a third-party consultant that specializes in impact measurement activities.
Knoepke cautions, however, that “measurement is rarely an effective ‘volunteer project’—but participating in tracking is something they can certainly do.”
To collect the data in this report, we conducted a seven-week survey consisting of 14 questions and gathered 2,735 unique responses from nonprofit professionals worldwide. We emailed survey invitations to nonprofit software buyers who contacted Software Advice for guidance in their software-selection process, and we posted the survey on social networking sites, including Twitter, LinkedIn and Google Plus.
Additional responses to the survey were collected by our partner, VolunteerMatch, with whom Software Advice has no financial relationship.
Finally, two charts in this report (“Top Volunteer Impact Indicators” and “Top Methods for Collecting Volunteer Impact Data”) include percentages that are much lower than the surrounding data. This is because “qualitative constituent feedback,” “volunteer retention rate” and “formal reports” were not included in the survey’s multiple-choice questions, but were instead write-in responses. We have included them in these charts because a significant number of write-in responses cited these indicators and methods.
About Software Advice
Software Advice™, a Gartner company, is a trusted resource for software buyers. We provide detailed reviews and research on thousands of software applications. Our team of software advisors provides free telephone consultations to help buyers build a shortlist of systems that will meet their needs. Software Advice is headquartered in Austin, Texas, and has been named a Top Workplace by the Austin American-Statesman.
VolunteerMatch believes everyone should have the chance to make a difference. As the Web's largest volunteer engagement network, serving 100,000 participating nonprofits, 150 network partners and 12 million annual visitors, VolunteerMatch offers unique, award-winning solutions for individuals, nonprofits and companies to make this vision a reality. Since its launch in 1998, VolunteerMatch has helped the social sector attract more than $5.4 billion worth of volunteer services. For more information, visit www.volunteermatch.org.