The Best Software Selection Tactics for SMBs

Published by Software Advice on November 23, 2015

There are many tactics small businesses can, and should, employ when embarking on a software selection project. Recently, Software Advice surveyed software buyers across multiple industries to determine which of 14 software selection tactics for SMBs they found to be most effective. This report outlines what we discovered.

Key Findings

  1. Checking and contacting vendor references—meaning getting feedback from real customers—is the most effective tactic for evaluating software.
  2. Although 71 percent of buyers involved end users in the selection project, our research found that this tactic is actually the least effective.
  3. The most popular tactic, assessing processes affected by the software, actually has a relatively low impact on project outcomes and satisfaction.

Software Selection Tactics for SMBs: The Success Quadrant

We deemed tactics as being “most effective” if they met two criteria:

    1. They had a high impact on the outcome of the project (those who applied the tactic experienced different outcomes than those who didn’t). 
    2. They led to relatively high rates of satisfaction with the systems buyers ultimately selected.

Conversely, a tactic was deemed ineffective if people who applied it were actually more dissatisfied with their software selection than those who didn’t.

To help buyers understand this data, we created a quadrant that provides a visual of where each tactic falls on the “impact” and “satisfaction” spectrum. The numbers on the x-axis (the horizontal line) represent the percentage of people who applied a given tactic and rated their satisfaction a “9” or “10” on a 10-point scale (10 being “extremely satisfied”).

The numbers on the y-axis show the impact score: a measure of how different the outcomes were between people who did and did not apply the tactic (for an expanded description of this score, see the Methodology section).

Based on these values, tactics shown in the upper-right quadrant were the most effective, because they had both a high impact on the outcome of the project and led to relatively high satisfaction.

The tactic in the bottom-left quadrant was the least effective because it had a negative impact: buyers who applied the tactic were more unhappy than those who didn’t.


While the tactics in the top-left quadrant did impact the outcome of the project, they were deemed “neutral,” because they didn’t result in extremely high rates of satisfaction (like those tactics in the green quadrant).

Finally, if a tactic had a negative impact on the project outcome (buyers who applied it fared worse than those who didn’t), yet still showed a relatively high percentage of satisfied buyers, it would have appeared in the bottom-right quadrant. None of the tactics we evaluated fell into this category.
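The quadrant logic described above can be sketched as a simple two-axis classification. The cutoff values below are illustrative assumptions, not the report's actual thresholds:

```python
# A minimal sketch of the quadrant classification. The satisfaction and
# impact cutoffs are assumed for illustration; impact scores below the
# cutoff are treated as low or negative impact.

def quadrant(pct_extremely_satisfied, impact_score,
             satisfaction_cutoff=30.0, impact_cutoff=1.0):
    """Place a tactic in one of the four quadrants based on the share of
    extremely satisfied buyers (x-axis) and its impact score (y-axis)."""
    high_satisfaction = pct_extremely_satisfied >= satisfaction_cutoff
    high_impact = impact_score >= impact_cutoff
    if high_impact and high_satisfaction:
        return "upper-right (most effective)"
    if high_impact:
        return "upper-left (neutral)"
    if high_satisfaction:
        return "bottom-right (none observed)"
    return "bottom-left (least effective)"

# Example: a tactic with 36.6% extremely satisfied buyers and a strong
# impact score lands in the most-effective quadrant.
print(quadrant(36.6, 5.2))
```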

30 Percent of Buyers ‘Extremely Satisfied’ With Software

Overall, buyers who contacted us for advice about software were highly satisfied with the products they ultimately selected. A majority of buyers (61 percent) rated their overall satisfaction with their final software purchase an “8” or higher, while 30 percent rated satisfaction a “9” or “10.”

Overall Satisfaction With Software Selection

People who had an attorney review their agreement with the software vendor reported the highest overall rates of satisfaction, with 36.6 percent rating it “9” or higher. While this was a higher percentage than for people who checked vendor references, attorney review’s impact on the outcome of the project was slightly lower than that of checking references.

Buyers’ Satisfaction With Tactics

The people in our sample who prepared a request for proposal (RFP) had the next-highest percentage of people rating their satisfaction “extremely high”—however, the impact on the outcome of the project was not as strong as it was for other tactics. This tactic had an impact score of “2,” compared with checking vendor references, which had an impact score of greater than “5.”

Tactics’ Impact on Project Outcome

Most Popular Tactics Not Necessarily the Most Effective

After analyzing the data, it became clear that the most-used tactics were not always the most effective ones. Instead, we found that two of the most effective tactics were actually less popular: checking vendor references and having an attorney review the agreement were employed by only about 74 percent and 56 percent of buyers, respectively.

Common Software Selection Tactics

Meanwhile, the most common tactic buyers used during the software selection process—“assessing the business processes that would be affected by the purchase”—had a relatively low correlation to high satisfaction, as well as a low impact on the success of the software selection project.

Additionally, it appears that many buyers are employing a tactic that is actually detrimental to the success of their software selection project. Over 70 percent of buyers involved end users in the selection process—the only tactic that actually increased the likelihood of buyers’ extreme dissatisfaction with their ultimate software purchases.

Demographic Information

The results of this survey are most relevant to small businesses, given the breakdown of our sample. In fact, the vast majority of buyers in our sample represented companies with fewer than 500 employees.

Respondents by Business Size

Meanwhile, the level of responsibility of the buyers we surveyed varied widely, with the largest segment of buyers filling managerial roles at their companies. Only 16 percent of software selection projects in our sample were led by an executive-level employee.

Respondents by Level of Responsibility

Methodology

Software Advice collected 321 responses for this survey in 2013. We asked buyers 14 “yes” or “no” questions to determine which tactics they used during their software selection process. We then asked respondents to rate their level of satisfaction with their software purchase on a scale of one to 10, 10 being “extremely satisfied” and one being “extremely dissatisfied.”

In order to determine the impact of each tactic on buyers’ satisfaction levels, we divided the percentage of buyers who did not apply a tactic and were extremely dissatisfied (giving a “1” or “2” rating) by the percentage of buyers who were extremely dissatisfied and did use the same tactic. The resulting number was the “impact score,” which represents the likelihood of the buyer being extremely dissatisfied with their selection if they did not apply the tactic. To use “checking vendor references” as an example:

  • Step 1: We found that 15.5 percent of buyers who did not check vendor references reported extremely low levels of satisfaction (rating their satisfaction level “1” or “2”).
  • Step 2: We found that only 3 percent of buyers who did check vendor references were extremely dissatisfied with their purchase (rating their satisfaction level “1” or “2”).
  • Step 3: We divided the percentage of extremely dissatisfied buyers who did not check references by the percentage of extremely dissatisfied buyers who did: 15.5 percent / 3 percent ≈ 5.2.
  • Conclusion: Buyers who did not check vendor references were 5.2 times more likely to report extremely low satisfaction. This tactic was gauged to have a strong impact on buyer satisfaction levels, and thus, the outcome of the project.
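The steps above reduce to a single ratio, which can be sketched as follows using the report's own numbers for "checking vendor references":

```python
# A sketch of the impact-score calculation described in the Methodology:
# the ratio of extreme dissatisfaction among buyers who skipped a tactic
# to extreme dissatisfaction among buyers who applied it.

def impact_score(pct_dissatisfied_without, pct_dissatisfied_with):
    """Return how many times more likely a buyer is to be extremely
    dissatisfied when the tactic is not applied."""
    return pct_dissatisfied_without / pct_dissatisfied_with

# 15.5% of buyers who did not check vendor references were extremely
# dissatisfied, versus 3% of those who did:
score = impact_score(15.5, 3.0)
print(round(score, 1))  # 5.2
```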

If you have additional questions about methodology, or would like to further discuss this report, feel free to contact Ashley Verrill at

