Benchmarking Blunders

Most project teams have encountered them: those times when benchmarking data is lost, doesn't make sense, or just flat-out doesn't support the business case the team is trying to make. See if your organization has stumbled into any of these benchmarking blunders. We've also included tips for resolving each of them.


Data sets are lost or incomplete. This could be the result of faulty equipment or misunderstandings within the team about who was responsible for collecting and storing the information. Once you've determined where the error occurred, consider whether there are ways to recreate the data. Depending on the information your team intended to capture, it may be possible to pull the data from paper records or other sources. Vendors might have at least a portion of the information within their systems. It may also be feasible to restore a backup copy if the current version of the file was inadvertently deleted or can't be found in the folder that's supposed to house it. If you're not able to completely restore the data set, be sure to note that in any reports your team produces so there are no misunderstandings about what was benchmarked and what wasn't.

Stakeholders aren't in agreement on benchmarking data. The most common conflicts arise around which metrics matter. Project management teams may focus on one set of data while end users are more concerned with another. It's also possible that various stakeholder groups simply don't understand what the benchmarking data means. Providing a bit of background along with the report can often clear up the confusion. It also helps to put organization-specific benchmarking data into perspective: consider including comparable data for your company's overall industry, or for the type of project or operational area the project supports.

The team's conclusions aren't supported by the data. This usually comes as a surprise to everyone, and it can leave the team scrambling for answers. First, step back and evaluate the data you captured. Is it complete, and were your gathering techniques sound? Did you capture what you thought you captured? Then look at the data in detail. Are there market cycles or recurring patterns that should be taken into account? Depending on the time span the data covers, the team may be looking at only one portion of such a cycle. If, however, you find that the metrics simply don't match expectations and initial predictions didn't pan out, the best course of action is to acknowledge it and move forward.

Team members don't agree on the methodology. This blunder shows up most frequently when setting internal performance metrics: team members may hold differing opinions on how and when data should be gathered, how the results will be interpreted, and possibly who will interpret them. It's crucial that these disagreements be evaluated and put to rest before data collection begins; otherwise you're likely to come out of it with irrelevant data or a sub-group of individuals who disregard the findings. The best strategy is typically a mixture of compromise and informed leadership direction. Bring the team together to discuss the purpose of benchmarking specific performance indicators. Review the various methodologies preferred within the team and the merits and drawbacks of each. Take the time to solicit and examine any concerns. The project's leadership must then make a final decision on which methodology will be used and share the supporting reasoning, so even if specific team members aren't happy with the decision they can at least understand the rationale behind it.
