Is this data helpful?

This post is for E&C professionals who prepare slide decks at this time of the year to show “annual key metrics” to leadership.

This should sound familiar: you look at a chart and anticipate what questions leadership will have. Questions like “How does this compare to the previous 5 years?” or “What discipline was imposed for this category of allegations?”

And off you go preparing more charts. Each leading to more anticipated questions. And soon your deck becomes an exercise to answer questions from people who, at times, just like to hear themselves asking questions.

The only data that leadership should have is data that helps them make decisions, trust the current system, and anticipate what’s ahead.

If the data doesn’t do that, resist the temptation to include it.


HT to Seth Godin

Metrics that answer questions

This post from Seth Godin could have been written for ethics and compliance professionals who regularly scramble to create charts for the next board meeting.

Those charts are often filled with output metrics and lagging indicators that raise more questions than they answer. Those metrics are used because they are easy to track.

If I show you a chart that tracks my daily body weight (output metric), and you notice a trend or spikes, you will immediately ask for details about my nutrition and exercise (input metrics). Keeping track of my weight is easy. Keeping track of my caloric intake and output is a lot more work, but that’s where the answers are.

The next time you look at the chart that tracks the number of calls to your helpline, ask yourself how helpful it is (it’s not, at least not on its own). Then find something useful to measure.

Why are you measuring?

Strathern’s Law states that “when a measure becomes a target, it ceases to be a good measure.”

Seth Godin put it in terms that E&C professionals can appreciate: “As soon as we try to manipulate behaviors to alter a measure, it’s no longer useful.”

Just think of Wells Fargo’s “Eight is Great!”. Need I say more?

If you use metrics at work (and you do), ask yourself why they are in place:

  • Are you trying to identify pain points, and allocate resources to alleviate the pain? Or,
  • Are you trying to change behavior?

If the sole (or primary) purpose of your metric is to encourage or discourage a particular behavior, you may be headed for trouble.

What is compliance training for?

90% of your employees completed your online training.

80% of them thought the concepts were clear.

70% thought its length was just right.

60% understood the link to your corporate values.

50% said they learned something new.

40% agreed that it was related to their job.

30% would recommend it to a colleague.

20% thought it would help them be more compliant in the future.


Of the above, which metrics do you track?

Of those, which ones do you share with your board?


If 90% of our employees do the training but only 20% think it is helpful, are we doing our job?

Say it like you mean it

A goal in your head is fine but unlikely to be reached.

A goal written down has better chances.

A goal shared with others creates accountability and cheerleading, and is much more likely to be reached.

So those goals of a more just society that we all share right now need to be written down, backed by metrics, and published on our organizations’ websites.

Otherwise it’s like we don’t mean it.

The questions we don’t ask

Most organizations believe that you can’t improve what you don’t measure.

Which means everything gets measured.

Or almost everything.

When it comes to corporate values, the absence of metrics is almost shocking.

After launching a policy and training employees on it, we could survey them and ask “Does this new policy make you feel more or less trusted as an employee?” But we don’t ask.

After interviewing the source and the subject of an investigation, we could ask each if they felt respected in the process. But we don’t ask.

After a big commercial win, we could gather the team and ask “Did we win this the right way? Did we walk the talk?” But we don’t ask.

Is it because it’s hard? Is it because we are afraid of the answer? Is it because we don’t care? Other good questions that don’t get asked.

How we work depends on why we work

This post is the fifth in a series devoted to my reading notes (and thoughts) on the essays contained in The Culture Book, Volume 1. This essay is from Lindsay McGregor, co-founder of Vega Factor and co-author of the bestselling book Primed to Perform: How to Build the Highest Performing Cultures through the Science of Total Motivation (ToMo). Vega Factor’s mission is for every single organization on Earth to have a high-ToMo way of operating and a great culture by 2050. For my 2016 reading notes on Primed to Perform, please click here.

Social science uncovered six motives that explain why people work: Play, Purpose, Potential, Emotional Pressure, Economic Pressure and Inertia (For an overview of the six motives, click here). These motives can be measured, and the measures can predict the performance of an individual and of an organization. More specifically, the measures can predict a number of outcomes, including ethical behavior.

Our reasons for working, our “why”, directly affect what we do and how well we do it. Culture is everything that shapes our “why”, all the things in an organization that influence how we show up for work.

Anyone attempting to measure performance must first understand that there are two types of performance: tactical and adaptive. Tactical performance is your ability to execute against plan. Adaptive performance is your ability to diverge from plan, a necessary skill in today’s ever-changing world. Leaders need both types to run an organization effectively. While all six motives can improve tactical performance, only the first three increase adaptive performance. As they set out to measure performance, leaders are warned not to “weaponize” the data they collect through dashboards and scorecards. Of course, metrics are necessary, but they must be carefully selected, not used to instill fear in employees, and not necessarily tied to compensation.

The most powerful driver of employee motivation is role design. A role is poorly designed when employees don’t know what they are responsible for, only understand a piece of the problem the organization is trying to solve, don’t see the impact of their work, or don’t have the skills for the job. It should be noted that well-paid does not mean well-designed. When a role is well-designed, employees are trusted to experiment and they see the link between their work and the organization’s mission/purpose.

Performance tax

The jerk we don’t fire.

The metrics distracting from the real work.

The third approval on an expense requisition.

The sales bonus that’s not in the best interest of the customer.

Communications focused on “what” and “how” but not on “why”.

They all impose a tax.

On our compliance.

On our business.

On our culture.

Inputs → Outputs → Outcomes

In the ethics & compliance world, many professionals struggle to measure the effectiveness of their program.

We pick a few easily measurable metrics, such as the number of allegations made, and are quickly puzzled. What does it mean when that number goes up or down?

The more experienced professionals realize that input metrics are rarely helpful to measure effectiveness. They do offer an opportunity to ask good questions but they seldom provide an answer.

Output metrics are more telling. Let’s say we typically get 100 potential conflict of interest disclosures during our annual survey. This year, for the first time, we hold five lunch-and-learn sessions (input) the week before the survey to explain the different types of conflicts and why it’s important to disclose them. If our disclosures double (output), then we can probably say that our training campaign was effective.

But is this the business we are in? Is our role to generate allegations and disclosures and training completions?

Of course not.

Our role is to drive behaviors that are consistent with our organization’s values. Generally, the outcome we seek is an organization that generates trust, treats everyone with respect, and performs with integrity.

And so an effective program is one that generates the right outcomes. The relevant metrics tend to be more qualitative than quantitative – and thus harder to measure. The good news is that there are competent professionals out there who can measure these outcomes.

It’s the only way to truly measure the effectiveness of our E&C programs.


Hat tip to Todd Zipper

Are we here to help or to shame?

Dashboards and scorecards can help our business or hurt it.

If the purpose of the dashboard is to highlight pain points and deploy necessary resources to relieve the pain, then employees will gladly share the information we ask for.

If, on the other hand, the scorecard is perceived as a ploy to improve performance by shaming the few who don’t seem to keep up with the rest, then employees argue, cheat, and lie in attempts to withhold incriminating information. And performance won’t get better.

We must ask ourselves: why are we tracking the metrics that we have? And do our employees understand our purpose?