---
title: How we measure the success of coaching engagements
date: 2024-10-30
authors:
  - matt-cloyd
excerpt: |
  18F's projects end in different ways: sometimes there's a website or report, other times the outcomes are less visible. When a project has a tangible output, like a website, it's pretty easy to tell if we did our job. But when the goal of the project isn't a thing, how can we tell whether we succeeded?
tags:
  - how we work
---

18F's projects end in different ways. Sometimes there's a website or report, but other times the outcomes are less visible. When a project has a tangible output, like a website, it's pretty easy to tell if we did our job.

But when the goal of the project isn't a *thing*, how can we tell whether we succeeded?

## Coaching product owner skills

For a recent 18F project, the goal was to coach several agency partner staff members into becoming first-time product owners. None of them had ever done product work, but they were going to shepherd a digital services project to completion. It was important to the agency that we teach them well.

How would we know we were successful in turning agency staff into competent product owners? What could we use to discern whether we succeeded in meeting our goal?

### Adding a measurable framework

You can't know whether you've had an impact unless you have a way to observe or measure change. So, we decided to find something to measure and a way to measure it. We wanted something that would tell us whether the staff were developing product owner skills, so we could assess their progress and change our approach if things weren't going as planned.

To measure product owner skill development, we first decided on a set of product owner skills to measure. Using a virtual whiteboard with virtual sticky notes, we brainstormed all of the product owner skills we could think of. We de-duplicated identical or nearly-identical sticky notes to produce a list of unique skills. Then, we voted on the skills using n/3 voting: each person got one-third as many votes as there were stickies, so a board of 30 stickies meant 10 votes per person. After voting, we found 10 skills that had a high number of votes. Ten skills seemed like the right amount: not too few, not too many, with good coverage of the skill areas we wanted to evaluate.

But how would we measure the development of these 10 skills?

We decided to use a skill development approach called EDGE. The acronym stands for Explain, Demonstrate, Guide, Enable, a simple mnemonic for how to teach a skill. The idea is: first the teacher Explains what they're going to teach, then they Demonstrate it. Next, the learner starts trying it on their own, but the teacher is there to Guide the learner, correcting or refining the skill as they practice it. Finally, the teacher Enables the learner to take ownership of the skill: the learner starts doing it on their own, and the teacher intervenes only when the learner requests help.

We decided to assess product owner skills by asking each apprentice product owner to do a self-assessment of their skills every few months, using a survey based on EDGE.

We created a table with the EDGE stages across the columns and the product skills down the rows. At regular intervals throughout the project, we asked the apprentice product owners to assess themselves on each of the 10 product ownership skills. Were they unfamiliar with the skill and in need of an explanation or demonstration? Were they doing the skill, but still in need of guidance? Or did they think they were enabled to fully own the skill?
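
To make that concrete, here's a simplified sketch of what one apprentice's grid might look like, showing just three of the ten skills (it mirrors the first self-assessment pictured below):

| Skill | Explain | Demonstrate | Guide | Enable |
| --- | --- | --- | --- | --- |
| Product discovery | | ✓ | | |
| Setting goals and roadmap | ✓ | | | |
| Vendor management | | | ✓ | |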

<figure>
{% image "assets/blog/2024-10-30-how-we-measure-success/skill-assessment.png" "Skill development self-assessment" %}
<figcaption>
Skill development self-assessment of an apprentice product owner. This PO needs a demonstration of product discovery, an explanation of how to set goals and roadmap, and guidance in managing vendors.
</figcaption>
</figure>

Our goal was to get all the product owners to "Enabled" in all 10 skills. When we ran self-evaluations, we would look at which skills sat further to the left than we expected, and focus on supporting the product owners in learning those skills.

We produced a new chart for each self-evaluation, so we could look back and see changes over time. As the product owners developed their skills, we could see movement to the right, toward the "Enabled" column.

<figure>
{% image "assets/blog/2024-10-30-how-we-measure-success/skill-assessment-advanced.png" "Skill development self-assessment" %}
<figcaption>
Advanced skill development self-assessment, showing that the apprentice PO is enabled to do product discovery and vendor management, and needs guidance in setting goals and roadmaps.
</figcaption>
</figure>

Using this method, we ended up close to our goal of full enablement of all 10 skills.

### Shoring up the assessments

The next time we do this, I plan to update our methodology.

In this project, we only ran self-assessments. Next time, in addition to self-assessments, we'll ask the 18F coaches to assess the apprentices' skills as well, using the same EDGE chart. Then, we'll compare the self-assessments to the coach assessments and talk about the similarities and differences we see.

It was tempting to rely only on the apprentice product owners' self-assessments, in part because they're all skilled professionals, and most of them had longer and more established careers than the 18F team. But with both assessments, we'll have two data points instead of one, which will give a clearer picture of the situation.

### Assessing project success

With a visual in hand, we can more clearly tell whether we met our goal. If everyone was enabled in all 10 skills, that's a clear success. If the apprentices made little or no progress, we wouldn't call that a success.

But calling something a "success" or a "failure" becomes more complicated when the outcome is not 0% or 100%. Between those two ends of the spectrum lie other possibilities.

## Beyond notions of success and failure

Most of this post has been based on the idea that "success" and "failure" are useful concepts in our work. That's not always the case. These binary concepts are less useful in professional settings where a project's completion depends on more than just the skills of the individual contributors.

There are many factors at play in an 18F project, touching complex systems like staffing, management, funding, and more. All those factors complicate how we think about success.

What if the apprentice product owners had been resistant to learning these skills, perhaps because they were coerced into the product owner role?

What if they only had a few hours a week to learn because they were stepping into the product owner role in addition to their existing one?

How would we judge our success when so many of the ingredients of success were outside our control? Would we consider ourselves failures just because other people were busy?

### Looking at "success" in mediation

Let's take a look at how another type of professional handles success: mediators. Mediators work to help people resolve their conflicts by working out a resolution among themselves, instead of relying on a judge or arbitrator to decide for them.

But not all conflicts are solvable. Sometimes people in conflict think they'll get a better outcome by *not* coming to an agreement in a mediation. Sometimes (though less often than we think) people's needs are truly mutually exclusive, in which case it might be better to leave mediation and go to court.

When this happens, is the mediator a failure? No, at least not simply because the mediation ended without resolution.

In assessing their professional success, mediators do take into account whether they brought people to resolution, but only as one data point among many. They would also consider whether they performed mediation skills to the best of their ability, asking questions like, "Did I help all the participants do their best thinking and negotiating?" Their supervisors or observers might also provide input.

Mediators can also consider how the participants assessed the mediation. If the participants think the mediator did a good job and appreciated their work, that could also be a sign of success, even if the conflict wasn't resolved.

## What "success" looks like on 18F engagements

As consultants, much of 18F's work shares common elements with mediation: we are brought in to guide our partners to do their best thinking, to make good decisions, and to build reliable public services. But we're not in control of all the factors that would result in our ideal digital service delivery. While we do our best to guide agencies around constraints, it's not always possible. Sometimes the ingredients for our idea of success are just not present.

So 18F also uses a combination of factors to judge our work, like whether we had successful outcomes; whether staff applied their product, design, engineering, and consulting skills well; and whether our agency partners are satisfied with our work.

(Turns out, our partners are overwhelmingly positive about the quality of our work and the relationships we build while working together. [Interested in working with us?]({{ '/contact' | url }}))

And while binary ideas of "success" and "failure" aren't always applicable to our work, we do care that the American people are counting on us to deliver. We measure outcomes and partner satisfaction to show that we're doing our best to serve you. We're here not only to build websites and other tangible digital services that serve the public, but also to change the conditions in government that make quality service delivery more achievable. By applying the skills we have and teaching them to others, we're doing our best to help make our democracy work.