Planning a data literacy programme in government

This is a stream of consciousness (ie a ramble) on some work I’m doing. It’s for my own benefit, but might be of interest to others (don’t be afraid to TL;DR). Reason: I collapsed on the tube this morning, my head wasn’t together and I had a meeting to discuss this plan. Blogging helps me hone my words, because I don’t know who is reading and it makes me think about content issues.

To aid me, I listened to punk band Many Monika (who I saw last night).

At the end of 2015 the Cabinet Office Transparency Team merged with the Government Digital Service (GDS) Registers team to form the GDS Data Team (“the team”) led by Paul Maltby.

Part of the new team’s work is around data literacy, and I’ve been tasked with building a plan for a data literacy programme. So in December 2015 I held a workshop attended by members of the team. I started with 5 questions:

  • what do we mean by data literacy?
  • what is our aim with this programme?
  • who are our potential users?
  • what user research is needed?
  • what learning resources are there?

In true GDS style many, many sticky notes were stuck on whiteboards. After the workshop I grouped them and typed them up. Since then I’ve spent some limited time (I’m doing 2.5 jobs) trying to figure out how to turn it into a plan.

Fortunately I was in HM Treasury on Whitehall yesterday and bumped into Pauline Ferris and Sheila Bennett. They were instrumental in planning and building the original Government Digital Strategy + the ongoing progress reports.

Pauline suggested building a Benefits Realisation Plan and pointed me to the NHS Institute. After reading their simple, clear guidance I printed out the workshop outputs, cut them up and had a crack:


A Benefits Realisation Plan is basically a table with the column headings:

  • Desired benefit
  • Stakeholders impacted
  • Enablers required to realise benefit
  • Outcomes displayed if benefit realised
  • Current baseline measure
  • Who is responsible
  • Target date

Initially I found it challenging to crowbar the outputs into the table. So I printed out the NHS Institute’s guidance and re-read it. This helped, and a 2nd iteration of the Benefits Realisation Plan started to make sense.


Overnight I became concerned about the narrow selection of people in the original workshop. They’re brilliant, but probably not the actual users.

This reminded me of an incident in 2014 whilst working on the Service Manager Induction Programme. I used to run the “Make, Test, Learn” part of the training, where trainee service managers would apply their learning. They would pick something to build (usually something GDS was already working on) and start with a discovery phase. The core part of this phase is user research, which requires talking to actual users. I would pick “users” who knew about the thing they wanted to build from inside GDS (where the training was held).

One day I was pulled up by Leisa Reichelt (head of user research) and given a bollocking. Picking users who build and work on a thing was wrong. They are not true users. They are proxy users, eg if you work on a product, rather than use a product, your view of the product is biased.

From that point on, whenever the “Make, Test, Learn” session was run, the course attendees had to go find real users themselves. This meant doing guerrilla research and testing, ie going out into the wild and finding them.

The reason for that proxy user ramble is because the workshop I held in December was full of proxy users, not actual users. Yes, the workshop had been useful to get ideas to test, but they’re just ideas and may not result in an effective delivery plan. More user research is required.

I tweeted Matt Edgar and Sharon Dale, who worked on the original Service Manager Induction Programme, and asked them how they planned it. As expected, Matt replied saying discovery, listening to existing service managers, then iterating it through alpha & beta cohorts (successive groups of course attendees). Sharon replied that with every cohort, she tried to add something to the training based on their needs. This helps make the point to the cohort about how important user needs are.

One of the questions I asked in the workshop last year was, “who are our potential users?”. Among the answers were:

  • coders
  • data holders
  • comms people
  • operations people
  • top 100 policy professionals (define “top”?)
  • non-policy people (rather nebulous)
  • non-statisticians (very nebulous)

GDS has plenty of the top 4 and the team knows how to access the policy professionals. So no shortage of users for discovery.

Bearing in mind Pauline’s advice to build a Benefits Realisation Plan, it’s more important right now to list the desired benefits of a data literacy programme. The workshop suggested they are:

  • increase departmental capability to solve their own problems (this has a GDS “holier than thou” feel to it – needs reworking)
  • more civil servants are able to ask the right questions (with data)
  • more civil servants are able to spot misleading uses of data
  • government data is more accessible to its users
  • more government data is “dogfooded” (ie we use our own data)
  • the baseline of capability in government rises

Head hurts. Time for a break, some fresh air and a wander….