I’m the delivery manager for the digital strategy team and I want to tell you about the lo-fi service assessment approach we recently piloted.
Service assessments are crucial check-in points during a digital service’s development. Each one is a two-way conversation between a panel of digital specialists and the service team to check the project is on track and meets the Digital by Default Service Standard.
It’s been a year since we introduced department-led service assessments for health. In that time, we’ve led 4 service assessments for the Human Fertilisation and Embryology Authority (HFEA) digital team, covering their new website and clinic data portal at alpha and beta, and a mock assessment for the National Pandemic Flu Service. That equated to 20 hours of assessment time for service teams and assessors, and approximately 10,000 words of follow-up reports, bracketed by briefings, assessor pre-meets, digital leader sign-off and publishing the reports, all co-ordinated and delivered by a team of 5 digital specialists at DH. With 3 more assessments around the corner, it was time to review the process.
We gathered as a team to discuss how we could adapt the assessments to reflect our team capacity and still provide robust recommendations. We came up with these ideas:
From 18 points to 3 core questions
- How does the service meet user needs?
- Is it safe and secure?
- Can the service be quickly improved?
Instead of going through all of the questions for each point in the standard, we would prioritise checking whether teams have thought about these important principles. We still value the 18 points in the service standard and will continue to reference them for live assessments.
A little more conversation
Teams would frame their work as a show and tell under the 3 themes mentioned above, followed by a retrospective on the timeline of their development. This would be a more engaging way for all to understand the work being done and would get teams talking.
Keeping assessments in the governance chain
We would still set service assessments as a condition in spend control approvals and reports would be attached to that approval. This would help us track if teams committed to the recommendations we made.
Piloting the approach
We were keen to test the approach as soon as possible and thankfully the National Institute for Health and Care Excellence (NICE) digital team were willing to be our guinea pigs! We got the green light from the service standard team at the Government Digital Service (GDS), who offered to shadow the sessions. Recruiting assessors was still difficult but a message on the cross-government Slack community helped us find Simon Hurst, a user researcher from the Co-op Digital team.
The assessor panel and observers: [l-r] Tony Yates, NHS Digital; Nayeema Chowdhury, DH; Hannah Wandless and Alice Rodgers, GDS; Simon Hurst, Co-op Digital; Matt Harrington, DH; Andrew Jones, NHSBSA
The result? With a leaner format, we were able to comfortably run two assessments in one day. The service team and GDS loved the retrospective format, which encouraged an honest and open discussion about how the project went. We invited constructive feedback too, which was largely about the structure and timings; we’ll be clearer about these next time. There was also interesting feedback on whether this format would work better with teams we were more familiar with. We talked this one through, but we felt the outcome would still have been the same had we run the full 4-hour assessment.
We’ll publish the report from the NICE assessment on this blog so you get a clearer idea of how we formatted our approach. It’s been great to trial this with NICE and to have GDS as a sounding board for our ideas. Returning the favour, we’ve volunteered to observe and give feedback on GDS’s new assessment approach, which is in its alpha phase with the NHS Business Services Authority’s low income scheme service. If you have any questions about service assessments or our approach, drop me an email or message me on the cross-government community Slack channel.