Hi Mark,

As someone who helps manage the overall strategic planning process within a UK local government context (children’s services), I am currently grappling with what you have called ‘the wrong question’. This is in the context of trying to support the establishment of an ‘in-house’ funder/contractor relationship, in line with the drive for UK local government to become, first and foremost, a commissioner rather than simply a provider of public services.

While I appreciate that the final arbiter of a successful strategy would be the population outcome indicator curves, could it not be argued that performance measures, which after all have been chosen following partnership deliberation about what works, act as a useful ‘working proxy’ of strategy success? In other words, if performance measure curves are turning, could it not then be reasonably assumed that, in due course, partners could expect to see turns in indicator curves? I suppose this is similar to the notion of performance measures acting as ‘lead’ indicators, with outcome indicators as ‘lag’ indicators.

Of course, if, in the event, the one does not follow the other (i.e. if improved performance on selected measures does not impact on population outcomes as initially hypothesized), then the suitability of one or other of the individual components of the strategy is clearly brought into question, and the proxy value of one or other of the performance measures is invalidated.

Am I missing something, or is it a case of not so much ‘the wrong question’ as perhaps a ‘yet-to-be-validated answer’?
Jay Hardman, Leicester, UK
The general population-level progression from results to indicators to baselines, story, partners, what works, and strategies often leads people to ask next, “What performance measures will tell us if our strategies are working?” This, of course, is the wrong question. To tell whether our population-level strategies are working, we must look to see if the indicator curves are turning. Indicators measure the extent to which strategies are working. Performance measures tell us whether the individual components of our strategies are working.
For example, a partnership might come together to promote community safety as measured by indicators such as the crime rate or the percentage of people who feel safe. After working through the RBA/OBA process, they might settle on an initial three-part strategy that includes community policing, improved lighting, and a neighborhood watch program. To see if the overall strategy is working, we would look to see if the curves are turning on the selected indicators (i.e. the crime rate and the percentage who feel safe). As for performance, we would take each component of the strategy in turn and identify performance measures for that component. So for the neighborhood watch program, we might look at the percentage of neighbors signed up, or the crime rate for neighbors in the program compared to neighbors not in the program.
There is one other kind of performance measure relevant here, which may be the source of some of the confusion on this subject. Those managing the overall strategic planning process need to know the extent to which the strategy (or strategies) has been implemented, and how well. So a performance measure for the partnership managing the strategy might be the percentage of agreed action steps that are on track. Notice how this measure tells us how well the partnership is working, not how well the overall strategy is working. One could easily have a strategy that is implemented beautifully but has no effect on the indicator baselines.
September 15, 2009: And more… Having re-read your initial note: Does it follow that successful performance of strategy components is a predictor of strategy success? And if not, does this mean that they are the wrong components? Another way of phrasing this question is, “Could all components of a strategy succeed and the overall strategy fail?” I think the answer is “yes.” (1) We might be doing the right things but not enough of them. (2) We might be doing the right things and yet the strategy is incomplete.

I believe we have learned that strategies sufficient to turn population-level curves require the contribution of many partners, and include no-cost and low-cost elements, some of which may be unusually difficult to assess. In addition, successful strategies must be sustained over time and must include a process (an ethos?) for continuous rethinking and improvement. In both of these cases, successful performance measures for strategy components would not be a good predictor of the success of the overall strategy in turning a curve.

This does not mean that partnerships responsible for developing such strategies do not have a responsibility to track component performance, to spur performance improvement, and where necessary to redirect investments. But we have such a long history of settling for the appearance, and not the reality, of change in the quality of life for children and families. In looking for intermediate measures, we must be careful not to repeat this mistake in another guise.

The improved performance of individual services, and even of whole service systems, would indeed bode well for the quality of life of the customers of those services or systems. But these improvements could well take place in the context of overall deterioration in the well-being of the whole population of children. This is a simple summary of the whole history of child protection reform: we fix problems with services for abused children, but more children are abused.

Mark