Features: May 14th, 2004

Reforming Performance Targets: Lessons from Executive Agencies

By Dr Oliver James

Reproduced by permission of the Public Management and Policy Association.

The sometimes undesirable effects of performance targets on public services are an area of concern in contemporary UK government. This concern has become more marked since 1998, with the development of a government-wide system of targets as part of the regime of Public Service Agreements (PSAs), as well as numerous, but more limited, systems elsewhere in the public sector. The recent House of Commons Public Administration Committee’s report, ‘On Target? Government by Measurement’, argues that performance targets, which are often linked to various forms of reward, can create incentives that lead to damaging side effects on untargeted activities and/or perverse effects on targeted activities. The report notes a lack of joined-up government in target setting, with those responsible for developing targets not always drawing sufficiently on the knowledge of those involved in delivery. It notes, too, that there is often fragmentation and conflict between targets at different levels of government.

There are undoubtedly problems. Yet their identification and some of the proposed remedies appear to be narrowly focused on a few high profile incidents rather than taking a longer term view of their broad effects. The Conservative Party’s proposals to abolish centrally set targets are vulnerable to this criticism. In contrast, proponents of targets and incentives in organisations, often economists, continue to stress their benefits, particularly in the absence of market mechanisms in the public sector.

More evidence needed

For all the general talk about ‘evidence-based’ policy, there is surprisingly little hard evidence about the effect of performance targets on public services. All too often, interest moves on to the next initiative or ‘big idea’ without waiting to assess the outcomes of previous changes. As a result we may be doomed to cycle between conflicting prescriptions for public organisations, overreacting to the overly narrowly defined problems of the day and failing to form a balanced view of the consequences of different structures. To improve our chances of avoiding this fate, it is important to encourage research that systematically explores both the processes and drivers of reform in the public sector and the consequences of those reforms.

The experience of executive agencies can help us understand some of the likely consequences of adopting performance targets elsewhere in the public sector, particularly in central government (James, 2003). Executive agencies were launched following the ‘Next Steps’ report in 1988, which recommended that ‘agencies should be established to carry out the executive functions of government within a policy and resources framework set by a department’. All agencies share two principal features. First, semi-detached from its parent department, each has its own budget and certain freedoms. These include freedom from some departmental regulations, freedom from ad hoc, day-to-day intervention by the department and freedom from some central government-wide regulation, with the organisation under the direction of a chief executive recruited through open competition. Some ‘trading’ agencies have the additional freedom to raise their budgets by charging customers. Second, an agency’s chief executive is personally accountable for the unit’s performance. This includes both the specific operational tasks the agency is required to perform and output- and outcome-focused performance targets set by the parent department.

Agencies judged a success

Between 1988 and 2001, 173 executive agencies were created. All had some system of performance targets. The number varied from just 1 to over 20, with a mean of 7 targets per body. Formally, at least, ministers set the targets. In practice, departments tended to be heavily reliant on the executive agency itself for ideas about suitable targets and levels. To improve the advice available to ministers, Ministerial Advisory Boards were developed. These comprised agency staff, departmental civil servants and non-executive directors, often with a private sector background. If applied to PSAs, a similar arrangement might promote the inclusion of a broader range of relevant stakeholders, including ‘outsiders’ such as sector experts or representatives of service users, who may be able to challenge organisations to set better targets.

The Government’s assessment of the effect of agency working has been broadly positive. The principal review of agencies surveyed a range of evidence, especially performance against targets, and concluded ‘the agency model has been a success. Since 1988 agencies have transformed the landscape of government and the responsiveness and effectiveness of services delivered by government.’ (Office of Public Services Reform and HM Treasury, 2002, p. 10).

Targets only tell half the story

In the second half of the 1990s, a large majority of executive agencies achieved most or all of their targets, with only 6 (8%) achieving around half of their targets and a further 3 (4%) achieving less than this. Yet performance against targets offers only limited assurance about overall effectiveness. There were large gaps in the performance target regimes, with many valued outcomes and objectives not having corresponding targets. Taking the Home Office and Department of Social Security’s executive agencies as examples, targets did not cover 47% of their aims and objectives, and a further 31% were only partially covered. In some trading agencies all goals were expressed in a single target, often a percentage return on assets. In part, this limited target regime reflected the alternative accountability mechanism of customers paying for services.

However, it was often difficult to encapsulate public policy goals that were valued by departments in the target regimes. Most targets were for financial performance and inputs rather than outcomes. Even where output targets existed for production, these did not always map onto outcomes. In the mid 1990s, 59% of targets related to outputs (usually units of different goods or services produced in a year), 17% to efficiency (often a measure of unit cost of output), 12% to processes (often the achievement of an administrative task), 9% to inputs (usually budget or staff levels) and only about 1% directly measured outcomes. Not only were targets limited, they changed from year to year. In the first half of the 1990s, this turnover in targets was 70% (Talbot, 1996, pp. 49-51). Inevitably, this hampered comparison of performance over time.

Using performance targets to hold chief executives to account has also proved very difficult. These officials have higher-powered incentives than those traditionally given to civil servants: personal contracts (initially for a period of three years, then on a rolling basis) with performance-related pay. A chief executive’s continued tenure is, at least nominally, related to acceptable performance, but few chief executives have been publicly criticised by ministers or removed for poor performance against targets. In part this is because it is widely recognised that targets do not cover all valued outcomes and that it is difficult to judge chief executives’ personal contributions to performance. Where chief executives have departed under a cloud, it has usually been difficult to relate their departure directly to poor performance against targets.

Even outcome focused targets of the type championed by proponents of the PSA regime often fail to capture much that is central to the concerns of ministers. Linking performance against targets to ‘high powered’ incentives is difficult to implement and creates the risk of promoting undesirable behaviour. However, the experience with agencies suggests that a few strategic targets appear to be useful as a guide to priorities, and the measurement of performance against targets can be a valuable starting point for broader discussions of performance.

Impact on organisational performance

The effect of performance targets on the conduct of public activity is difficult to judge but can be said to have contributed to some executive agencies becoming overly concerned with their own activities. This situation has created substantial negative ‘public sector externalities’, where the effects of these public bodies’ activities are not fully included in their performance assessment regimes (analogous to externalities in markets where not all consequences are reflected in the price of goods and services) (James, 2000). For example, the Benefits Agency’s focus on its own targets and distinctive working practices contributed to negative externalities for the broader social security system. There was sometimes poor information exchange between the Agency and the Department of Social Security Headquarters, hampering the communication of policy changes (including important changes to pension entitlement). It also led to others in the system, especially local authorities administering Housing Benefit, using inaccurate information generated by the Agency. The full potential for shared services across organisations, including systemic e-government, was also not exploited. These problems appear to have been shared, to varying degrees, by many agencies that are ‘mainstream’ to their departments’ activities or that need to work closely with other bodies. However, the effects of performance targets on overall performance were much less problematic in agencies focused on clearly defined customer groups, such as many trading agencies (James, 2003).

In the light of these issues, the Treasury and Cabinet Office’s current direct involvement in performance measurement and targeting could be seen as an attempt to ensure a holistic perspective on performance rather than unnecessary control. In the social security system, systemic concerns were a key motivation for the creation of the Department for Work and Pensions and the associated merger of the Benefits Agency with other bodies to form the Jobcentre Plus business. At the same time, the emphasis on controlling organisations by the use of crude targets was reduced. More generally, the regime of Public Service Agreements, particularly the ‘cross-cutting’ PSAs, is an attempt to strengthen mechanisms for setting shared priorities and measuring progress across organisational boundaries (James, 2004).

Alternatively, it may be that central government should not be attempting to set and monitor targets in as many areas of public services as it currently does, and that regional or local discretion in this area is preferable. The allocation of tasks between levels of government is the subject of much political debate and is bound up with the present administration’s concern to ‘deliver’ on public services. It is perhaps not surprising, therefore, that a clear solution to this problem is not yet apparent, despite much rhetoric in central government about the need to decentralise.


James, O. (2000) ‘Regulation inside Government: Public Interest Justifications and Regulatory Failures’ Public Administration Vol. 78, No. 2, pp. 327-343.

James, O. (2003) The Executive Agency Revolution in Whitehall: Public Interest versus Bureau-shaping Explanations Basingstoke, Palgrave/Macmillan. Further details, and a sample chapter, are available from: http://www.ex.ac.uk/shipss/politics/staff/james/index.htm

James, O. (2004) ‘The UK Core Executive’s Use of Public Service Agreements as a Tool of Governance’ Public Administration Vol. 82, No. 2 or 3.

Office of Public Services Reform and HM Treasury (2002) Better Government Services: Executive Agencies in the 21st Century London, The Cabinet Office.

Talbot, C. (1996) ‘Ministers and Agencies: Responsibilities and Performance’ in The Public Service Committee Second Report Ministerial Accountability and Responsibility Volume II Minutes of Evidence HC 313, Session 1995-96, London, HMSO, pp. 39-55.

Dr Oliver James is a Senior Lecturer in Politics and Co-ordinator of the MA in Public Administration and Public Policy at the Department of Politics, University of Exeter. o.james@exeter.ac.uk