By Scott Proudfoot
This article was first published in eGov Monitor Weekly http://www.egovmonitor.com/newsletter/signup.asp
The Internet is changing how we use and learn about government. eGovernment can improve service delivery and offer greater transparency and accountability. Having the tools in place to measure online public sector performance is key to delivering on that promise.
The past five years have witnessed an extraordinary effort by governments to make services and information available over the web. Virtually all developed nations have launched comprehensive and challenging eGovernment initiatives. The result, according to a 2001 UN study, is that there are now well over 50,000 government-managed web sites.
So far, we have been measuring the progress of eGovernment in the most rudimentary fashion. Most cited studies are variations of the same methodology – benchmark governments against each other based on the online availability of pre-determined ‘baskets’ of services and information. The nation states with the largest ‘baskets’ are declared eGovernment leaders.
This is true of the United Nations’ own work in this area, Benchmarking eGovernment: A Global Perspective. Even the better studies in this area, such as Accenture’s E-Government Leadership: Realizing the Vision, rely on this methodology. And we see the same approach used to rate the online progress of state governments, political parties, and legislatures. These are simple counting and cataloguing exercises.
The QUANTITY of eGovernment is being measured, not the QUALITY, the ROI, the effectiveness, or the actual impact on citizens. These studies educate us about the implementation pace of eGovernment, but they do little to tell us about its results.
The problem of how to effectively measure web assets is not exclusive to government!
The majority of private sector sites operate on fuzzy performance expectations and, in many instances, none. Large online retail sites have pushed the envelope the farthest in tracking and measuring results but a lively debate still exists amongst the experts about the value of different performance metrics. There is considerable experimentation and an on-going search for better tools, indicators, and measurements.
And, irrespective of how far the private sector evolves in measuring web assets, many of these techniques and tools cannot be applied easily to public sector sites. These tools are transaction driven. Governments need to measure more than ‘sales’, ‘revenue’, and ‘profits’. Additionally, many of the ‘leading edge’ web site measurement techniques raise privacy issues that make their adoption by governments problematic.
Consequently, few clear guidelines, benchmarks or measures exist to evaluate the performance (however that may be defined) of public sector web sites. Governments need to define return on investment (ROI) measures and criteria specific to the public sector.
But, there are growing efforts to measure the efficacy, performance and impact of government web assets.
This new focus on performance is driven by the escalating cost of these web assets and the need to determine whether the initial expectations of this new on-line channel are being realized. In the United Kingdom, the National Audit Office addressed this performance aspect in this way:
“As eGovernment and e-services mature, the focus of attention will tend to shift from simply providing access to services in electronic form to actively managing take-up and usage of these options by the public. All government sector agencies should put in place appropriate management information to regularly monitor usage of their web sites and electronic services, and ‘play-back’ this information to the content providers and divisions responsible for originating Web material and Internet services.”
Government on the Web II, 2002 (UK)
At the forefront of this effort is the OECD with a major project underway to look at web site evaluation and measurement practices. The OECD has defined five dimensions considered key elements of public sector web sites:
5. Citizen-focused government
The results of the OECD’s measurement and evaluation research will be published in spring 2003.
In the US, the Bush Administration has been attempting to measure its 24 key eGovernment initiatives by using business case methodologies that address such issues as alignment of the web site with the overall program strategy, measures of enhanced capabilities, improved accessibility, increased transactions and cost efficiencies. Many agencies are using the Balanced Scorecard as a framework for their measurement and reporting initiatives.
E-Government Performance Measurement in Canada
Canada has been a leader in eGovernment internationally. Our federal government is further along in grappling with the issues of performance measurement and ROI than many other governments.
The lead agency in this regard has been the Chief Information Office Branch of Treasury Board (CIOB). The CIOB has championed an effort to set the performance measurement framework by providing a mix of policies, tools and guidelines to help government departments and agencies undertake self-assessments. Departments are free to use these CIOB elements and adapt them to existing internal evaluation criteria.
The key elements of the CIOB approach so far are:
Common Look and Feel Standards Across Government: CL&F is intended to distinguish federal programs and services and to allow viewers to successfully navigate from one federal site to another by building common ‘best practice’ navigational elements.
Accessibility: This year all federal web sites are to be fully accessible to Canadians with hearing and sight disabilities. The Canadian government accessibility standards are derived from the W3C Web Content Accessibility Guidelines. This effort across every federal government web site represents a tremendous web design re-engineering effort which has gone largely unnoticed by those not disabled. Accessible web sites, via such tools as text readers, offer many disabled individuals an opportunity for easier and more complete access to government information and services than is possible in a non-digital environment.
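The most mechanical of these accessibility requirements can be checked automatically. The sketch below is a hypothetical illustration in Python, not the tooling actually used by the Canadian government; it flags images that lack the alt text that screen readers depend on, one basic checkpoint from the W3C guidelines:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags without an alt attribute -- one basic WCAG
    checkpoint. A real accessibility audit covers far more than this."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            # Record the image source (or '?') so the author can fix it.
            self.missing_alt.append(attr_map.get("src", "?"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.gif" alt="Department logo">'
             '<img src="chart.gif"></p>')
print(checker.missing_alt)  # -> ['chart.gif']
```

A scan like this catches only omissions, not poor descriptions; human review and user testing with assistive technology remain essential.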
Focus Groups/Usability Studies: These are established standard approaches which, because of their cost, tend to be restricted to the larger government sites. Properly applied they provide useful insight and feedback from customer groups to drive site strategies and design.
The 11 Indicators of Success: These were set out this fall as the core indicators departments are expected to use and adapt to measure their online assets. They are:
4. Critical mass of services
Nine Stage Progress/Implementation Measurement Models: These classify the pace of implementation and degree of personalization and functional interactivity of government sites. Separate guidelines have been created for services/transaction oriented government sites and information driven sites.
Common Measurement Tool: This is a ‘core’ question survey approach which quizzes client satisfaction levels. It grew from the Service Improvement Initiative of the late nineties. Currently, CIOB is working on a ‘next generation’ Common Measurement Tool model, usable across multiple channels, including online. This new model will be made public in the next few months after the completion of interdepartmental discussions.
Common Definitions for Web Traffic Metrics: TBS has issued a Request for Proposal to establish and define core web traffic metrics that will be used by the three major GOL Gateway Sites (Canadians, Non-Canadians & Business) plus the 35 cluster sites that cut across most departmental online offerings. Since government web sites use a variety of web traffic software packages that often measure different things, or define common things differently, it has been difficult to accurately compare the performance of one government web site to another. By establishing common definitions, such comparisons become possible. Government can also ‘roll up’ individual site statistics within and across departments and use more sophisticated benchmarking and diagnostic tools to assess performance.
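The value of common definitions can be shown with a small sketch. The metric names, alias table and figures below are invented for illustration and are not actual TBS definitions; the point is simply that once each package’s vendor-specific terms are mapped onto shared ones, statistics from sites running different software can be rolled up and compared:

```python
# Hypothetical alias table: maps vendor-specific metric names
# (invented here for illustration) onto common definitions.
VENDOR_ALIASES = {
    "page_impressions": "page_views",     # package A's term
    "pages_served": "page_views",         # package B's term
    "unique_hosts": "unique_visitors",
}

def normalize(report):
    """Map one site's vendor-specific metric names onto common definitions."""
    return {VENDOR_ALIASES.get(name, name): value
            for name, value in report.items()}

def roll_up(reports):
    """Sum normalized metrics across sites so departments can be compared."""
    totals = {}
    for report in reports:
        for metric, value in normalize(report).items():
            totals[metric] = totals.get(metric, 0) + value
    return totals

# Two sites reporting the same things under different vendor names.
site_a = {"page_impressions": 12000, "unique_hosts": 3100}
site_b = {"pages_served": 8000, "unique_visitors": 1900}
print(roll_up([site_a, site_b]))
# -> {'page_views': 20000, 'unique_visitors': 5000}
```

Without the shared alias table, the two reports could not be summed or benchmarked against each other, which is precisely the problem the common definitions are meant to solve.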
The Canadian Government is clearly leading the pack in online performance measurement. But all governments, Canada included, are still in the earliest stages of understanding and measuring public sector online performance. There are a number of issues to be resolved:
1. Privacy and security issues must be addressed. Governments have to decide how far to go, both with existing tools and new tools coming on-stream. Inevitably, these decisions limit some options, making others more important.
2. Every government web site collects web traffic statistics, but few public service managers read or use them to actively manage their sites. (In our experience, most private sector managers are not doing any better.) Web traffic statistics, currently designed by techies for techies, have to be made more strategic, manager-friendly and intelligible in the context of governments’ specific needs. Public sector managers, themselves, have to become more literate in web metrics.
3. Benchmarking works best when apples are compared to apples. Government programs vary widely, even within a single department. A science-driven health information web site has a different audience and needs than an online jobs site. To improve sites, methodologies have to be in place to measure them against ‘best of breed’ in their own space on a national and international basis.
4. Governments need to ‘cross-walk’ between online and offline performance measurement. The Internet is just one channel of program delivery. Government managers need to gauge success relative to other channels such as service via phone or client visits to an office. To the greatest extent possible, public sector managers will want to establish common metrics between offline and online channels.
The Internet is already changing the way we use and learn about government. There are sufficient reasons to believe digital government has even more potential to offer improved service delivery and greater transparency and accountability. Having the tools in place to measure that performance on an on-going basis is key to delivering on that promise.
Scott Proudfoot is Co-Chairman, Hillwatch Inc, a company that develops innovative proprietary methodologies to help government departments & agencies benchmark, evaluate and improve their online performance. Scott may be contacted on email@example.com
© KAM Ltd 2002