The project was initiated by the National Corrections Advisory Group (NCAG) in response to Corrective Services Administrators’ interest in more in-depth benchmarking analyses than are possible through the annual statistical collection process for the Report on Government Services (RoGS). An independent consultant was contracted in 2005 to undertake a comparative study of home detention programs currently operating in Australia and New Zealand.
The brief for the project specified that the study would be based on information provided by corrective services agencies from existing sources, would not involve the development or collation of any new statistical collections, and that access to individual offenders or their files would be outside the scope of the project.
NCAG representatives agreed that information would not be collected on individual offender characteristics, given the resource implications for some jurisdictions of providing unit record data relative to the value that a statistical profile of detainee characteristics would add, in light of the outcomes of the literature review and analyses conducted during earlier stages of the project. In response to a progress report presented at the November meeting, NCAG also agreed to “focus the scope of the study on process benchmarking comparisons, to limit analysis of outcome measures to variation in the existing indicator of ‘completion rate’ reported in the RoGS without collecting further statistical breakdowns on potential underlying factors such as socio-demographic and correctional history data, and that unit record data would not be collected”.
Purpose
The purpose of this report is to inform jurisdictions about the range and nature of home detention (HD) programs operated by Australian and New Zealand corrective services through a comparison of the key features of these programs (process benchmarking) and, based on available comparable information about program outcomes, to analyse the factors underlying variations in performance across jurisdictions (performance benchmarking). The scope of the comparison is limited to front-end and back-end programs, excluding home detention for unsentenced offenders.
Methodology
The information for these comparisons is drawn from a range of sources, including: analysis of relevant legislation, policy and procedural manuals, and other key documents; a review of the international research and practice literature for good practice features underlying successful performance; statistical analysis of data provided by jurisdictions; and consultations with officers responsible for home detention programs in each jurisdiction. The process benchmarking is based on a comparison of program features as set out in legislation and in current policy and procedures documentation, not on details of actual practice.
For the process benchmarking exercise, jurisdictional programs were compared on a number of features, including:
- program objectives and stated purpose;
- scope of application and eligibility criteria;
- methods used to assess suitability for the order, and assessment report content requirements;
- order conditions and the level of discretion to apply and vary conditions;
- order length restrictions;
- practices for providing order/program information to the detainee, co-residents and the general public;
- case planning and case management practices, including the point in time at which case planning commences and the core matters covered in the case plan;
- case review features, frequency and responsibility;
- surveillance/monitoring methods and responsibility, including the type of monitoring regime and minimum contact standards;
- application of electronic monitoring (EM) and features of such schemes;
- circumstances under which orders may be revoked;
- actions upon breach of an order, including available penalties, the authority to impose these, and the level of discretion; and
- selected features of program administration.
The features on which programs were compared were selected because they represent core areas generally addressed in legislation and policy/procedures documentation, were identified in the international research and literature review as good practice features and/or critical success factors in evaluations of home detention programs, or were identified by program managers as features they considered contributed to successful outcomes for their programs.
Program performance was assessed in terms of the only currently available comparable outcome measure, which is program completion rates (calculated as the proportion of orders finalised each year that were not revoked), as reported by jurisdictions for the annual Report on Government Services (RoGS).
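The completion-rate measure defined above can be illustrated with a short calculation sketch. The figures used here are hypothetical examples, not data from the study:

```python
def completion_rate(finalised: int, revoked: int) -> float:
    """Completion rate as reported for the RoGS: the proportion of orders
    finalised in a year that were not revoked, expressed as a percentage."""
    if finalised <= 0:
        raise ValueError("number of finalised orders must be positive")
    return 100 * (finalised - revoked) / finalised

# Hypothetical year: 200 home detention orders finalised, 24 revoked
print(f"{completion_rate(200, 24):.1f}%")  # 88.0%
```

Note that because the denominator is orders finalised within the year rather than a cohort of orders commenced, jurisdictions with very few finalised orders (as noted later for Victoria and the ACT) can show large year-to-year swings from small absolute changes.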
Key Results
The analysis of annual order completion rates for home detention over time shows no consistent and substantial level of superior performance for any single jurisdiction, particularly when factors that would contribute to performance variation, such as differences in the size and nature of detainee populations, are taken into account. The total completion rate for each jurisdiction (except Victoria and the ACT, which were excluded from this analysis given the small number of completed orders in those jurisdictions) over the ten-year period from 1996-97 to 2004-05 ranged from 77% to 89%, with four jurisdictions differing by less than 4 percentage points (85-89%). Completion rates ranged from 79% to 92% in 2004-05, with the three best performing jurisdictions differing by less than one percentage point.
Although the variation in program completion rates across jurisdictions is not substantial, NZ and the NT show slightly higher rates for total orders (over the ten-year period for the NT and the six years that the program has been operating in NZ), as well as the highest or equal highest rates for the most recent year (2004-05). An analysis was undertaken of features shared by these two jurisdictions but not found in the jurisdictions with lower completion rates, to identify possible factors contributing to variation in program outcomes.
No unique program features (as documented in policy and procedural manuals or legislation) were identified that clearly explain these differences in performance on the measure of order completion. In general, all jurisdictions demonstrate the program features and practices identified as good practice in the international literature. The NT and NZ, while achieving a similar standard of performance, vary in a number of respects, e.g. whether programs are front-end or back-end, the decision-making authority for front-end orders, the standard conditions universally applied to all detainees, the application of electronic monitoring, and responsibility for surveillance. Both also share a large number of program features with the other jurisdictions showing lower rates of program completion.
Analysis of selected detainee population characteristics identified in the research and practice literature as contributors to program outcome and/or recidivism, although limited in scope given the information available to the study, did not provide evidence that characteristics such as gender, Indigenous status, or most serious offence are strong predictors of program outcome. Nor is there any obvious correlation between caseload or unit cost and program completion rates, based on the information available to the study. The two jurisdictions sharing the highest completion rates (the NT and NZ) have markedly different detainee-to-operational-staff ratios, attributable at least in part to the use of a contracted company to monitor detainees in one jurisdiction and the scope of non-metropolitan geographic coverage required in the other. Unit costs in the jurisdictions with the highest and the lowest completion rates were almost identical in 2004-05.
Overall, on the basis of the information available to the study, there are no obvious factors contributing to performance variation on the outcome measure used. Arguably, home detention programs are distinguished less by significant differences in key areas of operation (such as broad assessment, case management, and breach processes) than by different ‘strategic’ approaches established in legislation that govern the scope and application of such programs.
This benchmarking analysis needs to be considered as preliminary rather than definitive work, providing a basic comparison about broad program features, assessed against a single outcome measure. This is in line with the objectives and scope of this study, which was designed to provide a common understanding of the different programs operating in Australia and New Zealand, and to consider performance variation based on existing indicators. Additional measures are discussed in the final section of the report.