This article was written in collaboration with Dr. Fiona Hollands, Associate Director and a Senior Researcher for the Center for Benefit-Cost Studies of Education at Teachers College, Columbia University. It appears in the September 2018 issue of School Business Affairs, published by the Association of School Business Officials International.
Each year during the budget season, district and school leaders have the opportunity to make budgetary adjustments to make sure limited resources are used both effectively and efficiently to help achieve the organizational mission. Of the myriad types of budget adjustments, one leaders frequently face has to do with selecting a limited number of new programs to fund and/or identifying some existing programs to discontinue.
Such decisions are high-stakes, not just because they involve hundreds of thousands or even millions of dollars, but also because they have a lasting impact on, and broader implications for, programming in other areas, personnel, and students. In addition to directing limited resources to where they are most needed and can do the most to improve student outcomes, sound decisions send a clear signal to the school system about what is important (and what is not) and get people excited and motivated. In contrast, poor decisions can bring about unnecessary and even harmful disruptions, hurt morale and culture, and cripple leaders’ effectiveness and authority. Worst of all, poor decisions waste resources and energy that could have been spent elsewhere to help students learn and improve.
Despite the significance of such decisions, many districts do not yet have the organizational infrastructure needed to support informed, sound decision making about which programs to fund or de-fund. In this article, we first propose six factors to consider when making such decisions. These factors should help decision makers evaluate each budget item in a fair and transparent manner. Next, we explain how leaders can use all or a subset of these factors, considered either sequentially or simultaneously, to develop a decision making protocol that reduces the risk of haphazard, siloed, and biased decisions. The goal is to prioritize which programs to accommodate in the budget.
Factor 1: Alignment with Organizational Priorities
This factor is concerned with whether the program in question is aligned with the district’s mission, stated goals, strategic plan and current areas of priority. In budget discussions, it is often referenced and used as an argument for making and justifying program funding decisions. Despite being conceptually straightforward, this factor can be difficult to apply due to lack of specificity in how organizational priorities are defined and communicated.
For example, student learning as an organizational priority is so general that almost any program can be argued to be aligned with it. In contrast, increasing the recruitment of minority teachers is a very specific priority against which a program’s alignment can be easily assessed.
Local context such as district size, budget situation, and extent of collaboration among leaders will influence the precision of district priorities. The more specific the organizational priorities, the easier to assess alignment of program funding decisions.
Factor 2: Evidence of Impact
For district leaders, it is an (often unmet) obligation to invest limited resources in programs that are impactful and stop wasting money on those that are not. In practice, however, this can be challenging because many districts lack the capacity to critically review the research literature to identify effective programs to adopt, or to conduct rigorous evaluation studies on the impact of programs they are already implementing in their district.
Ideally, evidence of program impact would be collected by reviewing student performance before and after participating in a given program to see whether outcomes improve more than they would have done without the program. The best way to gather such evidence is to conduct experiments by randomly assigning students into a “treatment” group where participants receive the program intervention and a “control” group where the participants do not receive the program intervention. Alternatively, “quasi-experimental” studies can be conducted in which statistical methods are used to rule out factors other than the program intervention as the cause of any differences in performance.
Thanks to the efforts of the federal government and the research community, districts can now use websites such as the What Works Clearinghouse (WWC) and the Best Evidence Encyclopedia (BEE) to find evidence of impact for some widely available programs that have been reviewed against the highest standards of evaluation. To allow comparison of programs that target different types of outcomes, such as reading vs. math, impacts are presented in a standardized form termed “effect sizes,” which can serve as a common metric for decision making across program types. Decision makers must still consider whether impacts found in these studies can be replicated in their own contexts and with their own student and teacher populations.
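Effect sizes of this kind are typically standardized mean differences. As a minimal sketch (not the WWC’s exact procedure, which involves additional adjustments), one common metric, Cohen’s d, divides the difference between the treatment and control group means by their pooled standard deviation:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference between two lists of student scores."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = sum(treatment) / n1, sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

An effect size of 0.25, for instance, means the average treated student outscored the control group mean by a quarter of a standard deviation.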
For the much greater number of programs that have not been rigorously vetted by these resources, it remains a significant challenge for districts to know what is working and should be continued or expanded. That said, districts can, with a certain level of confidence, apply a very conservative approach to identifying programs that might not work and consider discontinuing them. If no improvement is observed in teacher or student performance after a program has been implemented for a few years and there has been no substantial change in the demographics of the program participants, it is likely that the program is not effective in the observed context.
Factor 3: Cost per Pupil
Program cost is a factor that leaders often weigh heavily when making decisions. In many cases, a program is not funded due to its high price tag. However, total price tag is oftentimes a misleading measure that masks or even distorts the true expense of a program. Instead of only relying on the total program price, education leaders should use costs per pupil for decision making.
For example, Program A with a total price of $250,000 is seemingly more than twice as expensive as Program B at a total price of $100,000. When the number of students served is considered, if Program A serves 5,000 students while Program B only serves 400 students, then Program A’s costs per pupil are one fifth of those for the ostensibly cheaper Program B.
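The arithmetic behind this example is a one-line calculation; the figures below are the ones given above:

```python
def cost_per_pupil(total_cost: float, students_served: int) -> float:
    """Divide a program's total price by the number of students it serves."""
    return total_cost / students_served

# Program A looks 2.5x more expensive in total, but serves far more students.
a = cost_per_pupil(250_000, 5_000)  # $50 per pupil
b = cost_per_pupil(100_000, 400)    # $250 per pupil
print(a, b)  # Program A costs one fifth as much per pupil as Program B
```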
Part of the reason for the current focus on total cost instead of cost per pupil is that decision makers often do not think carefully about how many students will be served when a new program is proposed. Even when this information is available, it may not be routinely documented and used in decision making. Requiring this important number in the program description and documenting it are the first steps to establishing it as a criterion for budget decisions. After a few years of collecting cost per pupil information, you will be able to develop a local norm for economical, acceptable, and expensive program costs per pupil, which can be used to guide decisions.
It should be noted that while the immediate “cost” consideration of a program is the burden on the budget, a full evaluation of the costs of a program should include consideration of the amount of time required by teachers and other staff to implement it, and the amount of training and ongoing support needed. Additionally, some programs may need new facilities, equipment or services such as transportation. This full cost evaluation is especially important when a program is under consideration for replication or expansion.
Factor 4: Political Support
It is no secret that programs are more likely to succeed if they attract buy-in from teachers and staff within the school system and the support of board members, community members and parents. In addition to judgments regarding alignment with district priorities, evidence of effectiveness, and costs, district leaders need to consider the amount of support or opposition toward a new program among relevant stakeholders.
It is also no secret that the main obstacle to discontinuing an existing program is pushback from stakeholders, especially from teachers and staff whose livelihoods would be impacted. Objections may be raised by parents whose children’s needs would no longer be served as the result of a program cut, and teachers may be concerned about absorbing “troubled” students in their classrooms if those students have nowhere else to go. If unmet needs can be addressed by new programs, concerns would be attenuated and resistance reduced.
Another political aspect to consider is whether the program has a viable “champion” or “owner” who will maintain its profile and ensure that it is well-resourced and implemented. More than that, this ownership or championship is often critical to help the district remain focused and committed, despite many distractions and interruptions leaders face throughout the year.
The above four factors – Alignment with Organizational Priorities, Evidence of Impact, Cost per Pupil, and Political Support – help district leaders assess programs individually, without considering how each program could potentially affect or be affected by other programs. However, educational programs are not implemented in isolation. What happens in and to one program not only affects the students in the program, but also has implications for those in other programs. In addition to the merits and weaknesses of an individual program, leaders need to take a holistic view to examine the interaction a program might have with the school system from at least two angles. These are discussed next.
Factor 5: Feasibility of Implementation
The first angle is feasibility of implementation, which concerns both the central office department in charge of the implementation and the schools where the program will be rolled out. In many districts, schools, especially low-achieving schools, attempt to implement multiple initiatives simultaneously. In spite of noble intentions, decision makers need to think hard about whether the principals, teachers, and students in those schools have the time and energy to devote to yet another new initiative. Stretching personnel too thinly will likely take a toll on both the new and existing initiatives.
At the district level, leaders need to look at how many programs each department is currently operating and how successful they are. If the department is already struggling to implement existing programs, it may not be wise to add more weight to the department’s current responsibilities. Even if the department is succeeding in implementing existing programs, it is still necessary to assess the burden each additional program places on the department’s staff to ensure that they are not set up to fail.
Sometimes, feasibility of implementation is also dependent on other departments that provide support. For example, a new program may call for hiring a large number of high-quality minority teachers to provide both instructional and social-emotional support to minority students. But if the Human Resources department has been unable to fill vacant teaching positions, money allocated to this new program will be unlikely to have the intended impact: the shortage of qualified candidates will result in the funds remaining unused instead of being allocated to meet other needs that can be more easily met.
Factor 6: Coherence with Other Programs
The second angle from which to look at programs holistically is coherence. What this factor focuses on is whether the programs under consideration will likely complement each other and existing programs, or create confusion and competition instead.
For two programs that target the same improvement area, it is not uncommon for them to differ in philosophy, language, approach, and method. When the differences contradict each other directly (e.g., one reading program focuses on whole language while another emphasizes phonics), adopting both may not be a wise decision.
In many cases, however, the differences may not necessarily lead to conflicts in implementation. What leaders need to consider is whether the central office department that will be rolling out the two programs has a plan that articulates how the two improvement efforts will be coordinated to enhance each other, and whether the department has the capacity to execute it.
DECISION MAKING PROTOCOLS
With so many factors to consider, it is necessary to institute a protocol that governs how the factors should be used for decision making. Numerous strategies have been devised in other fields to facilitate decision making when multiple factors or “criteria” are involved. Depending on the local context and culture, a district can use all or a subset of the above six factors sequentially or simultaneously to make program funding decisions.
A sequential decision making protocol considers each factor in turn. Before applying it, leaders need to reach consensus on: 1) which of the six factors will be considered for decision making, if not all six, and 2) in which order they will be considered based on their relative importance. The chosen factors should be rank ordered from most important to least important. All programs are then assessed against the first factor, and only those that receive unanimous or majority support advance to the next round, where they are assessed against the second factor. This process continues until the assessment based on the last factor is complete. Programs that survive the entire process are approved for funding; any currently operating programs that fail along the way should be discontinued.
For example, through discussion, a leadership team agrees to use Factors 1 (Alignment with Organizational Priorities), 2 (Evidence of Impact), 5 (Feasibility of Implementation), and 3 (Cost per Pupil) in that order to help decide which new programs to fund and which existing programs to de-fund. Based on these four factors, Table 1 below demonstrates the process of how the leadership team makes funding decisions about seven programs (A – G) following the sequential protocol. Among the seven programs, programs A and D are existing and highlighted in bold. After the last assessment is complete, there are two remaining programs (D and E). As a result, existing program D will be continued and new program E will be launched with funding support.
Table 1 Example of a Sequential Decision Making Protocol
| Factor | Programs Advancing | Programs Eliminated |
|---|---|---|
| 1. Alignment with Organizational Priorities | **A**, B, C, **D**, E | F, G |
| 2. Evidence of Impact | B, C, **D**, E | **A** |
| 5. Feasibility of Implementation | B, **D**, E | C |
| 3. Cost per Pupil | **D**, E | B |
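The sequential filtering shown in Table 1 can be sketched as a simple loop. The pass/fail verdicts below are hypothetical stand-ins for the leadership team’s votes, chosen to mirror the example:

```python
# Each factor maps a program name to True (advances) or False (eliminated).
# These verdicts mirror the Table 1 example; in practice they would come
# from unanimous or majority votes of the leadership team.
factor_verdicts = [
    ("Alignment with Organizational Priorities",
     {"A": True, "B": True, "C": True, "D": True, "E": True,
      "F": False, "G": False}),
    ("Evidence of Impact",
     {"A": False, "B": True, "C": True, "D": True, "E": True}),
    ("Feasibility of Implementation",
     {"B": True, "C": False, "D": True, "E": True}),
    ("Cost per Pupil",
     {"B": False, "D": True, "E": True}),
]

def sequential_protocol(programs, factors):
    """Apply each factor in order; only surviving programs reach the next round."""
    surviving = list(programs)
    for factor_name, verdicts in factors:
        surviving = [p for p in surviving if verdicts.get(p, False)]
    return surviving

print(sequential_protocol(["A", "B", "C", "D", "E", "F", "G"], factor_verdicts))
# ['D', 'E'] -- the two programs approved for funding in the example
```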
In contrast to the sequential protocol where factors are examined one at a time, the simultaneous protocol considers all of the factors selected for decision making at the same time. Due to increased complexity, the simultaneous protocol can be best facilitated by quantifying the result of assessing each decision factor and then synthesizing the scores.
Of the six factors discussed above, Factors 2 (Evidence of Impact) and 3 (Cost per Pupil) are quantitative in nature. For Cost per Pupil, the assessment should lead to a specific dollar amount, say $200 per pupil. For Evidence of Impact, effect size, which often ranges between 0 and 0.5 for educational programs, is routinely used to gauge the magnitude of program impact.
For the other four factors, a score can be obtained by having each decision maker rate a program on a scale (0–5, for example). Ideally, decision makers would gather evidence to support their ratings and use rubrics that clearly define what constitutes, for example, a “5” for Feasibility of Implementation. Ratings could be assigned during a meeting of the district’s leaders or cabinet, or by representatives of each stakeholder group (e.g., district leaders, board members, program directors, principals, and teachers). After each person, or group of stakeholders, provides a rating, the individual scores can be averaged to obtain an overall rating for each program.
Scores for each of the factors need to be standardized for comparability and combined to produce an overall rating for each program. For example, a utility value can be calculated for each program. If certain factors are more important and should play a bigger role in affecting the decisions, different weights can be assigned to each factor with the more important factors being assigned heavier weights.
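One way to standardize and combine factor scores is min-max scaling followed by a weighted sum. The sketch below illustrates the idea; all program scores and weights are hypothetical, and a district might prefer a different scaling or weighting scheme:

```python
# Hypothetical raw scores for three programs on three factors.
raw_scores = {
    "D": {"effect_size": 0.30, "cost_per_pupil": 50,  "political_support": 4.0},
    "E": {"effect_size": 0.15, "cost_per_pupil": 250, "political_support": 3.5},
    "F": {"effect_size": 0.25, "cost_per_pupil": 120, "political_support": 2.0},
}
# Heavier weights for more important factors; cost per pupil gets a
# negative weight because a *lower* cost is better.
weights = {"effect_size": 0.5, "cost_per_pupil": -0.2, "political_support": 0.3}

def utility(scores_by_program, factor_weights):
    """Min-max scale each factor to [0, 1], then take a weighted sum."""
    factors = factor_weights.keys()
    lo = {f: min(s[f] for s in scores_by_program.values()) for f in factors}
    hi = {f: max(s[f] for s in scores_by_program.values()) for f in factors}
    def scaled(p, f):  # standardize so factors on different scales are comparable
        return (scores_by_program[p][f] - lo[f]) / (hi[f] - lo[f])
    return {p: sum(factor_weights[f] * scaled(p, f) for f in factors)
            for p in scores_by_program}

for program, score in sorted(utility(raw_scores, weights).items(),
                             key=lambda kv: -kv[1]):
    print(program, round(score, 3))
```

With these (invented) numbers, the highest-utility programs rise to the top of the printed ranking, giving leaders a single comparable score per program.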
All districts have an implicit or explicit process for making decisions about which programs to fund and which to defund. What has been lacking in these processes, however, is discipline and rigor. It is not uncommon for leaders to overlook important factors, rely on misguided information, or apply different rules and criteria to different programs when making program funding decisions. Nor is it surprising that some individual leaders have undue influence over final decisions. As a result, despite the best intentions, limited resources are not spent on the programs that are most cost-effective for improving student learning.
In this article, we highlight six factors that we believe are important for leaders to consider in making program funding decisions. District leaders can apply these factors (or other preferred factors that are important to local context) either sequentially or simultaneously to help identify the best-rated programs to fund. This disciplined process will help leaders make budget decisions that are more strategic, holistic, and data-driven. Equally important, it will help build a collaborative team among leaders and increase the odds of programs being successfully implemented.