Crowdsourcing Defense Reform

Introduction to the Survey

This summer marks the 30th anniversary of the 1986 Goldwater-Nichols Department of Defense Reorganization Act, and interest in defense reform is high around Washington, D.C. The process for developing the 1986 legislative package is well-known for its focus on problem identification. CSIS has a proud role in that history, having housed the Defense Reorganization Project beginning in 1983 and published its February 1985 report, Toward a More Effective Defense, an influential predecessor to the landmark Goldwater-Nichols Act. CSIS is once again using its bipartisan, independent platform to help analyze defense reform prospects and possible solutions. As in the early 1980s, the reform process today must begin with strong problem definition if it is to succeed.

With this principle in mind, the CSIS International Security Program has undertaken a series of efforts aimed at improving the quality of defense reform problem definition and solution “fit.” These efforts have included roundtables with senior former military officers and defense officials, analysis of relevant congressional testimony and other literature, educational seminars with congressional staff on the Goldwater-Nichols Act, and a half-day public conference on defense reform. They also included a public survey on defense reform, undertaken with two primary goals. First, we wanted to garner a broad sense of public opinion on some of the key facets of the defense reform issue set, including assessments of the balance of power among institutions, problem identification, and guiding principles for potential reform. Establishing areas of notable consensus, or divergence, can assist in charting the course toward building support for contemporary reforms. Second, we sought to test opinions against the old axiom that “where you stand depends on where you sit” by requiring respondents to answer several demographic questions. As many opinions in this issue area tend to be bolstered by personal experience, we determined that accounting for potential biases was necessary.

We released the survey on March 4, 2016, and kept it open until March 11, 2016. During the week, we received more than 900 responses in full or in part. Respondents were asked to sort themselves into one of four primary demographic groups that best reflected their professional experiences: legislative branch, executive branch-military, executive branch-civilian, and other. Respondents then answered additional professional demographics questions related to their level of seniority and the organizations in which they served.

Given the length of the survey and that respondents were only required to answer demographic questions, the sample size per content question steadily declined over the course of the survey. The first content question received approximately 918 full responses, while the last question garnered approximately 750 full responses. Although the demographic information below reflects the full 918 respondents who completed that portion of the survey, we provide the sample size for each content question separately in this analysis.

figure-a

Figure A displays the average proportion of responses across the primary demographic groups throughout the survey. Respondents with military experience were our largest respondent group, followed by civilians in the executive branch and "other," a group comprising individuals who define themselves by their experiences outside the U.S. federal government. We received the fewest responses from professionals with experience in the legislative branch, whose participation averaged approximately 37 people per question across the content portion of the survey.

Within each of our other three major demographic groups, respondents reflected a diverse mix of experiences. Military respondents were overwhelmingly field grade officers at the O-5 and O-6 levels, serving primarily in military departments and/or combatant commands. The survey received the most responses from those in the Army, followed by the Air Force, Navy, and Marine Corps; the breakdown of military respondents by service is displayed in Figure B.

Among civilians, we received an almost even split between those with experience in DoD and those from other U.S. government agencies, such as the National Security Council staff and the Department of State. The vast majority of civilian respondents had experience at the GS-13 level or higher. The final group, "other," contained a wide mix of people with experience in the defense industry, academia, think tanks, journalism, and other nongovernment careers. Those who selected "other" mostly fell at two poles of experience: less than 5 years or more than 21 years.

figure-b

Survey Response Analysis

Question 1: Guiding Principles

959 people answered this question in whole or in part.

Given its experience with prior reform efforts, the CSIS study team sought first to determine the principles for reform that respondents felt were most important. Figure 1A displays the results; the lower the score, the more important respondents found the principle. The nine possible principles from which to select were developed through pre-survey research, including surveys and interviews of prominent former senior military and civilian defense officials. The public survey results in Figure 1A aligned with the findings of that smaller group: maintaining civilian authority was the most important guiding principle for reform (with the lowest average score, closest to 1), and ensuring the quality of military advice was a close second. These priorities align fully with those of the original Goldwater-Nichols reform effort and conform well to traditional and contemporary civil-military relations literature, particularly when also accounting for the high priority respondents accorded to ensuring the independence of military advice, which ranked fourth. As Figure 1B demonstrates, the strength of agreement on putting first priority on maintaining civilian control was significant, with close to 400 respondents selecting it first.
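
For readers interested in the mechanics, the short Python sketch below illustrates how an average-rank score like the one in Figure 1A can be computed. The data here are hypothetical, and this paper does not specify the exact CSIS tabulation method; the sketch is simply one plausible implementation of "lower mean rank means more important."

    # Minimal sketch with hypothetical data; not the actual CSIS tabulation.
    # Each respondent rank-orders the principles, with 1 = most important;
    # the principle with the lowest mean rank is the most important overall.
    from statistics import mean

    responses = [
        {"civilian authority": 1, "quality of advice": 2, "independence of advice": 3},
        {"civilian authority": 2, "quality of advice": 1, "independence of advice": 4},
        {"civilian authority": 1, "quality of advice": 3, "independence of advice": 2},
    ]

    principles = responses[0].keys()
    avg_rank = {p: mean(r[p] for r in responses) for p in principles}

    # Sort ascending: the lowest average rank comes first.
    for principle, score in sorted(avg_rank.items(), key=lambda kv: kv[1]):
        print(f"{principle}: {score:.2f}")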

figure-1a

figure-1b

Figures 1C and 1D provide demographic breakdowns for Question 1 respondents and demonstrate some interesting differences in perspective regarding the most important principles for reform. Once again, the lower the rating (closer to 1), the more important a group found the principle. Figure 1C indicates that legislative branch respondents accorded relatively less priority to principles relating to military advice than did other respondents. Conversely, legislative branch respondents were noticeably more likely than the other three groups to prioritize increasing the efficiency and effective management of DoD systems. Finally, respondents who identified themselves as "other," generally coming from backgrounds outside of government, ranked improving strategy formulation and contingency planning highest, with maintaining civilian authority a close second. Military respondents ranked ensuring the quality of military advice highest, just slightly above maintaining civilian authority.

figure-1c

figure-1d

Figure 1D shows the breakdown of military respondents across the services. It demonstrates clear differences of opinion on the most important principles between, on the one hand, the Army and the Air Force, which ranked the quality of military advice as the most important principle, and, on the other hand, the Marine Corps and the Navy, which ranked maintaining civilian control highest. Also of interest in Figure 1D is the relatively higher priority Air Force respondents placed on the principle of balancing military "supply" and "demand." This may reflect the Air Force's position as the service on the slowest readiness recovery path and facing the most significant modernization bow wave.

Question 2: Opportunities for Reform

941 people answered this question in whole or in part.

The CSIS study team next wanted to ascertain respondents' views on how amenable certain key issues were to improvement through reform. The survey provided fifteen pre-selected choices, drawn from issues frequently cited in congressional testimony as needing reform. In this question, the higher the score, the more a respondent believed an area could be improved through reform. Figure 2A displays the results. Respondents believed the greatest opportunities for improvement lay, in priority order, in the acquisition process, the strategy process, and the programming and budgeting processes. Items seen as least likely to produce improvement through reform were the independence of the chairman of the Joint Chiefs of Staff, the civilian personnel system, and the civil-military balance.

figure-2a

Each respondent was asked to rank-order only five issues, providing useful data for gauging the strength of respondents' beliefs about the reform potential of various areas. Figure 2B displays the total number of times an issue was ranked by the respondent pool among their top five opportunities for reform. Displaying the data this way reveals any difference between how many people placed an issue in their "top 5" and how strongly the issue was ranked within that "top 5." Such a gap appears to exist for strategy, which Figure 2B shows receiving the fourth-highest total of votes but Figure 2A shows as the second-highest-scoring opportunity for reform. Taking these two pieces of data together, we can infer that while fewer people selected strategy as a "top five" area of opportunity, those who did ranked it as highly opportune; it was likely the highest-ranking choice for a dedicated subset of respondents.
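
The distinction between the two figures can be made concrete with a small Python sketch, shown below with hypothetical data (the paper does not publish the raw responses or the exact scoring formula). It computes both measures for each issue: how many respondents placed it in their top five (the Figure 2B measure) and how highly those selectors ranked it (which drives a Figure 2A-style score). An issue can trail on the first measure while leading on the second, as we infer for strategy.

    # Minimal sketch with hypothetical data; not the actual CSIS scoring.
    from statistics import mean

    # Each respondent lists up to five issues in priority order (index 0 = rank 1).
    top_fives = [
        ["acquisition", "budgeting"],
        ["strategy", "acquisition"],
        ["acquisition", "budgeting"],
        ["strategy", "budgeting"],
        ["acquisition", "budgeting"],
    ]

    issues = {issue for resp in top_fives for issue in resp}
    for issue in sorted(issues):
        ranks = [resp.index(issue) + 1 for resp in top_fives if issue in resp]
        # len(ranks) is the Figure 2B-style count; mean(ranks) shows how
        # strongly the issue is ranked by the subset that selected it at all.
        print(f"{issue}: selected {len(ranks)} times, mean rank {mean(ranks):.2f}")

In this toy data, strategy is selected least often but always ranked first by its selectors, the same pattern the survey data suggest.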

figure-2b

The demographic breakdown of responses to the question on reform opportunities displays the most significant variation among groups anywhere in the survey. As Figure 2C demonstrates, legislative branch respondents were noticeably more likely than other groups to rank the efficiency of DoD and the acquisition process as opportunities for reform. Similarly, those in the military or outside of government ("other") were noticeably more likely to identify strategy as an area ripe for reform than were the legislative and civilian executive branch respondents. For their part, executive branch civilians saw relatively greater opportunity for reform in the interagency process, civil-military balance, and civilian personnel system than did others. True commonality of viewpoint among the groups was evident in only three areas: the modest opportunity value of reforms to the joint requirements process, to the quality of military advice, and to DoD programming and budgeting. There is a similar amount of divergence in Figure 2D, which displays the breakdown of military respondents by service. Only on civilian personnel reform and the interagency process is there fairly general agreement across the military respondents as to the potential opportunity for reform. The Navy is significantly more hopeful about reforms to the acquisition and programming and budgeting processes than the other services; the Army and Marine Corps are more hopeful about reforms to strategy and contingency/war planning.

figure-2c

figure-2d

Question 3: Strength of Institutions

795 people answered this question in whole or in part.

The third survey question asked respondents to rate the current strength of key leaders and institutions against the needs of the Department of Defense. The categories were the secretary of defense, the chairman of the Joint Chiefs of Staff, the combatant commanders, the service chiefs, the service secretaries, and the National Security Council staff. A score of 5 was identified as "just right" in terms of strength. As Figure 3A shows, all categories of DoD leaders and institutions scored close to the "just right" mark, with the chairman of the Joint Chiefs of Staff, service chiefs, and service secretaries falling slightly on the "too weak" side, the secretary of defense scoring a nearly perfect "just right," and the combatant commanders falling slightly onto the "too strong" side. The most interesting result, however, came on the National Security Council staff, which respondents found to be unequivocally "too strong," scoring nearly 6.9. Even more interesting, Figures 3B and 3C show that this perception of a "too strong" NSC staff is shared across all of the demographic groups, including across the military services, and is expressed most strongly by respondents identifying themselves with the legislative branch. That group judged all other key actors and institutions as slightly too weak, with the service secretaries falling furthest below "just right." Most other response groups were relatively uniform in their opinions, although Marine Corps respondents were noticeably more concerned that service secretaries are "too strong."
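
The scale mechanics are worth making explicit. The brief Python sketch below, using hypothetical ratings since the raw data are not published here, shows how a mean score is read against the "just right" midpoint of 5: the signed distance from 5 gives both the direction and the size of the perceived imbalance.

    # Minimal sketch with hypothetical ratings; 5 = "just right,"
    # below 5 = "too weak," above 5 = "too strong."
    from statistics import mean

    JUST_RIGHT = 5

    ratings = {
        "secretary of defense": [5, 5, 4, 6],
        "NSC staff": [7, 7, 6, 8],
    }

    for institution, scores in ratings.items():
        deviation = mean(scores) - JUST_RIGHT
        label = ("too strong" if deviation > 0
                 else "too weak" if deviation < 0 else "just right")
        print(f"{institution}: mean {mean(scores):.1f}, {label} by {abs(deviation):.1f}")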

figure-3a

figure-3b

figure-3c

Question 4: Size of Institutions

783 people answered this question in whole or in part.

Whereas Question 3 focused on the strength of institutions relative to need, Question 4 sought to elicit respondents' views on the size of these institutions relative to their appropriate responsibilities. Some of the energy around defense reform comes from a concern that major defense institutions have become too large, impeding both effectiveness and efficiency. Figure 4A displays the results, which demonstrate a general concern that the key institutions of DoD and the NSC staff are too large relative to need.

figure-4a

Once again, the National Security Council staff is identified as the organization of greatest concern within the survey’s choices. As the demographic breakdowns in Figures 4B and 4C illustrate, this concern about the NSC staff’s size is shared fairly equally among all demographic groups, with the group of Marine Corps respondents seeing the largest gap between need and size (too large). Overall, military respondents were almost equally as concerned about the size of OSD as they were about the size of the NSC staff. Within that military cohort, Navy respondents were most likely to rate OSD as too large relative to need—even more likely than they were to rate the NSC staff as too large. Navy respondents were, as a group, also most concerned about the size of the Joint Staff, followed closely by Marine Corps respondents. The Air Force respondent group was consistently, though only modestly, less concerned about the size of national security institutions than those from other services.

figure-4b

figure-4c

Question 5: Effectiveness of the Interagency Process

782 people answered this question in whole or in part.

Prior work by CSIS scholars has identified significant interest among reform advocates in examining interagency reform as part of any defense reform review. The study team thus asked respondents to rate the effectiveness of the interagency system at seven tasks important to national security policy formulation: strategy development, policy evaluation and assessment, budgeting and resource management, policy development, policy implementation, use of military force/forces decisions, and the confirmation and political appointment processes. On a scale of "wholly ineffective" to "highly effective," respondents judged the interagency process as less than middling on all seven. Figure 5A displays the results.

figure-5a

Respondents were most positive about the White House-led process on confirmation and political appointments and least positive about interagency strategy development processes. Figure 5B demonstrates the relative unanimity of viewpoint on the interagency system’s effectiveness in the seven areas. Military respondents rated the system lower than did other groups in five out of the seven areas. Perhaps unsurprisingly, civilian executive branch respondents were more negative than other groups about the interagency processes for confirmation and political appointment and policy implementation, the two areas where this cohort is most likely to interact with the White House. There was very little variation in responses across the military services.

figure-5b

figure-5c

Question 6: Effectiveness of Congressional Oversight

782 people answered this question in whole or in part.

After turning the reform lens on the White House-led interagency system, the next institution examined in the survey was Congress. The study team asked respondents to rate the effectiveness of congressional oversight in seven key areas. These were strategy development, use of force authorization processes, policy development, confirmation and political appointment processes, evaluation/assessment of policy, budgeting and resource management, and routine oversight of DoD policies and programs, such as through the National Defense Authorization Act process. Figure 6A displays the results.

figure-6a

As with responses on the interagency system, respondents judged congressional oversight to be less than middling in its effectiveness across all categories. Congress did best on routine oversight, as in the passage of the annual authorization act. It was rated least effective in its oversight of strategy development, a domain in which Congress has imposed many statutory directives over time, from the establishment of the National Security Strategy, National Military Strategy, and quadrennial strategy documents to its periodic creation of independent strategy review panels, such as the National Defense Panel. As with the interagency effectiveness question, these responses were fairly uniform across the various demographic groups surveyed. Notably, this included the legislative branch respondents themselves: only in one of the seven categories, evaluating/assessing policy, did legislative respondents show a noticeably more positive perception of congressional oversight. Even then, the rating fell below the midpoint of the effectiveness scale. There was very little variation in responses across the military services.

figure-6b

figure-6c

Question 7: Role Clarity

765 people answered this question in whole or in part.

The final question in the survey concerned the degree of clarity about the appropriate roles that DoD actors play in nine major functional areas. This question grew out of a common assertion heard by the CSIS study team in pre-survey analysis: significant ambiguity over roles and responsibilities in DoD is contributing to unnecessary duplication, problems with speed and agility, the layering of bureaucracies, and possibly ineffective outcomes. The breadth of these perceptions was somewhat borne out in the survey data. As shown in Figure 7A, responses across the nine categories tended to cluster in the middle between "totally unclear" and "crystal clear" roles. The areas of greatest perceived ambiguity were planning for cross-regional operations, communications with others in the interagency, executing cross-regional operations, establishing and enforcing joint requirements, and approving programmatic and acquisition issues. The areas of greatest apparent clarity were the more cleanly described chain-of-command relationships: approval for the use of force and the execution of military action.

figure-7a

Figure 7B breaks down the data by demographic group. Those identifying themselves as legislative branch respondents were most concerned about role clarity, rating it lower in eight of nine categories than did the other demographic groups. Perhaps unsurprisingly, the legislative branch respondents were relatively more positive than the other groups in the final category: the clarity of roles in communicating with Congress. On issues pertaining to the planning and execution of cross-regional military operations, military respondents were slightly more positive about role clarity than others. In five of the categories, civilians in the executive branch were the most positive about role clarity, most notably on approval for the use of force, communications with the president, and approval for programmatic and acquisition issues. Figure 7C illustrates some modest variations in perspective among military service respondents. As with the larger response pool, on no issue did a respondent group come close to the extremes of "crystal clear" or "totally unclear" roles.

figure-7b

figure-7c

Conclusion

With the release of the House and Senate authorizing committees' FY 2017 National Defense Authorization Act bills, CSIS and the broader defense reform community are now turning their attention to the quality of the proposals put forward on Capitol Hill or generated in the Pentagon and elsewhere. As the above analysis demonstrates, the March 2016 public survey on defense reform provides insight into how various groups interested in the topic view the challenge set. Some of these views are borne out in the Hill action on defense reform thus far in this cycle. Notable examples are the inclusion in both chambers of proposed strictures on the size and role of the National Security Council staff, the SASC's concern over the size of defense institutions and organizations, and both HASC and SASC efforts to amend the defense strategy formulation process and the resulting document.

The survey data is thus illuminating in identifying likely areas of concern, consensus, and divergence on defense reform. Nevertheless, the CSIS study team stresses that there is a selection factor evident in the respondent cohort that tilts the outcomes toward the viewpoint of highly vested parties. This survey data should therefore be considered only as one input into a much broader assessment methodology for defense reform. As the CSIS study team moves forward to analyze proposed reforms put forward by Congress and the Obama administration, and to develop and refine its own proposals, it will draw not only on data in this survey, but also on its complementary assessment efforts.

This paper is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2016 by the Center for Strategic and International Studies. All rights reserved.
