F-35 Officials Prove Need for Cyber Testing by Cancelling One

Photograph of F-35 with Computer Code overlaid on the aircraft.
(Modified U.S. Air Force photo.)

Realistic weapon testing has come under assault yet again.

The troubled F-35 recently hit another snag when, as first reported by Politico, the Joint Program Office (JPO) refused to proceed with the required cybersecurity tests of the F-35’s massive maintenance computer, tests needed to determine the system’s vulnerability to hackers. The JPO argued that such realistic hacker tests could damage the critical maintenance and logistics software, thereby disrupting flights of the approximately 100 F-35s already in service. That argument simply raises obvious and disturbing questions about what could happen in combat. On the broader question of how the DoD buys weapons today, this is a clear demonstration of the folly of approving production of expensive systems long before they have been fully designed and thoroughly tested, a now-common practice on almost all major defense procurements.

The scheduled cyber tests target the vulnerability of the F-35 Autonomic Logistics Information System (ALIS). According to Lockheed Martin’s website, ALIS “integrates a broad range of capabilities including operations, maintenance, prognostics and health management, supply chain, customer support, training and technical data.” ALIS is designed to be a “single, secure information environment” that connects the plane’s on-board failure diagnostics with its maintenance management and the logistics supply system. In theory, ALIS would identify a broken part, order a replacement through the logistics system, and tell the maintenance crews what to fix. Cyber tests are particularly important for the F-35, which is commonly referred to as a “flying computer.” The plane has approximately 30 million lines of software code controlling all of the plane’s functions, from moving flight surfaces to creating images in its infamous $600,000 helmet. All this is tightly integrated with the ALIS program, which many consider to be the plane’s largest vulnerability. Should an enemy hack the ALIS system successfully, they could disable F-35 systems in combat, cause disastrous crashes, or ground the entire fleet.
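The diagnose-order-notify loop described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the class names, the part number, and the fault report are invented for this sketch and bear no relation to the real ALIS software; the point is only to show the workflow the article describes.

```python
# Hypothetical sketch of the diagnose -> order -> notify workflow the
# article attributes to ALIS. None of these names are the real ALIS API.
from dataclasses import dataclass, field


@dataclass
class FaultReport:
    """A single fault flagged by the aircraft's on-board diagnostics."""
    part_number: str
    description: str


@dataclass
class MaintenanceSystem:
    supply_orders: list = field(default_factory=list)
    work_orders: list = field(default_factory=list)

    def process_fault(self, fault: FaultReport) -> None:
        # Step 1: on-board diagnostics identify a broken part (the input).
        # Step 2: order a replacement through the logistics supply system.
        self.supply_orders.append(fault.part_number)
        # Step 3: tell the maintenance crew what to fix.
        self.work_orders.append(f"Replace {fault.part_number}: {fault.description}")


# Example: a (fictional) actuator fault flows through the system.
alis = MaintenanceSystem()
alis.process_fault(FaultReport("EHA-221", "elevon actuator out of tolerance"))
print(alis.supply_orders)  # ['EHA-221']
print(alis.work_orders)    # ['Replace EHA-221: elevon actuator out of tolerance']
```

The tight coupling shown here, where one system both orders parts and directs maintenance, is exactly why the article treats ALIS as a single point of failure: compromise the hub and every downstream step is suspect.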

The ALIS software and hardware have seen their fair share of problems already. Last year, the Joint Strike Fighter program executive officer, Lt. Gen. Christopher Bogdan, had to admit that the software was “way behind.” Maintenance crews supporting the Marine Corps’ F-35 demonstration aboard the USS Wasp this summer found themselves going off base to transfer ALIS computer files to their laptops over a commercial Wi-Fi network when the ALIS system proved incapable of handling the massive data files. Elsewhere, maintenance personnel report that 80 percent of the issues identified by ALIS are “false positives,” reporting parts as broken when they were not. Determining which ALIS reports are real and which are not is a time-consuming process for maintenance crews, adding significantly to their workloads when they are already overburdened by the F-35’s significant reliability shortfalls.

Realistic cyber testing is required of all military systems “capable of sending or receiving digital information,” according to a 2014 memorandum from the Department of Defense’s top weapons tester. “The cyber threat has become as real a threat to U.S. military forces as the missile, artillery, aviation, and electronic warfare threats which have been represented in operational testing for decades,” wrote Dr. Michael Gilmore, Director of Operational Test & Evaluation. “Any data exchange, however brief, provides an opportunity for a determined and skilled cyber threat to monitor, interrupt, or damage information and combat systems,” he added.

Dr. Gilmore prescribed that testing of such systems be completed in two phases. The first is an internal assessment by the program’s designers, who attempt to identify potential problems and security gaps through an “overt and cooperative examination to identify all significant vulnerabilities and the risk of exploitation of those vulnerabilities.”

The second phase brings in outside “Red Teams” to simulate hacker attacks on the system to identify vulnerabilities. DOT&E uses adversarial teams certified by the National Security Agency to “act as a cyber aggressor presenting multiple intrusion vectors consistent with the validated threat.” Tests of this kind are often referred to as penetration testing, or “pentesting,” in civilian circles. By using highly skilled teams of computer hackers to break into the system, the combat user, weapons buyer, and designer learn if and how the system can be disrupted or exploited—and whether its vulnerabilities can be fixed.
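One elementary building block of such testing can be illustrated in a few lines of Python: probing a host for open TCP ports, the kind of reconnaissance any penetration test begins with. This is a deliberately trivial sketch, nothing like the multi-vector attacks an NSA-certified Red Team mounts, and the host and port range are placeholders; scan only machines you are authorized to test.

```python
# Minimal port-scan sketch: the reconnaissance step of a penetration test.
# This is an illustrative toy, not a Red Team tool. Only scan hosts you
# are authorized to test -- here, the local machine.
import socket


def open_ports(host: str, ports: range, timeout: float = 0.2) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection, an errno otherwise.
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found


if __name__ == "__main__":
    # Survey the well-known ports on the local machine.
    print(open_ports("127.0.0.1", range(1, 1025)))
```

Each open port a scan like this finds is a potential “intrusion vector” in DOT&E’s terms; a real Red Team would then probe the services behind those ports for exploitable flaws.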

This is not merely a theoretical threat. The Department of Defense admitted in 2013 that a foreign power had hacked into unclassified F-35 subcontractor systems and stolen large amounts of sensitive information about the aircraft. The DoD would not say at the time which foreign power stole the data, but earlier this year, documents released by Edward Snowden confirmed the Chinese stole the information from Lockheed Martin in 2007.

As Shane Harris described in his book @War, the Air Force’s investigation into the breach was initially resisted by the F-35’s prime contractor, Lockheed Martin. Investigators were only able to determine that the company’s networks had been breached repeatedly after the Air Force generals in charge of the program at the time insisted that Lockheed and its subcontractors cooperate. The stolen information included vulnerabilities of the aircraft and its software.


The F-35 program office may have inadvertently confirmed the gravity of the concerns about software vulnerability with its statements regarding the testing delay. A program spokesman said the office “did not initially approve a cyber-vulnerability test due to the lack of a risk assessment related to operational F-35 assets.” In other words, the office fears the tests could end up disrupting real-world flight operations of the F-35s already in service. Left unsaid is whether the program office would rather have operations disrupted now by friendly testers or later by hackers when the planes are engaged in combat.

Concurrency Increasing Software Risks and Vulnerabilities

This speaks to one of the major, fundamental failures with the F-35 program: its unprecedented level of concurrency. Concurrency is the overlapping of development, testing, and production in an acquisition program. Advocates of the strategy claim it is a way to shorten the time necessary to field a weapon system. In reality, concurrency has historically slowed down the acquisition process and greatly increased costs.

Highly concurrent programs increase the risk that systems built early in the process will require expensive fixes or retrofits after problems are identified during subsequent testing. The Under Secretary of Defense for Acquisition, Technology, and Logistics reported to Congress that concurrency costs for the F-35 program last year were $1.65 billion. These costs include “recurring engineering efforts, production cut-in, and retrofit of existing aircraft.” The report hardly painted a flattering picture of the practice.

Concurrent software development issues are hardly new. Frank Conahan, an assistant comptroller with the then-named General Accounting Office, warned against the practice in testimony before the Senate Armed Services Committee in 1990. Even then, nearly a decade before the Joint Strike Fighter program began, Mr. Conahan correctly identified software development as one of the biggest risks to success in highly concurrent programs. “If the software doesn’t work, then the weapon system as a whole is not going to work the way it should,” he said.

The practice is becoming increasingly entrenched for several reasons. Defense contractors and the Pentagon tend to understate costs and overstate performance, so they have a strong incentive to spread subcontracts across as many congressional districts as they can (a practice known as “political engineering”) and to sell the Pentagon as many units as possible before a program’s shortcomings become obvious to everyone. Those with a vested interest in the program then have a great deal to lose if a system does not perform well during testing. A recent example is the now-famous dogfight test between an F-35 and the older F-16 the F-35 is designed to replace. The F-16 performed much better, prompting many to question the value of the entire Joint Strike Fighter program.

But because the F-35 is already in multibillion-dollar production employing thousands of people in hundreds of congressional districts, the plane has a great deal of political support. At least, that is the image Lockheed Martin wishes to cultivate. Parts of the aircraft are built in factories all across the country before eventually arriving in Fort Worth for final assembly. Lockheed Martin says the F-35 relies on suppliers from 46 states and provides an interactive map touting this fact. The reality is the majority of the work is done in only two states, California and Texas. Several states counted in the 46 have twelve or fewer jobs tied to the F-35. Still, there are precious few politicians willing to cast a vote that will be portrayed as “killing jobs” when campaigning for reelection.


A much better way of doing business is known as “fly before you buy,” the almost universal buying practice in commercial, non-defense procurement. Former Director of Operational Test & Evaluation Tom Christie says when done properly it “will demand the demonstration, through actual field testing of new technologies, subsystems, concepts, etc. to certain success criteria before proceeding at each milestone, not just the production decision.” In other words, acquisition decisions can be made based on performance achieved rather than capabilities hoped for.

The military services and defense contractors have a long history of working and lobbying to avoid realistic operational testing of new weapons systems. A common claim is that testing of this kind is too expensive and adds unnecessary delays to an already lengthy weapons acquisition process. The most recent industry effort along these lines resulted in a provision in the National Defense Authorization Act requiring DOT&E to “ensure that policies, procedures, and activities implemented by their offices and agencies in connection with defense acquisition program oversight do not result in unnecessary increases in program costs or cost estimates or delays in schedule or schedule estimates.” These claims, however, are false. The Government Accountability Office (GAO) recently released an audit showing that operational testing does not cause significant cost increases or schedule delays in major weapons programs.

The Pentagon and defense contractors will continue to avoid independent, realistic testing out of their own self-interest. The GAO said it well in its recent report: “postponing difficult tests or limiting open communication about test results can help a program avoid unwanted scrutiny because tests against criteria can reveal shortfalls, which may call into question whether a program should proceed as planned.” This is why Congress created the independent DOT&E in 1983 with broad, bipartisan support (the amendment creating the office passed 95-3 in the Senate)—one of the most important and lasting achievements of the military reform movement of the 1980s. To this day, the office provides a vital service, strengthening national security and protecting the men and women in combat who must actually use the equipment the Pentagon buys.


By: Dan Grazier, Jack Shanahan Military Fellow

Dan Grazier is the Jack Shanahan Military Fellow at the Project On Government Oversight
