What does Cyber really mean for AFVs? We all hear about grey zone boogeymen, meddling elections and attacking IT systems, but is there anything actually at risk in the 'Cyber' domain in the cold hard steel of an AFV? Yes, potentially quite a lot as it turns out.
Cyber is a bit of an amorphous bogeyman term that often just means 'computers'. But cyber attack, and resilience against cyber attack, is a real thing with very tangible implications for AFVs. As recently as 2020 the US Army stated that 'Adversaries demonstrated the ability to degrade select capabilities of the [Stryker Dragoon] when operating in a contested cyber environment' and began taking proactive steps to increase the resilience of fielded platforms. These weren't loopholes found in tests, these were operational vehicles in Europe being attacked, successfully.
"Tests Revealed that Most Weapon Systems Under Development Have Major Vulnerabilities, and DOD Likely Does Not Know the Full Extent of the Problems" US GAO, 2018
This post is an attempt to outline some more tangible facts around the nebulous concept of cyber and how it relates, in very real physical terms, to the AFV domain. It cribs very heavily from a range of experts, including an excellent 2019 BAE Systems paper on demystifying cyber in the AFV domain.
Note: I'm not an IT guy, nor a cyber dude, so apologies for anything wildly inaccurate ahead. I've had a good crack at researching this horrendously complex topic, and welcome any cybermonkeys to message me and I'll gladly make edits accordingly.
What is 'Cyber' and 'Cyber Resilience'?
A brief exploration of terminology, which is never entirely clear when it comes to nebulous concepts like cyber. In the context of this post, cyber is the digital domain - any interaction with the digital side of an AFV. Computers, essentially. To get from cyber to cyber resilience, we need a few intermediary steps.
A cybersecurity vulnerability is a weakness in a system that could be exploited to gain access or otherwise affect the system’s confidentiality, integrity, and availability. It is the door through which access may be gained, whether forced or inherently open to attack.
A cybersecurity threat is anything that can exploit a cyber vulnerability to harm a system, either intentionally or by accident. This is a combination of individuals or entities with intent, and the capabilities to carry out the attack.
Cybersecurity risk is a function of the threat (the intent and capabilities), vulnerabilities (whether inherent or introduced), and consequences (fixable or fatal).
The concept of cyber resilience, then, is the ability to continuously carry out desired functions and outcomes whilst mitigating the risk of, and defending against, attacks on your cyber vulnerabilities.
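The risk relationship above can be made concrete with a toy scoring sketch. This is purely illustrative (the function name and 0-to-1 ratings are my own invention, not from any assessment framework), but it captures the key property of risk as a function of threat, vulnerability and consequence: remove any one factor entirely and the risk collapses to zero.

```python
def cyber_risk(threat, vulnerability, consequence):
    """Toy model: each factor rated 0.0 (negligible) to 1.0 (severe).
    A simple product captures that risk requires ALL THREE factors to
    be present; real assessments use structured frameworks, not a
    single multiplication."""
    return threat * vulnerability * consequence

# A capable, motivated adversary (0.8) facing a known open vulnerability,
# say an unsecured test port (0.9), with mission-ending consequences (1.0):
print(round(cyber_risk(0.8, 0.9, 1.0), 2))  # high risk

# The same adversary, but with no viable vulnerability to exploit:
print(cyber_risk(0.8, 0.0, 1.0))            # zero risk
```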
While the world moves rapidly to keep up with the developing cyber landscape, the military are often a few steps behind. This is largely a product of the monstrously slow and unresponsive wheels of the processes around infrastructure and equipment procurement, which means changing designs, processes and hardware is rarely something that can be done with agility, leading to vulnerabilities being widely propagated and slow to close.
A US GAO report in 2018 on weapon system cybersecurity had a fairly damning outlook, stating that most defence systems under development were likely vulnerable, and that the DoD probably did not know the full extent of those vulnerabilities. The rapid acceleration of autonomy and connectivity that defence and the private sector love so much is the exact thing rapidly increasing the risk of cyber-attack, and massively increasing the potential consequences of such an attack.
The main way at present to mitigate cyber-attacks is in securing the supply chain so as to avoid vulnerabilities being exploited in hardware before it is installed, or exploiting gateways into the systems in the first place. Once there are vulnerabilities, the sluggishness and complexity of modifying defence systems after contract award and design finalisation makes it extremely unlikely they will be closed quickly, easily or cheaply.
How can an AFV be attacked, and what are the defences?
It's easy to look at an AFV and suggest that it is largely immune to cyber vulnerability. It is a vehicle, not a server rack in a building, and cyber-attacks as far as we hear about them are things you direct at a digital structure or a server, not something physical like a tank, right?
Well, no. Aside from the obvious point that a modern AFV is a hugely complex system of systems with significant digital complexity in its own right, the vast majority of AFVs are now fully networked systems linked into automated C4ISR and BMS networks that render the entire battlefield a dense network of targetable and exploitable nodes.
The route into an AFV is typically through its physical and digital connection pathways, and once in, a path to an exploitable point will be made. In broad terms, the hierarchical layers of an AFV that are open to exploitation are the platform, bus, assembly, board and chip levels, in descending order. The specific exploit may sit at any of these tiers, and meaningful resilience for the vehicle as a whole requires robust analyses, assessments and mitigating measures at all of them.
Simplistically, the risks are either that a chip is compromised and allows access up into boards, assemblies and buses, or that a gateway to the platform is compromised allowing access down into the vehicle buses and connected systems.
The Chip to Assembly Level
There are many attacks that exploit the inherent characteristics and design elements found in the chips that are used in the electronics of AFV systems. Many backdoors in modern chip designs don't require physical access to the systems in question; indeed, the overconfidence of chip designers that physical access is required has left many backdoors open. Some of the attack vectors at the chip level include:
Integrated Circuit (IC) Reverse Engineering (RE). By simply obtaining a physical example of a chip, critical elements of the entire circuit can be reverse-engineered and exploits and attacks devised. The design of the circuits, including their embedded firmware and hardware- or software-based cryptographic elements, is exploitable, including via micro-probing techniques as well as so-called circuit microsurgery to identify and bypass hardware and software locks in the chip. An RE process is an enormously complex challenge: stacked multilayer interconnected circuits of varying thickness, comprised of a range of metals, polymers and silicides including low-k dielectrics, fluorosilicate glass (FSG), phosphosilicate glass (PSG) and SiO2, must be delayered, mapped and analysed using techniques including scanning electron microscopy (SEM), transmission electron microscopy (TEM), plasma (dry) etching, wet etching, and polishing. Once the 'blueprint' of the chip is obtained, weaknesses or exploits can be devised, tested and deployed as part of a multi-layer cyber-attack, interfering with, damaging or taking control of chip functions. In essence, if you have the blueprint, you can analyse its weaknesses and design counters.
Silicon Malware. Many, if not most, chips contain intentional and unintentional backdoors and hidden functions such as admin and security controls or reset and power-down functions, so-called 'baked-in malware'. Many of these can be readily exploited without human or other physical interaction. A practical, publicised example of the ramifications of exploiting backdoors in the core architecture of a chip was the 2012 discovery of a backdoor in the Actel/Microsemi ProASIC3 A3P250 chip, at the time widely used across military and industrial applications and originally marketed as "having inherent resistance to both invasive and non-invasive attacks". As the abstract of an academic paper on the issue stated: "The backdoor was found amongst additional JTAG functionality and exists on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), our pioneered technique, we were able to extract the secret key to activate the backdoor, as well as other security keys such as the AES and the Passkey. This way an attacker can extract all the configuration data from the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property (IP) theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact they can be easily compromised or will have to be physically replaced after a redesign of the silicon itself."
Glitching. As the name suggests, glitching focuses on initiating faults and momentary interruptions into a target system, interfering with proper operation and potentially rendering subsystems inoperative for periods of time. This could range from transient power interruptions, corruption of clocks to prevent normal execution of otherwise inaccessible protected processing paths, or RF glitching via electromagnetic pulse to interfere with proper chip operation. Clock manipulation is a particularly impactful glitch in all applications, especially fire control and targeting systems.
Though not a cyber-attack, a good example of the grave implications of erroneous clock timing is the US Patriot air defence system in 1991. A flaw in the Patriot system meant that after 100 hours of continuous use, the clock would drift by around a third of a second. Sounds ok, but when you're trying to compute a fire solution with high velocity targets and interceptor missiles, that drift is hundreds of metres of misdirection in the sky.
Periodic reboots of the system would reset the drift, but users didn't get this instruction. The result: Scud missiles were detected, but the targeting radars looked in the wrong piece of sky for them and scrubbed them as clutter when nothing was found there. In one such case, at Dhahran in February 1991, 28 people were killed by a missile the system had dismissed. With modern AFVs using complex FCS not just to assist in accurate engagement but fundamentally to allow a weapon to be fired at all, inducing even a momentary clock glitch could render AFVs entirely impotent for the crucial moments required to defend themselves or prosecute an attack.
Inductive Bit-Flipping. Another attack vector at the chip level is inductive bit-flipping, whereby individual memory cells (bits) are manipulated and the resultant encoded system messages are observed. Seeing how flipping a bit affects the ciphertext can open the ability to circumvent security or turn functions off. This attack is especially effective where the attacker knows the format of a message even if they cannot read it. If they know the aforementioned timing element for an FCS calculation is located at a particular point in a string, they can target that part of the message with a bit-flip attack and corrupt it, leading to errors downstream. Error-correcting memory, digital signatures and other authentication tools are key to discerning when a message has been altered, so it can be scrubbed and sent again or other action taken.
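As a concrete illustration of the authentication defence, a sketch using Python's standard library: the attacker flips one bit at a known position in a message whose meaning they cannot read, and a keyed authentication tag (an HMAC here, standing in for whatever scheme a real system would use; the key and message format are invented for illustration) exposes the tampering.

```python
import hmac
import hashlib

key = b"shared-secret-key"  # hypothetical key held by sender and receiver
message = bytearray(b"FCS_TIMING=00123456")
tag = hmac.new(key, bytes(message), hashlib.sha256).digest()

# The attacker knows where the timing field sits in the message format and
# flips a single bit there, without needing to read or decrypt anything.
message[12] ^= 0b00000001

# The receiver recomputes the tag: any altered bit produces a mismatch,
# so the corrupted message can be scrubbed and requested again.
expected = hmac.new(key, bytes(message), hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # False: tampering detected
```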
Defending a system at the chip level is mainly about inherent protection at the integrated circuit design stage, and about ensuring security throughout the supply chain feeding components into the final circuit assemblies, such that an intact physical copy of the chip never falls into the hands of someone seeking to RE and map it, and design compromise cannot be introduced at the point of manufacture.
Secure Chip Design. This reflects a general need for chip manufacturers to account for current and projected attack types and attempt, wherever possible, to mitigate those attack vectors. This would include anti-tamper design elements to make RE an unviable course of action, as well as measures such as holding cryptography off-chip to prevent exploitation if a chip is obtained or otherwise accessed. It is equally crucial to secure the custody of chips, meaning the supply chain is secure in manufacture and delivery, and that the asset, whether a weapon generally or an AFV and its subsystems specifically, is never allowed to fall into a potential enemy's hands intact.
Emissions Management. To defend against side channel attacks and any malicious signals coming into the platform to trigger an exploit or otherwise interfere with system functions, it is critical to understand the nature of each signal being received and distributed within the vehicle, applying filtering against interference as appropriate. Such proactive management of emissions is an ongoing process that requires constant analysis of potential threats and vulnerabilities. An issue with highly proactive firewall and filter application is that robust protection requires a physical interface at nearly all line replaceable units (LRUs), creating a tangible burden on space, weight and power requirements. Beyond this, the more restrictive and cyber-safety-conscious systems become, the greater the risk they periodically intercept and block legitimate signals, introducing self-harming effects without any adversary input.
Active Chip Defence. More proactive measures to monitor, detect and disrupt attacks are constantly being worked on, and then can be introduced to systems. As mentioned, the lack of agility in defence procurement and fielding may prevent these techniques from protecting today's systems, but absolutely are at the heart of the next generation of systems.
Beyond the chip, the issues are broadly the same but happening at increasing levels of scale. Boards and assemblies can be similarly mapped and interfered with to exploit them, and protecting them is similarly a case of proactive measures in design (including software functions to monitor and identify erroneous behaviour), as well as securing of the supply chain to prevent insertion of vulnerabilities or compromised hardware.
One slightly different level is the bus level, buses being the communication pathways (either physical or digital) used by systems to pass data between themselves. As the trusted means of passing information and signals between systems, the bus offers a unique opportunity to compromise systems without having to access them or understand them in any way - simply corrupt, transform or otherwise disrupt the signal to them.
Any data travelling through a CAN or 1553 bus is, by inherent design of a bus-based system, visible to every system on that bus. If in possession of the interface control document, an adversary with access to the bus can send, intercept and manipulate messages from any hosted system in real time. This includes external message manipulation, as well as sensor and actuator data manipulation, all of which can have far-reaching consequences. These are called Man-in-the-Middle (MITM) attacks. A similar risk is that once in control of a system on the bus, the attacker can not just manipulate but also generate their own messages, which is called a Man-on-the-Middle (MOTM) attack.
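The broadcast property is easy to illustrate. A toy sketch, and emphatically not a real CAN or 1553 implementation (all names and frame IDs here are invented), of why a single compromised node on an unauthenticated bus can both sniff and spoof traffic:

```python
class Bus:
    """Toy broadcast bus: every frame is delivered to every other attached
    node, mimicking the inherent visibility of CAN/1553-style buses."""
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def send(self, sender, frame_id, payload):
        # Nothing authenticates WHO sent a frame: receivers trust the ID alone.
        for node in self.nodes:
            if node is not sender:
                node.receive(frame_id, payload)

class Node:
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, frame_id, payload):
        self.received.append((frame_id, payload))

bus = Bus()
fcs, attacker = Node("fire_control"), Node("compromised_lru")
bus.attach(fcs)
bus.attach(attacker)

# Sniffing: a legitimate frame is fully visible to the compromised node...
bus.send(fcs, 0x0CF, "turret_azimuth=112.5")
# ...and spoofing: the attacker injects a frame under the same trusted ID.
bus.send(attacker, 0x0CF, "turret_azimuth=000.0")
print(fcs.received)  # the FCS cannot tell the spoofed frame from a real one
```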
Defending a system at the bus level is based heavily around encryption of signals and creating an environment that is too volatile for exploits to persist in.
Encryption of signals conceptually defeats such man-in-the-middle attacks. However, the encryption key is held within the systems on the bus, which in this scenario have already been compromised, so the encryption may be defeated anyway, and for safety should certainly be treated as such.
A further defence is to mirror the approach taken by cloud-based services, using an agile defence that creates a "constantly evolving and dynamic attack surface" meaning the vehicle's systems are not static and homogeneous, but constantly shifting, cycling instances and flushing themselves so that even if compromised, it will be a short time before they are wiped clean, and the systems they interface with are constantly doing the same thing. This environment becomes extremely hard for a single exploit to propagate through the complete network and radically reduces the time that a threat can survive on the system, requiring constant effort to regain a footprint on the initial compromised system.
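A minimal sketch of that moving-target idea (the class and tick-based cycling are invented for illustration, not drawn from any fielded architecture): a subsystem that reprovisions itself from a known-good image on a fixed cycle bounds how long any implant can survive.

```python
class CyclingService:
    """Toy moving-target defence: the service periodically flushes itself
    back to a known-good image, so any foothold an attacker gains survives
    only until the next cycle."""
    def __init__(self, cycle_every_ticks):
        self.cycle_every = cycle_every_ticks
        self.compromised = False
        self.ticks = 0

    def tick(self):
        self.ticks += 1
        if self.ticks % self.cycle_every == 0:
            self.compromised = False  # reprovisioned clean: implant wiped

svc = CyclingService(cycle_every_ticks=5)
svc.compromised = True  # attacker lands an exploit at t=0
dwell = 0
while svc.compromised:
    svc.tick()
    dwell += 1
print(f"attacker dwell time: {dwell} ticks")  # bounded by the cycle period
```

The attacker's dwell time is capped by the cycle period regardless of how the exploit got in, which is exactly the property the cloud-style approach is after: the defender no longer has to find the implant, only to keep cycling faster than the attacker can re-establish a footprint.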
Introducing this sort of approach within an AFV is challenging, but it is a topic of active research and development that would increase resilience and create vehicles in which a single vulnerability is radically harder to exploit into whole-platform impact.
There are many more, and much more technical, approaches to bus defence, including intrusion detection (IDS) and intrusion prevention (IPS) resilience technologies that seek to detect sniffing, replay, injection and spoofing attacks, as well as cross-assembly authentication and authorisation techniques for dynamically establishing trust between systems on a bus, scaling trust based on prior behaviour, with lower trust and higher scrutiny applied to new systems and signals.
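As one concrete flavour of such detection techniques, a minimal sketch (hypothetical, not from any fielded IDS) of replay detection using per-frame monotonic counters: each frame ID carries a counter that must strictly increase, so a captured-and-replayed frame is flagged.

```python
class ReplayDetector:
    """Minimal bus-IDS idea: every frame ID carries a monotonically
    increasing counter; a frame whose counter fails to advance is
    flagged as a likely replay or injection."""
    def __init__(self):
        self.last_seen = {}

    def accept(self, frame_id, counter):
        if counter <= self.last_seen.get(frame_id, -1):
            return False  # stale counter: flag and drop the frame
        self.last_seen[frame_id] = counter
        return True

ids = ReplayDetector()
print(ids.accept("nav_position", 1))  # True: fresh frame
print(ids.accept("nav_position", 2))  # True: counter advanced
print(ids.accept("nav_position", 2))  # False: captured frame replayed
```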
The uppermost layer is attack and defence at the whole platform level, with the predominance of attacks targeting the subsystems responsible for on- and off-board communications and network management between the AFV and other AFVs or higher-level infrastructure. In a practical sense, this is the door that lets the attacker into the vehicle and to then exploit many of the lower-level attacks above. Common attacks at the platform level include:
Vehicle to Infrastructure (V2I). All contemporary AFVs constantly send and receive data to and from a broader infrastructure, and naturally this data pathway becomes a significant risk area. Many systems still transmit unauthenticated packets and often have limited checks for authenticity, integrity or validity, allowing an attacker to pivot from a successful attack on the overall infrastructure into a vehicle system. A good example of V2I-related cyber exploitation is the US Army's M1296 Stryker Dragoon vehicles that were 'hacked' by 'adversaries' whilst operating in Europe in 2018. “Adversaries demonstrated the ability to degrade select capabilities of the ICV-D when operating in a contested cyber environment,” said the US DOT&E, with unconfirmed reports that the 'hacks' impacted the vehicle's data and voice communication systems.
Vehicle to Vehicle (V2V). Similarly to V2I, the trusted data links between vehicles are open to exploitation on the same basis, often sharing unauthenticated data and allowing a rapid pivot from one compromised platform to another throughout the formation. With high-level data sharing, sensor fusion and other shared information systems now the norm for new AFVs, these systems are more software- and IT-dependent, and more networked, than ever before. Historically, efforts focused on securing networks rather than specific weapon systems or vehicles, which has surprisingly often left prolific vulnerabilities in the fielded systems themselves.
Radio Frequency (RF) Apertures. As well as targeting the specific V2I/V2V links by compromising a node and then using it to pivot onto another, the existence of RF apertures and communications protocols provide a gateway on all platforms to inject malicious data or otherwise disrupt a vehicle through its radios, Wi-Fi, Bluetooth and GPS receivers.
Test and Data Ports. Most systems include test ports for connecting during setup, maintenance and repair, and in many cases, these are unsecured, with their protection being the inherent inaccessibility of the hardware and the vehicle itself. Though requiring a physical interaction, plugging hardware implants into just one port could be sufficient to open a gateway and allow larger scale intrusion and V2I/V2V propagation of an attack. As with lower-level attacks, these test access points may also be in software as well as hardware and exploitable once access through an aperture has been achieved.
Malicious Maintenance. During its manufacture, in storage, in transit or during maintenance itself, an attacker can gain access to a system and begin to expand the attack through the vehicle systems and eventually other networked systems and infrastructure.
Defending a system at the platform level is around two key concepts, assuring a secure network in the V2I and V2V environments, and physical protection of the platform from malicious interaction. In both cases, this can be handled in large part by proper standards and accreditation of the systems, facilities, and supply chain.
Secure V2I/V2V. Self-evidently, the first step is around ensuring a secure V2I/V2V environment with full authentication and managed trust relationships at all levels, without the assumption of secure or safe spaces behind certain points in the network.
Secure ports and gateway management. All access points into the vehicle must be secured. Much as with the secure V2I/V2V connections, physical ports and digital gateways need authentication and secure key management and distribution, with a level of security embedded no matter the interface. Assumptions that maintenance or service interfaces can be left unrestricted, on the presupposition that only authorised individuals could physically reach those nodes, must be abandoned.
Secure the supply chain. One of the key areas of focus is securing the entire supply chain and assuring that there can be no entry gateway to exploit upstream of a vehicle being built. Preventing compromised components from entering or interfacing with the vehicle prevents the platform from being compromised.
Back to basics
All the above factors and defences are important but, if slightly anticlimactically, the main defence of AFVs from a cyber perspective is process compliance. Securing the supply chains, facilities and personnel stops the access that allows any of the exploits to occur in the first place (for the most part).
The risk is very rarely about jumping the air gap and 'hacking' into vehicles in the field, it's about compromising them at a more basic level very far from the battlefield in the supply chains and support facilities, then exploiting the vulnerabilities, characteristics and specifications found.
The effects of these hacks are also not the exciting Hollywood examples of taking remote control of the platform, but about disrupting systems and critical processes such that the vehicle becomes unable to complete its core functions, preventing it from completing its mission or simply being unable to shoot first.
At a higher level, the same realities apply, and the gravest threat to an AFV is low-level suppliers failing to properly secure their facilities and products, leading to a cascade of events that reach the vehicle. The impact of an undetected hack and manipulation of a supply system, resulting in a failure to distribute spares effectively to frontline units, would rapidly affect far more AFVs than any point weakness in the operational vehicles' systems.