Thomas Gazagnaire

Building Functional Systems from Cloud to Orbit. thomas@gazagnaire.org

Is Running Untrusted Code on a Satellite a Good Idea?

2026-02-25

The same conversation keeps happening. I explain what Parsimoni does (run third-party software on someone else's satellite) and the response is always some variant of: "I would never trust code I have not reviewed to run on my satellite." They are right to worry -- I would say the same thing in their position. And the security research community has been saying it loudly for several years now.

To understand where the gaps are, it helps to separate the two halves of the stack. Bus software (attitude control, power, thermal) runs on qualified RTOSes and is the satellite manufacturer's domain. Payload software (imaging, communications, data processing) increasingly runs on COTS (commercial off-the-shelf) processors with memory management units (MMUs) perfectly capable of enforcing memory isolation. For LEO missions on COTS hardware, the hardware is there. The software running on it does not use it.

I think three things are missing before anyone can safely run hosted payloads or patch software in orbit:

  1. Hardware-enforced memory isolation between payloads and flight software, either through a hypervisor (like Xen or KVM) or a separation kernel (seL4 has a formal proof of functional correctness on specific hardware configurations). Both use the MMU to ensure a compromised payload cannot reach the bus.

  2. A standard OTA deployment format so "install this application" means the same thing across different satellites: cryptographically signed, designed for 10-minute LEO pass windows and partial transfers.

  3. A metered runtime that enforces resource quotas (CPU, memory, storage, bus bandwidth) so one payload cannot starve the others. Resource requirements should be declarative, embedded in the deployment image, and verified before installation.
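To make the third point concrete, here is a minimal OCaml sketch of what a declarative resource manifest and pre-installation check could look like. The type and function names are illustrative, not SpaceOS APIs: the idea is only that quotas ship inside the deployment image as data, and the platform rejects an image whose declared needs exceed what is left.

```ocaml
(* Illustrative sketch: declarative resource quotas embedded in a
   deployment image, verified before installation. All names here are
   hypothetical, not part of any real flight-software API. *)

type quota = {
  cpu_percent : int;   (* share of one core, 0-100 *)
  memory_mib  : int;   (* resident memory ceiling *)
  storage_mib : int;   (* persistent storage ceiling *)
  bus_kbps    : int;   (* bus bandwidth ceiling *)
}

type manifest = {
  name    : string;
  version : string;
  quota   : quota;
}

(* [available] is what remains after accounting for payloads already
   installed; an image is accepted only if every declared quota fits. *)
let fits ~available m =
  m.quota.cpu_percent <= available.cpu_percent
  && m.quota.memory_mib <= available.memory_mib
  && m.quota.storage_mib <= available.storage_mib
  && m.quota.bus_kbps <= available.bus_kbps

let () =
  let available =
    { cpu_percent = 40; memory_mib = 256; storage_mib = 1024; bus_kbps = 500 } in
  let imaging =
    { name = "imager"; version = "1.2.0";
      quota = { cpu_percent = 25; memory_mib = 128;
                storage_mib = 512; bus_kbps = 200 } } in
  let greedy =
    { imaging with name = "greedy";
      quota = { imaging.quota with memory_mib = 512 } } in
  assert (fits ~available imaging);
  assert (not (fits ~available greedy));
  print_endline "quota checks passed"
```

Because the check runs on declared data rather than observed behaviour, it composes with the runtime enforcement above: admission control decides whether a payload may be installed, and the metered runtime holds it to the same numbers afterwards.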

The tools to address much of this exist in the terrestrial world: memory-safe languages (roughly 70% of CVEs in large C/C++ codebases are memory-safety issues, a figure consistent across Microsoft, Google, and Android since 2019), verified cryptography, minimal single-purpose OS images. These are not theoretical: Google cut Android memory-safety vulnerabilities by 76% in six years by writing new code in memory-safe languages, and AWS runs millions of serverless workloads on Firecracker, a 50,000-line Rust VMM that boots in under 125 ms. None of these have made it into flight software yet, even as the number of spacecraft in orbit (and the attack surface) has grown sharply. Space Odyssey examined three satellites and found vulnerabilities in all of them, and those are only the ones someone bothered to look for.

[Figure: Mass launched to orbit (blue) versus publicly disclosed satellite software vulnerabilities (red bars, from CVE/NVD and conference disclosures), 2000-2025. The gap between the curves is the problem: launch volume has exploded, security scrutiny has barely started.]

The evidence is mounting. None of the Space Odyssey satellites had ASLR, stack canaries, or DEP; one had no authentication on its telecommand interface. The Viasat/KA-SAT attack in February 2022 disrupted Ukrainian communications and 5,800 German wind turbines, making the threat operational (Boschetti, Gordon and Falco, ASCEND 2022). At Black Hat 2025, a vulnerability assessment of NASA's core Flight System (cFS) found remote code execution, path traversal, and denial-of-service, all from textbook C memory bugs. NASA's own CryptoLib, the reference implementation of CCSDS encryption, has had recurring buffer overflows in its parsing code (CVE-2025-29909, CVSS 9.8). A memory-safe language eliminates these classes of bug by construction.
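The "by construction" claim is worth making concrete. In OCaml, every array, string, and byte-buffer access is bounds-checked, so the out-of-range read behind a parser overflow like CVE-2025-29909 raises an exception instead of reading adjacent memory. The toy parser below is hypothetical (it is not CryptoLib code), but the mechanism it demonstrates is the language's own:

```ocaml
(* Illustrative only: a frame parser trusts an attacker-controlled
   length field. In C this pattern reads past the buffer; in OCaml,
   Bytes.get raises Invalid_argument on any out-of-bounds index. *)

let parse_frame (frame : bytes) =
  let claimed_len = 64 in  (* length field taken from the frame header *)
  try
    (* Attempt to read the byte the length field points past. *)
    ignore (Bytes.get frame claimed_len);
    "read ok"
  with Invalid_argument _ ->
    (* The overrun is caught at the exact access, not after corruption. *)
    "overrun rejected"

let () =
  let short_frame = Bytes.create 16 in  (* smaller than claimed_len *)
  print_endline (parse_frame short_frame)
```

The failure mode changes from silent memory corruption, which an attacker can often steer toward code execution, into a defined exception the flight software can handle as an ordinary fault.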

The pattern across all of these is consistent: space software engineering has a safety-versus-security bias (Pavur and Martinovic, J. Cybersecur. 2022; Falco, JAIS 2019). Systems are designed for availability and determinism (watchdogs, radiation tolerance, FDIR). Integrity and confidentiality are secondary. This is not irrational: a satellite that loses attitude control is a more immediate problem than one that leaks telemetry. But it leaves software stacks undefended against adversarial threats. Jero et al. (NDSS SpaceSec 2024) proposed a defence-in-depth taxonomy -- secure boot, memory protection, authenticated updates, compartmentalisation -- that overlaps substantially with the three gaps above.

So why does the current stack not provide the three things above? Most small-sat payload software runs on VxWorks, RTEMS, embedded Linux (Yocto, Buildroot), or bare metal. The RTOS path typically runs everything in a single address space: the hardware has an MMU, but most CubeSat missions do not enable memory protection because it requires rethinking memory management, interrupt handling, and DMA buffer allocation. This is exactly the architecture Space Odyssey found vulnerable. Embedded Linux gives you process isolation, but hardening it for multi-tenancy and certifying it for flight remains a substantial undertaking. ESA's OPS-SAT demonstrated deploying new applications before its re-entry in 2024, but it remains the exception.

The isolation and metering techniques are not new. Cloud infrastructure solved analogous problems years ago, and avionics systems (ARINC 653, DO-178C) have their own qualified solutions. The gap is in LEO payload software, where neither stack fits: the cloud stack assumes abundant resources, the avionics stack assumes a single operator and a multi-year qualification budget. I am not going to minimise how much work qualification (under ECSS-Q-ST-80C or equivalent) requires. Calabrese, Kavallieratos and Falco (SCITECH 2024) emulated a cyberattack from a hosted payload on OPS-SAT that compromised the bus, demonstrating why isolation matters. But a real multi-tenant satellite also needs FDIR (Fault Detection, Isolation, and Recovery) integration, thermal management, power budgeting, and link budget allocation. I am focusing on the software infrastructure, but I do not want to pretend the problem ends there.

The regulatory pressure is new. The EU Cyber Resilience Act is pushing toward stronger isolation and update mechanisms. ENISA is calling out the space sector. The White House called for memory-safe languages in critical infrastructure in 2024, a recommendation that applies to flight software as much as to anything on the ground. OCaml, the language Parsimoni and Tarides use, is memory-safe in its default fragment. And the hardware is finally there: radiation-tolerant processors with enough compute for on-board processing are commercially available at CubeSat price points. The missing piece is the software infrastructure.

These are the problems I co-founded Parsimoni to work on. Our payload software platform, SpaceOS, builds on the OCaml ecosystem that Tarides maintains. Our first payload reached orbit on SpaceX Transporter-13 in March 2025. If you are building payload software and running into these constraints, or if you are an operator considering hosted payloads, we would like to hear from you: get in touch.

References