Back in 2012, Anil Madhavapeddy and I
were building MirageOS as a library OS, and we soon had more
libraries than we could manage by hand. So I wrote
opam to handle the dependencies, and the
mirage tool, an embedded DSL to describe device trees and
auto-generate the build and wiring code.
Gabriel Radanne later refined and
formalised it as Functoria,
which made the correspondence between device graphs and OCaml
functors explicit.
F Prime does the same thing for
C++ flight software: as I described in a
previous post, both frameworks
decompose systems into typed components and generate the plumbing
that wires them together. The difference is that Functoria is an
embedded DSL in OCaml, while
FPP (F Prime's
modelling language) is a standalone DSL with a
formal specification.
The interesting thing about a standalone DSL is that it can target
multiple languages: the same .fpp files can generate C++ and OCaml.
So I wrote ofpp to test
this idea. It parses the complete FPP grammar and generates MirageOS
wiring code from FPP topology files. The generated code compiles
against the real MirageOS libraries, and NASA's reference fpp-check
tool accepts the same .fpp files unchanged. FPP's dependency graph
maps directly to OCaml functors, so the rest of the post walks
through that mapping.
ofpp is a prototype, not a replacement for the mirage tool or for
Functoria -- I built it to explore what a shared architecture
language between F Prime and MirageOS could look like.
The mapping
Here is UnixHelloKey, a unikernel that takes a --hello
command-line flag
(source):
FPP topology
passive component Unikernel {
  async input port start: serial
  param hello: string default "Hello World!"
}

instance unikernel: Unikernel base id 0

topology UnixHelloKey {
  instance unikernel
}
Generated main.ml
let hello_0 =
  let doc = Cmdliner.Arg.info ~doc:"hello" ["hello"] in
  Mirage_runtime.register_arg
    Cmdliner.Arg.(value & opt string "Hello World!" doc)

let start = lazy (Unikernel.start ~hello:(hello_0 ()))

let () =
  let t =
    let open Lwt.Syntax in
    (* Mirage_runtime init ... *)
    let* _ = Lazy.force start in
    Lwt.return ()
  in
  Unix_os.Main.run t; exit 0
The FPP instance unikernel resolves to the OCaml module Unikernel by convention (capitalised). The param hello declaration generates a Cmdliner term (hello_0) that becomes the --hello command-line flag, passed as a labelled argument to Unikernel.start.
$ dune exec examples/mirage/tutorial/hello-key/main.exe
2026-03-04T13:52:57-08:00: [INFO] [application] Hello World!
...
$ dune exec examples/mirage/tutorial/hello-key/main.exe -- \
    --hello='Bonjour MirageOS!'
2026-03-04T13:53:08-08:00: [INFO] [application] Bonjour MirageOS!
...
ofpp handles wiring only. Type safety comes from the OCaml module system: the generated functor applications must type-check against the MirageOS library signatures, so a miswired topology is a compile error.
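As a contrived sketch of that compile-time check (the NET and IP signatures below are invented stand-ins, not the real Mirage_net.S or Tcpip.Ip.S):

```ocaml
(* Toy stand-ins for device signatures; names are illustrative,
   not the real MirageOS module types. *)
module type NET = sig val mtu : int end
module type IP = sig val protocol : string end

module Eth : NET = struct let mtu = 1500 end
module Ip6 : IP = struct let protocol = "ipv6" end

(* The generated wiring applies each device functor to whatever
   instance the topology connected to that port. *)
module Stack (Ip : IP) = struct
  let description = "stack over " ^ Ip.protocol
end

module S = Stack (Ip6)          (* well-typed wiring *)
(* module Bad = Stack (Eth) *)  (* miswired: Eth has no [protocol],
                                   so this is a signature mismatch *)

let () = print_endline S.description
```

Uncommenting the miswired application makes the compiler reject the whole file, which is the property the generated main.ml relies on.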
config.ml vs config.fpp
In MirageOS, each unikernel has a config.ml that describes its
device dependencies using Functoria combinators. In FPP, each
unikernel has a config.fpp that describes the same dependencies
as a connection graph. Here is the network example -- a unikernel
that needs a TCP/IP stack:
Functoria (config.ml)
open Mirage

let main =
  main "Unikernel.Main" (stackv4v6 @-> job)

let stack = generic_stackv4v6 default_network

let () = register "network" [ main $ stack ]
FPP (config.fpp)
module Unikernel {
  passive component Main {
    async input port start: serial
    output port stack: serial
  }
}

instance unikernel: Unikernel.Main base id 0

topology UnixNetwork {
  import SocketStack
  instance stackv4v6
  instance unikernel
  connections Start {
    unikernel.stack -> stackv4v6.connect
  }
}
The Functoria DSL hides the stack implementation behind generic_stackv4v6. The FPP topology makes it explicit: import SocketStack pulls in a sub-topology that wires Udpv4v6_socket and Tcpv4v6_socket into Stackv4v6.Make. The connection unikernel.stack -> stackv4v6.connect declares that the unikernel depends on the stack.
The FPP version is more verbose. Functoria's combinators are more
concise for pure OCaml projects, and generic_stackv4v6 picks the
right implementation for each target automatically (FPP makes you
spell that out).
The payoff is that the topology is a standalone graph, not embedded
in OCaml. The same .fpp files can drive C++ code generation, and
the connection graph is available for visualisation and tooling.
The device catalogue
Device components live in a shared
mirage.fpp.
Here are three of the devices used in the UnixPing6 topology
shown below:
module Vnetif {
  passive component Make {
    import Mirage_net.S
    output port backend: serial
  }
}

module Ethernet {
  passive component Make {
    import Ethernet.S
    output port net: serial
  }
}

module Ipv6 {
  passive component Make {
    import Tcpip.Ip.S
    async input port connect: Ipv6Connect
    output port net: serial
    output port eth: serial
  }
}
Each FPP construct maps to a specific OCaml concept.
Output ports are functor arguments. Vnetif.Make has one
output port (backend), so its generated functor takes one
argument: Vnetif.Make(Backend). Ipv6.Make has two (net and
eth), so its functor takes two: Ipv6.Make(Net)(Ethernet). The
topology connections decide which instance fills each argument.
import declarations are module type constraints. import Mirage_net.S on Vnetif.Make tells ofpp to generate module type Net = Mirage_net.S in the output. The OCaml compiler then checks
that the concrete module satisfies that signature.
Port types encode connect signatures. Ipv6Connect is
defined elsewhere in mirage.fpp as port Ipv6Connect(conf: Ipv6Conf), so Ipv6.connect takes a labelled ~conf argument.
Positional parameters (like _0: string) generate positional
arguments instead.
The generated code for UnixPing6 uses all three:
(* from import declarations *)
module type Backend = Vnetif.BACKEND
module type Net = Mirage_net.S
module type Ethernet = Ethernet.S
module type Ipv6 = Tcpip.Ip.S
(* from topology connections + output ports *)
module Net = Vnetif.Make(Backend)
module Ethernet = Ethernet.Make(Net)
module Ipv6 = Ipv6.Make(Net)(Ethernet)
(* from connect port types + connection order *)
let* backend = Backend.connect () in
let* net = Net.connect backend in
let* ethernet = Ethernet.connect net in
let* ipv6 = Ipv6.connect net ethernet in
If a connection is missing, it does not compile.
A larger example
Here is UnixPing6, an IPv6 ping unikernel wired through a
virtual network
(source):
UnixPing6 topology, rendered by fprime-visual from the FPP connection graph.

FPP topology (config.fpp)
module Unikernel {
  passive component Main {
    async input port start: serial
    output port net: serial
    output port eth: serial
    output port ipv6: serial
  }
}

instance unikernel: Unikernel.Main base id 0

topology UnixPing6 {
  instance backend
  instance net
  instance ethernet
  instance ipv6
  instance unikernel
  connections Connect {
    net.backend -> backend.connect
    ethernet.net -> net.connect
    ipv6.net -> net.connect
    ipv6.eth -> ethernet.connect
  }
  connections Start {
    unikernel.net -> net.connect
    unikernel.eth -> ethernet.connect
    unikernel.ipv6 -> ipv6.connect
  }
}
The backend and net instances come from the shared mirage.fpp device catalogue (Backend is a Vnetif in-memory switch; Vnetif.Make creates a virtual Ethernet interface from it). The two connection groups (Connect, Start) become separate lazy bindings.
Generated main.ml
module type Backend = Vnetif.BACKEND
module type Net = Mirage_net.S
module type Ethernet = Ethernet.S
module type Ipv6 = Tcpip.Ip.S
module Net = Vnetif.Make(Backend)
module Ethernet = Ethernet.Make(Net)
module Ipv6 = Ipv6.Make(Net)(Ethernet)
module Unikernel = Unikernel.Main(Net)(Ethernet)(Ipv6)
open Lwt.Syntax
let connect = lazy (
  let* backend = Backend.connect () in
  let* net = Net.connect backend in
  let* ethernet = Ethernet.connect net in
  let* ipv6 = Ipv6.connect net ethernet in
  Lwt.return (net, ethernet, ipv6))

let start = lazy (
  let* (net, ethernet, ipv6) = Lazy.force connect in
  Unikernel.start net ethernet ipv6)

(* Mirage_runtime init, then: *)
let* _ = Lazy.force start in ...
ofpp reads the connection graph, sorts the dependencies, and emits the functor applications in the right order. The Backend → Net → Ethernet → Ipv6 chain mirrors the physical network stack.
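The ordering itself is a plain topological sort over the instance graph. A minimal sketch of the idea, assuming edges point from a user to the provider it depends on (an illustration, not ofpp's actual implementation):

```ocaml
(* Topological ordering of topology instances: each edge
   (user, provider) says [user]'s connect needs [provider]
   connected first, so providers are emitted before users. *)
let topo_sort (edges : (string * string) list) (nodes : string list) =
  let deps n =
    List.filter_map
      (fun (user, provider) -> if user = n then Some provider else None)
      edges
  in
  let visited = Hashtbl.create 16 in
  let order = ref [] in
  let rec visit n =
    if not (Hashtbl.mem visited n) then begin
      Hashtbl.add visited n ();
      List.iter visit (deps n);  (* dependencies first *)
      order := n :: !order
    end
  in
  List.iter visit nodes;
  List.rev !order

let () =
  (* Edges from the UnixPing6 Connect group. *)
  let edges =
    [ ("net", "backend");
      ("ethernet", "net");
      ("ipv6", "net");
      ("ipv6", "ethernet") ]
  in
  topo_sort edges [ "backend"; "net"; "ethernet"; "ipv6" ]
  |> String.concat " -> "
  |> print_endline
(* prints: backend -> net -> ethernet -> ipv6 *)
```

The DFS emits each instance only after its providers, which is exactly the Backend before Net before Ethernet before Ipv6 order in the generated main.ml.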
Sub-topology composition
Sub-topologies are shared via import. The DNS example imports the
socket stack and wires additional layers on top:
topology UnixDns {
  import SocketStack @< pulls in Udpv4v6_socket, Tcpv4v6_socket, Stackv4v6
  instance stackv4v6
  instance happy_eyeballs_mirage
  instance dns_client
  instance unikernel
  connections Connect_device {
    happy_eyeballs_mirage.stack -> stackv4v6.connect
  }
  connections Start {
    dns_client.stack -> stackv4v6.connect
    dns_client.happy_eyeballs -> happy_eyeballs_mirage.connect_device
    unikernel.dns -> dns_client.start
  }
}
The generated main.ml chains the socket stack connect, then
connect_device for Happy Eyeballs, then start for the DNS
resolver -- each connection group becomes a lazy binding that
forces its dependencies:
module type Udpv4v6_socket = Tcpip.Udp.S
module type Tcpv4v6_socket = Tcpip.Tcp.S
module type Stackv4v6 = Tcpip.Stack.V4V6
module type Happy_eyeballs_mirage = Happy_eyeballs_mirage.S
module type Dns_client = Dns_client_mirage.S
module Stackv4v6 = Stackv4v6.Make(Udpv4v6_socket)(Tcpv4v6_socket)
module Happy_eyeballs_mirage = Happy_eyeballs_mirage.Make(Stackv4v6)
module Dns_client = Dns_resolver.Make(Stackv4v6)(Happy_eyeballs_mirage)
module Unikernel = Unikernel.Make(Dns_client)
let connect = lazy (
  let* udpv4v6_socket = Udpv4v6_socket.connect ~ipv4_only:false ... in
  let* tcpv4v6_socket = Tcpv4v6_socket.connect ~ipv4_only:false ... in
  Stackv4v6.connect udpv4v6_socket tcpv4v6_socket)

let connect_device = lazy (
  let* stackv4v6 = Lazy.force connect in
  let* happy_eyeballs_mirage = Happy_eyeballs_mirage.connect_device stackv4v6 in
  Lwt.return (stackv4v6, happy_eyeballs_mirage))

let start = lazy (
  let* (stackv4v6, happy_eyeballs_mirage) = Lazy.force connect_device in
  let* dns_client = Dns_client.start stackv4v6 happy_eyeballs_mirage in
  Unikernel.start dns_client)
This is something Functoria's flat combinator model does not express cleanly: named sub-topologies that can be imported and reused across unikernels, with cross-boundary connections wired by the parent topology.
Target switching
In MirageOS, you write your application once and compile it for
different platforms without changing application code.
mirage configure -t unix wires in the kernel's TCP/IP stack;
-t hvt wires in MirageOS's own protocol implementations running on
a virtual network device. In FPP, each target is simply a different
topology.
Shared sub-topologies (TcpipStack, SocketStack, DnsStack) are
imported and reused across variants. The caller picks the target by
passing it to ofpp:
ofpp to-ml --topologies UnixHello mirage.fpp tutorial/hello/config.fpp
ofpp to-ml --topologies UnixDns --target unix mirage.fpp applications/dns/config.fpp
The --target flag selects the OS main loop: unix (default),
xen, solo5, or unikraft. The codegen emits the appropriate
entry point -- Unix_os.Main.run, Xen_os.Main.run,
Solo5_os.Main.run, or Unikraft_os.Main.run. Target switching
in ofpp is very experimental for now: only the Unix target has
been tested end-to-end.
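A sketch of what that selection amounts to inside a generator; the variant mirrors the --target values, but the function itself is illustrative, not ofpp's source:

```ocaml
(* Illustrative target-to-entry-point mapping; the strings are the
   module paths named in the post, the function is not ofpp's code. *)
type target = Unix | Xen | Solo5 | Unikraft

let entry_point = function
  | Unix -> "Unix_os.Main.run"
  | Xen -> "Xen_os.Main.run"
  | Solo5 -> "Solo5_os.Main.run"
  | Unikraft -> "Unikraft_os.Main.run"

let () = print_endline (entry_point Solo5)
(* prints: Solo5_os.Main.run *)
```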
Mixing C++ and OCaml components
ofpp already works for building MirageOS unikernels on the Unix
backend: describe the topology in FPP, run ofpp to-ml, compile.
The interesting next step is mixing C++ F Prime components with
OCaml MirageOS components in the same topology.
Three pieces are missing. First, binary serialisation: ofpp
already generates OCaml types from FPP enums, structs, and arrays,
but matching C++'s wire format for command, telemetry, and event
dispatch is not done yet. Second, FFI stubs: I have early
experiments with OCaml components running inside F Prime over
caml_callback, but they are not ready to open-source. Third,
build integration: I have a working (if ugly) CMake + dune
bridge, though a cleaner solution might come from
Package Managers à la Carte
(pre-print).
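To give a flavour of the serialisation work, here is a hedged sketch of encoding a two-field value big-endian, the byte order F Prime uses by default on the wire; the (U32, F32) layout is invented for illustration and is not a real F Prime packet format:

```ocaml
(* Sketch: encoding a (U32, F32) pair big-endian. The two-field
   layout is invented for illustration, not an F Prime format. *)
let encode ~opcode ~arg =
  let b = Bytes.create 8 in
  Bytes.set_int32_be b 0 opcode;                     (* U32, big-endian *)
  Bytes.set_int32_be b 4 (Int32.bits_of_float arg);  (* F32 bit pattern *)
  b

let hex b =
  String.concat " "
    (List.init (Bytes.length b)
       (fun i -> Printf.sprintf "%02x" (Char.code (Bytes.get b i))))

let () = print_endline (hex (encode ~opcode:0x1234l ~arg:1.5))
(* prints: 00 00 12 34 3f c0 00 00 *)
```

The real work is doing this systematically for every FPP enum, struct, and array so that a C++ component can decode what an OCaml component sent.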
The end goal: run OCaml components on isolated MirageOS platforms (Solo5, Muen, Unikraft, seL4) inside F Prime deployments, and let MirageOS unikernels integrate with existing C++ flight software.
State machines
FPP also has state machines. ofpp generates typed OCaml modules from them: states become a GADT, signals become a variant, and guards and actions become functor parameters. Here is a toy example:
FPP state machine
state machine Door {
  action lock
  guard locked
  signal open
  signal close

  initial enter Closed

  state Closed {
    on open if locked enter Closed
    on open enter Opened
  }

  state Opened {
    on close do { lock } enter Closed
  }
}
Generated OCaml
type closed
type opened

type _ state =
  | Closed : closed state
  | Opened : opened state

type any = Any : 'a state -> any

type signal = Open | Close

module type ACTIONS = sig
  type ctx
  val lock : ctx -> unit
end

module type GUARDS = sig
  type ctx
  val locked : ctx -> bool
end

module Make
    (A : ACTIONS)
    (G : GUARDS with type ctx = A.ctx) : sig
  type t
  val create : A.ctx -> t
  val state : t -> any
  val step : t -> signal -> t
end
Phantom types index the GADT, so pattern matching in step is exhaustive. The caller supplies ACTIONS and GUARDS as functor arguments. ofpp also renders state machines as Graphviz diagrams. In F Prime, components can embed state machine instance declarations that connect a state machine to the component's lifecycle; ofpp does not wire those up fully yet.
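To show how a caller uses this, here is a hand-written sketch pairing that signature with an illustrative implementation of step and a bool ref context; the bodies are mine, not ofpp's actual output:

```ocaml
(* Hand-written sketch of the Door machine; the types follow the
   generated signature, the step body is illustrative. *)
type closed
type opened

type _ state =
  | Closed : closed state
  | Opened : opened state

type any = Any : 'a state -> any

type signal = Open | Close

module type ACTIONS = sig
  type ctx
  val lock : ctx -> unit
end

module type GUARDS = sig
  type ctx
  val locked : ctx -> bool
end

module Make
    (A : ACTIONS)
    (G : GUARDS with type ctx = A.ctx) =
struct
  type t = { ctx : A.ctx; state : any }

  let create ctx = { ctx; state = Any Closed }
  let state t = t.state

  (* One branch per transition in the .fpp source; the catch-all
     keeps unhandled signals in place. *)
  let step t signal =
    match t.state, signal with
    | Any Closed, Open ->
        if G.locked t.ctx then t else { t with state = Any Opened }
    | Any Opened, Close ->
        A.lock t.ctx;
        { t with state = Any Closed }
    | _ -> t
end

(* The caller supplies guards and actions over its own context:
   here, a bool ref standing in for the lock. *)
module Door = Make
    (struct type ctx = bool ref let lock c = c := true end)
    (struct type ctx = bool ref let locked c = !c end)

let () =
  let door = Door.step (Door.create (ref false)) Open in
  match Door.state door with
  | Any Opened -> print_endline "opened"
  | Any Closed -> print_endline "closed"
(* prints: opened *)
```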
ofpp
The tool is called
ofpp. I started it
in February at the JPL workshop. Rob Bocchino
designed FPP
with a
formal specification
and an upstream test suite of 670 .fpp files across 35
categories, which I used to validate the parser. The parser covers
the complete FPP grammar (zero conflicts).
The whole thing is about 19k lines of OCaml: 1k for the parser and lexer, 8k for the checker, 2.7k for the code generator, and the rest for tests. ofpp also exports topology graphs to fprime-visual. The example repo covers all of mirage-skeleton: tutorials, device examples, and applications like DNS, DHCP, and Git. Every example has a cram test that builds the unikernel and exercises it.
ofpp is not affiliated with NASA or JPL. It is an independent experiment. If you work with FPP or MirageOS and want to try it:
# macOS
brew tap parsimoni-labs/tap && brew install ofpp
# from source
opam pin add fpp https://github.com/parsimoni-labs/ocaml-fpp.git
ofpp check path/to/file.fpp
ofpp to-ml path/to/file.fpp
ofpp dot path/to/file.fpp | dot -Tsvg -o sm.svg
If you have a MirageOS config.ml, try describing the same topology
in FPP and compare the generated main.ml with what Functoria
produces. File issues on
GitHub.
