Tom Herbert, SiPanda CTO, September 16, 2024
As we move towards Domain Specific Architectures, specialized computing environments, and hardware-software codesign, one thing is certain: compilers are super critical in this new world. In fact, I’ve been evangelizing this for a while: if you want to work on networking tomorrow, learn compilers today! (And tomorrow is here!)
So, compilers are key. They are tools that can take the user's code, a human-intuitive expression of intended behavior, and convert that expression into a highly optimized target binary. In networking, the rise of the compiler is fueled by the emergence of programmable datapaths. This includes software datapaths, like eBPF, where compilers take restricted C as input and output eBPF bytecode; as well as hardware datapaths, like P4, where the input is a program in the P4 language and the output is a binary that runs in a hardware engine.
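To make the software side concrete, here is about the smallest example of the eBPF flavor: a trivial XDP program in restricted C (illustrative only; it just passes every packet up the stack):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* About the smallest restricted-C program an eBPF compiler will
     * accept; clang emits eBPF bytecode for it with something like:
     *   clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o
     */
    SEC("xdp")
    int xdp_pass_all(struct xdp_md *ctx)
    {
            /* Hand every packet up the stack unchanged */
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";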
Compiler model
The canonical compiler model is straightforward. A developer writes their program in some programming language, a front end compiler converts the program into a generic Intermediate Representation, and a backend compiler converts the Intermediate Representation into a target executable binary. The use of an Intermediate Representation, or IR, decouples the developer’s program from the target so that the developer doesn’t have to be concerned with the particulars or nuances of the backend targets or hardware. This decoupling also facilitates mixing and matching different programming languages with different backend targets. For instance, the same program could be written in C, Python, P4, or Java and compiled to a common IR; that IR could then be compiled to a variety of targets such as x86, ARM, RISC-V, or domain specific processors.
Compilers for the networking datapath
A programmable network datapath has some interesting attributes that a compiler will want to take into account. The network datapath is sometimes modeled as a parse-match-action pipeline, and parsing is a particularly good candidate for compiler optimizations due to its unique characteristics. In particular, parsers are better served by a declarative representation than by an imperative one. This means it’s better to program a parser by specifying it as an annotated parse graph or Finite State Machine instead of as a bunch of if-then-else statements (take a look at Replacing flow dissector with the PANDA Parser). We can enhance the programming model and compilers to fully support parsers in a declarative representation.
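To see why, consider the imperative alternative. In a hand-rolled parser (an illustrative sketch, not production code), the parse graph is buried in control flow, where a compiler has a hard time recovering and optimizing it:

    #include <stddef.h>

    /* Imperative parsing: the parse graph is implicit in nested
     * if-then-else statements (illustrative sketch only)
     */
    static void parse_packet(const unsigned char *pkt, size_t len)
    {
            if (len < 14)
                    return;
            unsigned int ethertype = (pkt[12] << 8) | pkt[13];

            if (ethertype == 0x0800) {            /* IPv4 */
                    unsigned int ihl = (pkt[14] & 0x0f) * 4;
                    if (len < 14 + ihl)
                            return;
                    unsigned int proto = pkt[14 + 9];
                    if (proto == 6) {
                            /* ... TCP processing ... */
                    } else if (proto == 17) {
                            /* ... UDP processing ... */
                    }
            } else if (ethertype == 0x86DD) {     /* IPv6 */
                    /* ... and one more branch per protocol ... */
            }
    }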
So for a networking datapath program, we may have a mix of declarative and imperative code. That is, the programmer writes the parser logic in a data structure that looks like a parse graph and then annotates the nodes with backend functions in imperative code. This is quite intuitive since it’s how a human would conceptualize parsing and processing packets in a datapath. As shown below, we can add support for the parser program and its IR in the compilation model for a network datapath program.
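In C, such a mix might look like the following sketch (the struct layout and field names are illustrative assumptions, not the actual PANDA-C API): the parse graph itself is plain data, and the handler pointer is the imperative annotation.

    #include <stddef.h>

    struct proto_table;   /* forward declaration: maps key -> node */

    /* One node of the parse graph, described declaratively and
     * annotated with an imperative handler (illustrative sketch)
     */
    struct parse_node {
            const char *name;
            unsigned int min_hdr_length;
            /* Where to find the next-protocol field (non-leaf nodes) */
            int next_proto_off;
            int next_proto_len;
            const struct proto_table *table;   /* NULL for leaf nodes */
            /* Imperative annotation: user-written backend function */
            void (*handler)(const void *hdr, size_t hdr_len,
                            void *metadata);
    };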
Common Parser Representation
Before we talk about the front end programming language or how a backend compiler emits an optimized target executable, let’s take a closer look at the Intermediate Representation.
SiPanda has developed a unique IR for parsers known as the Common Parser Representation. Internally, our LLVM-based compiler uses a parser-specific dialect of MLIR to represent a parser in LLVM-compatible code, but the parser can also be represented in a JSON schema. Since the JSON format is more user-friendly, we will use it in our examples.
Let’s consider a simple datapath that parses IP in Ethernet and processes TCP and UDP packets. Conceptually, we can diagram this datapath as an annotated parse graph like so:
The corresponding Common Parser Representation in JSON might look like this:
"parsers": [ { "name": "my_parser", "root-node": "ether_node" } ],
"parse-nodes": [
{
"name": "ether_node", "min-hdr-length": 14,
"next-proto": {"field-off": 12, "table": "ether_table", "field-len": 2 }
}, {
"name": "ipv4_node", "min-hdr-length": 20,
"next-proto": { "field-off": 9, "table": "ip_table", "field-len": 1 },
"hdr-length": {"field-off": 0, "mask": "0xf", "field-len": 1,"multiplier": 4 },
}, {
"name": "ipv6_node", "min-hdr-length": 40,
"next-proto": { "field-off": 6, "table": "ip_table", "field-len": 1 },
"hdr-length": {"field-off": 0, "mask": "0xf", "field-len": 1,"multiplier": 4 },
}{
"name": "tcp_node", "min-hdr-length": 20,
"hdr-length": {"field-off": 12, "mask": "0xf0", "field-len": 1,"multiplier": 4 },
“handler”: “process_tcp”
} , {
"name": "udp_node", "min-hdr-length": 8, “handler”: “process_udp”
} ],
"proto-tables": [
{ "name": "ether_table",
"ents": [
{ "key": "0x800", "node": "ipv4_node" },
{ "key": "0x86DD", "node": "ipv6_node" }
],
}, { "name": "ip_table",
"ents": [
{ "key": "6", "node": "tcp_node" },
{ "key": "17", "node": "udp_node" },
]
}
]
What’s happening here?
In the "parsers" section, one parser named my_parser is defined with a root node of ether_node.
In the "parse-nodes" section, five nodes are specified. The root node, ether_node, is a fixed fourteen byte header as described by the min-hdr-length attribute. Ethernet is a non-leaf protocol, so the definition includes a next-proto attribute that describes how to extract the next protocol field and specifies the table to perform a protocol lookup. For Ethernet, the next protocol field is two bytes at offset twelve, and in this example the protocol lookup table for Ethernet is ether_table.
ipv4_node and ipv6_node are the nodes for parsing IPv4 and IPv6. IPv4 is a variable length header, and the hdr-length attribute provides the function for computing the length; for IPv4 this is the value of the low-order nibble of the first byte multiplied by four. IPv6 is a fixed forty-byte header, so it needs no hdr-length attribute. Both IPv4 and IPv6 are non-leaf protocols, so they include a next-proto attribute; the next protocol comes from an IP protocol number field, and the table to look up the next node is ip_table.
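For instance, the IPv4 hdr-length attribute (field-off 0, mask 0xf, multiplier 4) boils down to the familiar IHL computation (sketch):

    /* (first header byte & 0xf) * 4: IHL is in 32-bit words */
    static unsigned int ipv4_hdr_len(const unsigned char *hdr)
    {
            return (hdr[0] & 0x0f) * 4;   /* e.g. IHL of 5 -> 20 bytes */
    }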
tcp_node and udp_node are examples of transport layer protocol nodes. These are leaf protocols, so they don’t have a next-proto attribute. TCP is variable length, so a hdr-length attribute is defined. In this example, handlers are specified for TCP and UDP (process_tcp and process_udp respectively). A handler is a user-written function that does backend processing for a node; typically this would be coded in an imperative representation, like plain C code.
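For example, a process_tcp handler could be as simple as the following (the signature is hypothetical, assumed for illustration; it is not the actual PANDA-C prototype):

    #include <arpa/inet.h>   /* ntohs */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct flow_meta { uint16_t src_port, dst_port; };

    /* Hypothetical handler: called once tcp_node matches, with hdr
     * pointing at the TCP header (signature assumed for illustration)
     */
    void process_tcp(const void *hdr, size_t hdr_len, void *metadata)
    {
            const unsigned char *tcp = hdr;
            struct flow_meta *meta = metadata;
            uint16_t sport, dport;

            (void)hdr_len;
            /* Source and destination ports are the first two 16-bit
             * fields of the TCP header */
            memcpy(&sport, tcp, 2);
            memcpy(&dport, tcp + 2, 2);
            meta->src_port = ntohs(sport);
            meta->dst_port = ntohs(dport);
    }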
In the "proto-tables" section, the ether_table and ip_table protocol lookup tables are defined. A protocol lookup table is simply a key value and a target node. Protocol tables are amenable to a CAM implementation as we talked about in the Parser Instructions blog.
The PANDA programming and compiler model
SiPanda has applied all the techniques we’ve talked about to create a programming and compiler ecosystem specific to the networking datapath. The goals of this endeavour are simple:
Provide a generic programming framework that can be used with any programming language
Have a powerful IR that includes native representations of parsers or other constructs that don't readily fit an imperative language model
Support backend compilers for a wide array of targets, where for any target the compiler produces an executable that runs as well as possible given the capabilities of the target
The implementation of our solution is based on the LLVM compiler infrastructure. To integrate the parser into the IR, we utilize a parser-specific dialect of MLIR that accommodates the Common Parser Representation. The front end compiler converts a user program into the IR; we have implemented a C API and libraries, called PANDA-C, that make network datapaths easy to program. At the backend, we can compile to RISC-V with parser instructions (SiPanda Parser Instructions). As the ecosystem picture below suggests, we can support P4, Python, Lua, Rust, and even a graphical IDE for programming the datapath; similarly, we can support a variety of backend software and hardware targets.
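To give a flavor of the programming model (using the illustrative sketch structs from above rather than the actual PANDA-C API), the example datapath could be written as plain C data, with the parse graph declared bottom-up and the handlers plugged in as annotations:

    #include <stddef.h>

    /* The CPR example as C data (illustrative; ipv6_node and the
     * variable-length attributes are omitted for brevity)
     */
    void process_tcp(const void *hdr, size_t hdr_len, void *metadata);
    void process_udp(const void *hdr, size_t hdr_len, void *metadata);

    static const struct parse_node tcp_node = {
            .name = "tcp_node", .min_hdr_length = 20,
            .next_proto_off = -1, .handler = process_tcp,
    };
    static const struct parse_node udp_node = {
            .name = "udp_node", .min_hdr_length = 8,
            .next_proto_off = -1, .handler = process_udp,
    };

    static const struct proto_table_entry ip_ents[] = {
            { 6, &tcp_node },       /* IPPROTO_TCP */
            { 17, &udp_node },      /* IPPROTO_UDP */
    };
    static const struct proto_table ip_table = { ip_ents, 2 };

    static const struct parse_node ipv4_node = {
            .name = "ipv4_node", .min_hdr_length = 20,
            .next_proto_off = 9, .next_proto_len = 1,
            .table = &ip_table,
    };

    static const struct proto_table_entry ether_ents[] = {
            { 0x0800, &ipv4_node },
    };
    static const struct proto_table ether_table = { ether_ents, 1 };

    static const struct parse_node ether_node = {
            .name = "ether_node", .min_hdr_length = 14,
            .next_proto_off = 12, .next_proto_len = 2,
            .table = &ether_table,
    };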
SiPanda
SiPanda was created to rethink the network datapath and bring both flexibility and wire-speed performance at scale to networking infrastructure. The SiPanda architecture enables data center infrastructure operators and application architects to build solutions, from cloud service providers to edge compute (5G), that don’t require the compromises inherent in today’s network solutions. For more information, please visit www.sipanda.io. If you want to find out more about PANDA, you can email us at panda@sipanda.io. IP described here is covered by patent USPTO 12,026,546 and other patents pending.