
Why Legacy Architectures Are The Antithesis of Hardware/Software Co-Design

Quinn Jacobson, SiPanda Hardware Architect, Friday, July 9, 2021

One of the most powerful concepts in both hardware and software engineering is reuse – taking previously built and tested hardware and/or software components and using them as part of a solution to a new problem. No doubt, reuse can be an effective component of the co-design process – it reduces time to market, as well as execution and technology risk. But what about reusing an entire architectural design, with new technology “under the hood” – is that approach compatible with hardware/software co-design? For instance, can I take the design for a 10GbE network interface solution, put it on hardware that is 10X faster, and use it as a 100GbE network interface solution? And if I can, will I be able to exploit hardware/software co-design?


This is not an abstract question – it is the “legacy architecture” problem. For products that are part of a complex ecosystem, legacy architectures have some significant benefits. Legacy architectures generally have much faster time to market than brand-new designs – the technical risk is much more contained, and the amount of work required is almost always less. Also, code written by end-customers or ecosystem partners for the legacy architecture can, at least in theory, be used on the new product with no more than some adaptation. There are many great examples of revamping legacy architectures to keep up with growing performance demands – one of the best is Bob Colwell figuring out how to translate x86 instructions into micro-ops so they could run in a modern CPU pipeline. The problem with legacy architectures, however, is “the problem” – what if the problem a solution is addressing has significantly or fundamentally changed?


Like the earlier question, this one is not abstract or rhetorical either. Think about the changes in network I/O in just the past 15 years. In 2006, TCP offloads were state of the art. Five years ago, offloads for Network Virtualization using Generic Routing Encapsulation (NVGRE) and Open vSwitch (OVS) were state of the art. In both cases, the capabilities that were not supported directly in hardware (the “fast path”) were generally offloaded to onboard processors, or to the server’s CPU (the “slow path”). While this approach can be effective at 10Gbps network speeds, it falls apart at network speeds of 100Gbps and greater, where worst-case packets can arrive faster than the NIC’s processor can handle them by a factor of 100X or greater. That is why legacy architectures are problematic: they are often difficult to modify to address new problems that weren’t contemplated when the legacy architecture was defined, which makes hardware/software co-design all but impossible. Next week we will look at the larger issues that occur when hardware/software co-design is not utilized for performance-critical systems. But first, a quick back-of-the-envelope calculation shows why the slow path collapses at 100Gbps.
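The sketch below works out the per-packet cycle budget at several line rates. The assumptions are illustrative, not figures from SiPanda: worst-case streams of minimum-size 64-byte Ethernet frames (which occupy 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted), and a hypothetical 1 GHz embedded processor on the NIC.

```python
# Back-of-the-envelope cycle budget for a NIC "slow path" processor.
# Illustrative assumptions: worst-case minimum-size 64-byte Ethernet
# frames, which take 84 bytes on the wire (frame + 8-byte preamble +
# 12-byte inter-frame gap), and a hypothetical 1 GHz NIC processor.

WIRE_BYTES_PER_PACKET = 64 + 8 + 12   # minimum frame + preamble + IFG
CPU_HZ = 1_000_000_000                # assumed 1 GHz NIC processor clock

for gbps in (10, 100, 400):
    bits_per_second = gbps * 1_000_000_000
    packets_per_second = bits_per_second / (WIRE_BYTES_PER_PACKET * 8)
    cycles_per_packet = CPU_HZ / packets_per_second
    print(f"{gbps:>3} GbE: {packets_per_second / 1e6:6.1f} Mpps "
          f"-> {cycles_per_packet:5.1f} cycles per packet")
```

Under these assumptions, at 10GbE the slow path has roughly 67 cycles to spend on each worst-case packet – tight, but workable for exception handling. At 100GbE it has fewer than 7, which is not even enough for a single DRAM access, let alone parsing, table lookups, and forwarding decisions; at 400GbE the budget drops below 2 cycles.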


SiPanda was created to rethink the network datapath and bring both flexibility and wire-speed performance at scale to networking infrastructure. The SiPanda architecture enables data center infrastructure operators and application architects to build solutions for everything from cloud service providers to edge compute (5G) that don’t require the compromises inherent in today’s network solutions. For more information, please visit www.sipanda.io. If you want to find out more about PANDA, you can email us at panda@sipanda.io.

