Engineers from the Chinese social media conglomerate ByteDance are taking early advantage of a recently released feature of the Linux kernel called netkit that provides a faster way for containers to communicate with each other across a cluster.
First released in Linux kernel 6.7 in December 2023, netkit is a kernel network device that can be programmed with eBPF (not to be confused with the now-discontinued Netkit tool that was used to create virtual networks on a single server). It has been touted as a way to streamline container networking.
Like the rest of the cloud native world, ByteDance uses Virtual Ethernet (veth) to network containers. A network interface that has been in Linux since 2008, veth provides a way to create private networks of containers while still allowing them to communicate with the outside world.
Issues with veth
Though veth would seem like a perfect fit for container networking, practitioners soon discovered it had a number of bottlenecks that slowed communication between containers. Veth requires each packet to traverse two network stacks, the sender's and the recipient's, even when the two communicating containers are on the same server, which is often the case.
Virtual Ethernet also requires packets to be routed up through Layer 2 to resolve destination addresses via the Address Resolution Protocol (ARP).
Because it is embedded in the kernel, netkit can intercept these packets before they hit the full network stack and route them internally if the destination is on the same host. In other words, everything is sorted out at Layer 3, the network layer itself, without being pushed up to Layer 2.

In its own tests, ByteDance found this improved performance by 10% once these “soft bottlenecks were removed,” according to a case study issued Monday by the eBPF Foundation. Plus, eliminating veth reduces CPU usage, the company found.
This was great news for such a performance-minded company.
ByteDance Investigates
Founded in 2012, ByteDance runs a number of large-scale platforms, all of which depend on fast performance.
There is the massively popular mobile short video service TikTok (nearly 2 billion users), along with its Chinese counterpart Douyin (抖音), operated as an independent entity just for China. There is also Toutiao (今日头条), a news and personal content aggregator app, and the long-form video platform Xigua Video (西瓜视频), as well as a gaming division, a hosted recommendation service (BytePlus Recommend), an enterprise collaboration suite (Lark) and the popular CapCut video editing tool and hosted service.
Moving to netkit would make sense. Isovalent, whose engineers worked closely on netkit, was the first to support the technology, in version 1.16 of its Cilium container networking platform. The problem for ByteDance was that it only worked on servers running Linux kernel 6.7 or later.
Due to “operational constraints,” ByteDance was still on Linux kernel 5.15 and would not make it to version 6.7 for another few years. So the company backported netkit (along with the tcx traffic control extension) to this earlier kernel. It also needed to update the Container Network Interface (CNI) plugin it was using.
Smooth Rollout
Now the company is in the process of rolling out the updated kernels, and the updated CNI as well. Thus far, netkit has been deployed in “dozens of clusters” with minimal issues.
“We haven’t seen any accidents reported to netkit,” said ByteDance senior engineer Chen Tang in a talk at the eBPF Summit last year. “I’m confident to say netkit is trustable.”
As with any production system, the trick with the upgrade is to not incur any downtime. So veth will also be kept in place for the time being.
New containers coming online would use netkit, while existing ones would continue to use veth. The engineers worked out how the two device types could communicate, and how veth would serve as a backup should netkit fail.
There is no need for additional setup for netkit. Once netkit is in the kernel, a netkit device can be created through netlink, the kernel's socket-based interface for configuring networking. ByteDance added this support to both the golang netlink package and iproute2.
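As a rough illustration, creating a netkit pair programmatically might look like the following sketch, which uses the third-party vishvananda/netlink Go package. The device names are hypothetical, the type and constant names assume a recent version of that package with netkit support, and running it requires root privileges on a netkit-capable kernel:

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Define the host-side netkit device in L3 mode (names are
	// illustrative; NETKIT_MODE_L3 assumes recent package support).
	nk := &netlink.Netkit{
		LinkAttrs: netlink.LinkAttrs{Name: "nk-host"},
		Mode:      netlink.NETKIT_MODE_L3,
	}

	// Describe the peer (container-side) device of the pair; this end
	// would typically be moved into the container's network namespace.
	nk.SetPeerAttrs(&netlink.LinkAttrs{Name: "nk-cont"})

	// A single netlink request creates both ends of the pair at once,
	// much like creating a veth pair.
	if err := netlink.LinkAdd(nk); err != nil {
		log.Fatalf("creating netkit pair: %v", err)
	}
}
```

The equivalent with iproute2 is a single `ip link add ... type netkit` invocation, mirroring the familiar veth workflow.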
There isn’t much difference between setting up a veth pair and a netkit pair, but to achieve the best performance, the eBPF program should be attached to the container side of the netkit pair.
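Attaching a program to the container (peer) side of the pair might look like this sketch, built on the cilium/ebpf library. The object file path, program name and device name are hypothetical placeholders, and it assumes a library version with netkit link support plus root privileges on a netkit-capable kernel:

```go
package main

import (
	"log"
	"net"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load a pre-compiled eBPF object file (path and program name are
	// placeholders for this sketch).
	coll, err := ebpf.LoadCollection("netkit_bpf.o")
	if err != nil {
		log.Fatalf("loading collection: %v", err)
	}
	defer coll.Close()
	prog := coll.Programs["nk_peer_prog"]

	// Resolve the container-side device of the netkit pair.
	iface, err := net.InterfaceByName("nk-cont")
	if err != nil {
		log.Fatalf("looking up interface: %v", err)
	}

	// Attach to the peer (container) side, where it performs best.
	l, err := link.AttachNetkit(link.NetkitOptions{
		Program:   prog,
		Attach:    ebpf.AttachNetkitPeer,
		Interface: iface.Index,
	})
	if err != nil {
		log.Fatalf("attaching to netkit peer: %v", err)
	}
	defer l.Close()
}
```

Using a BPF link this way (rather than a tc filter) is what the netkit hook expects; the primary side of the pair would use the corresponding primary attach type instead.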
The company also developed an agent to manage the eBPF programs on each host, handling tasks such as translating ACL rules into BPF map entries and dumping the information needed for kernel debugging.
Built on eBPF, netkit offers a swifter alternative to Virtual Ethernet for container networking, ByteDance engineers have concluded.

The post Netkit to Network a Million Containers for ByteDance appeared first on The New Stack.