# Inside Network Namespaces: How Containers Actually Isolate Traffic
`docker run` looks like magic. Pull back the curtain and it's mostly `ip netns add` plus a virtual ethernet pair. Let's build it from scratch.
A network namespace is a kernel feature: a separate copy of the network stack — interfaces, routes, iptables rules, sockets. Containers use them to be invisible to each other and the host.
## Build a container's network with five commands
```shell
# 1. create a new namespace
ip netns add nsdemo

# 2. create a virtual ethernet (veth) pair -- two interfaces joined like a cable
ip link add veth-host type veth peer name veth-ns

# 3. move one end into the namespace
ip link set veth-ns netns nsdemo

# 4. assign IPs and bring both ends up (plus loopback inside the namespace)
ip addr add 10.99.0.1/24 dev veth-host
ip link set veth-host up
ip -n nsdemo addr add 10.99.0.2/24 dev veth-ns
ip -n nsdemo link set veth-ns up
ip -n nsdemo link set lo up

# 5. test connectivity from inside the namespace
ip netns exec nsdemo ping -c 3 10.99.0.1
```
You just built the network half of a container. Run a process inside it with `ip netns exec nsdemo ...` and you have everything Docker gives you, minus the filesystem, cgroup, and IPC isolation.
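One quick way to see the isolation, building on the `nsdemo` setup above (this sketch assumes `python3` is available on the host; the port and addresses are just the ones used earlier):

```shell
# start a web server inside the namespace
ip netns exec nsdemo python3 -m http.server 8080 &
SERVER_PID=$!
sleep 1

# reachable from the host via the veth pair...
curl -s http://10.99.0.2:8080/ >/dev/null && echo "reachable via veth"

# ...but not on the host's own loopback: the socket lives inside nsdemo
curl -s --max-time 2 http://127.0.0.1:8080/ >/dev/null || echo "not on host loopback"

kill $SERVER_PID
```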
## How Docker actually does it
Docker creates a Linux bridge (`docker0` by default), gives each container its own namespace plus a veth pair, attaches the host end of the pair to the bridge, and uses iptables NAT (a MASQUERADE rule) for outbound traffic. That's it. No magic.
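The same pattern can be sketched by hand on top of the earlier setup. The bridge name `br-demo` is made up, and the commands assume root plus the usual iproute2/iptables tools:

```shell
# create a bridge and move the subnet's gateway address onto it
ip link add br-demo type bridge
ip addr del 10.99.0.1/24 dev veth-host   # the address now belongs on the bridge
ip addr add 10.99.0.1/24 dev br-demo
ip link set br-demo up

# attach the host end of the veth pair to the bridge
ip link set veth-host master br-demo

# give the namespace a default route via the bridge
ip -n nsdemo route add default via 10.99.0.1

# forward and NAT outbound traffic, roughly what Docker's MASQUERADE rule does
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.99.0.0/24 ! -o br-demo -j MASQUERADE
```

With more namespaces, each extra veth pair just gets its host end attached to the same bridge, which is exactly how multiple containers share `docker0`.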
## Why this matters
When container networking breaks, you debug it as a regular Linux network problem. `ip a`, `ip route`, `iptables -t nat -L`, `tcpdump -i veth...` — all the usual tools work.
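A rough debugging checklist for the `nsdemo` setup above (interface names are the ones from this article; substitute your own):

```shell
# inside the namespace: does the container end have the address and routes you expect?
ip -n nsdemo addr show
ip -n nsdemo route

# on the host: is the host end of the veth pair actually up?
ip link show veth-host

# are packets being NATed? watch the rule's packet counters
iptables -t nat -L POSTROUTING -n -v

# watch traffic crossing the veth pair directly
tcpdump -i veth-host icmp
```

Each command narrows the problem to one layer: addressing, routing, NAT, or the link itself.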