Agreed. Though I wonder if IPv6 will ever displace IPv4 in things like virtual networks (Docker, VPNs, etc.), where there’s no need for a bigger address space.
Yes, because Docker becomes significantly more powerful once every container has a different publicly addressable IP.
IPv6 support in Docker is still lacking in some areas right now, though, so add that to the long list of IPv6 migration todos.
I hope so. I don’t want to manage two different address spaces in my head. I’d prefer it if one standard were just the standard.
I’m using IPv6 on Kubernetes and it’s amazing. Every Pod has its own global IP address. There is no NAT and no giant ARP table slowing down the other computers on my network. Each of my nodes announces a /112 for itself to my router, allowing it to give addresses to over 65k pods. There is no feasible limit to the number of IP addresses I could assign to my containers and load balancers, and no routing overhead. I have no need for port forwarding on my router or to worry about dynamic IPs, since I just have a /80 block with no firewall that I assign to my public-facing load balancers.
Of course, I only have around 300 pods on my cluster, and realistically it’s not possible to run over a million containers in a current Kubernetes cluster due to other limitations. But it is still a huge upgrade: less overhead, less complexity, and more headroom to scale.
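As a sanity check on the /112 arithmetic above, here’s a minimal Go sketch (the prefix is a made-up documentation address, not one from my cluster). A /112 leaves 128 − 112 = 16 host bits, i.e. 2^16 = 65,536 addresses per node:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Hypothetical per-node prefix; a real one is site-specific.
        p := netip.MustParsePrefix("2001:db8:0:1::/112")

        hostBits := 128 - p.Bits() // 128 - 112 = 16 host bits
        fmt.Printf("%v gives 2^%d = %d addresses per node\n",
            p, hostBits, 1<<hostBits)
        // Output: 2001:db8:0:1::/112 gives 2^16 = 65536 addresses per node
    }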
I wish everything would just default to a unix socket in /run, with only nginx doing the http and stream reverse proxying.
Wait, but if you have, for example, an HTTP API and you listen on a unix socket for incoming requests, you still pay the overhead of parsing HTTP headers. It is not much, but it also can’t be the recommended way to build network applications.
Replacing a TCP socket with a UNIX socket doesn’t affect the number of headers you have to parse.
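To illustrate: in Go, the only thing that changes between TCP and a unix socket is the Listen call; the header parsing inside http.Serve is identical either way. A minimal sketch (the socket path and handler are hypothetical):

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        // Bind to a unix socket instead of a TCP port.
        // For TCP you would use net.Listen("tcp", ":8080") instead.
        ln, err := net.Listen("unix", "/run/myapp.sock")
        if err != nil {
            panic(err)
        }
        defer ln.Close()

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over a unix socket")
        })

        // http.Serve reads and parses the same HTTP request headers
        // regardless of what kind of listener ln is.
        panic(http.Serve(ln, nil))
    }

You can poke at it with curl --unix-socket /run/myapp.sock http://localhost/.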