I recently noticed a weird problem with the behavior of the getaddrinfo library function on my Linux and OS X boxes when the AI_PASSIVE flag is set. More specifically, I couldn’t get getaddrinfo to automate the setup of a passive IPv4- and IPv6-enabled socket on hosts with IPv6 support.
Let me provide some context, first.
Writing portable IPv6-enabled applications: client-side …
I usually write portable networking code that leverages getaddrinfo to choose the proper behavior with regard to IPv6 and IPv4 connectivity, according to the networking functions provided by the current system. On the client side, I call getaddrinfo with ai_family set to AF_UNSPEC, and cycle through all the returned addrinfo structures trying to find an endpoint I can actually connect to. This AF-independent coding style automatically uses IPv6 (AF_INET6) and IPv4 (AF_INET) sockets according to the system capabilities and configuration, which means that our application will work in any possible networking situation: on IPv4-only hosts where IPv6 support might be disabled or outright missing, on IPv6-enabled hosts without actual IPv6 connectivity (albeit usually with a slower startup time), and on hosts with both IPv6 support and connectivity. Here is a code example:
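The original listing did not survive extraction; what follows is a minimal sketch of the client-side pattern described above, not the original code. The function name `tcp_connect` is a placeholder.

```c
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Resolve host:port with AF_UNSPEC and try every returned address
 * until one connect() succeeds.  Returns a connected fd, or -1. */
int tcp_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6, whatever works */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                     /* success */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```

Because the loop simply moves on to the next addrinfo entry on failure, the same binary works unmodified on IPv4-only, IPv6-only, and dual-stack hosts.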
… and server-side
I also use getaddrinfo in server-side code, with the AI_PASSIVE flag turned on, to prepare the local address for the bind system call. However, in the server-side case I usually don’t adopt a fully AF-independent approach, which would force me to create separate sockets for handling IPv4 and IPv6 traffic, as, e.g., OpenSSH does. Instead, I prefer the opposite approach, based on a single passive socket that deals with all incoming connection requests. On IPv6-enabled systems I need an AF_INET6 socket with the IPV6_V6ONLY option turned off (the default behavior on most OSes), and on IPv4-only systems I need an AF_INET socket. I typically use getaddrinfo to prepare the corresponding socket address for bind in a portable way. My code is then:
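Here too the original listing was lost; this is a sketch of the single-socket server setup described above, with a placeholder function name `tcp_listen`. Note that it trusts the first addrinfo entry returned, which is exactly where the ordering problem discussed below bites.

```c
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Build a single passive socket on the wildcard address, using the
 * FIRST entry returned by getaddrinfo.  Returns a listening fd, or -1. */
int tcp_listen(const char *port)
{
    struct addrinfo hints, *res;
    int fd, off = 0;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;       /* wildcard address for bind() */

    if (getaddrinfo(NULL, port, &hints, &res) != 0)
        return -1;

    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0) {
        if (res->ai_family == AF_INET6)
            /* Make sure the IPv6 socket also accepts IPv4 traffic
             * via IPv4-mapped addresses. */
            setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
        if (bind(fd, res->ai_addr, res->ai_addrlen) != 0 ||
            listen(fd, SOMAXCONN) != 0) {
            close(fd);
            fd = -1;
        }
    }
    freeaddrinfo(res);
    return fd;
}
```

On a host where getaddrinfo returns :: first, this yields one AF_INET6 socket serving both address families; if 0.0.0.0 comes first instead, the server silently becomes IPv4-only.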
Ok, but what is the issue?
The problem is that the server-side code above won’t work properly on IPv6-enabled Linux and OS X hosts with the default configuration for /etc/gai.conf. In those conditions, when getaddrinfo is called with the AI_PASSIVE flag turned on, it returns the IPv4 0.0.0.0 address first and the IPv6 :: address second. This completely breaks the code above: since the first returned address is the one that gets bound, the application ends up listening only on IPv4. Instead, we would like getaddrinfo to return the IPv6 :: address first on IPv6-enabled hosts.
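You can check the ordering your own system produces with a small diagnostic along these lines (the port number is arbitrary; the output depends on your /etc/gai.conf and IPv6 support, so none is shown here):

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Print the address families of a passive wildcard lookup,
 * in the exact order getaddrinfo() returns them. */
int main(void)
{
    struct addrinfo hints, *res, *ai;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;

    if (getaddrinfo(NULL, "8080", &hints, &res) != 0)
        return 1;
    for (ai = res; ai != NULL; ai = ai->ai_next)
        printf("%s\n", ai->ai_family == AF_INET6 ? "AF_INET6" :
                       ai->ai_family == AF_INET  ? "AF_INET"  : "other");
    freeaddrinfo(res);
    return 0;
}
```

If the first line printed is AF_INET on a dual-stack host, you are seeing the misbehavior described in this post.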
How can I fix it?
To enable a more sensible behavior for getaddrinfo, you need to change the label table rule stanza in your /etc/gai.conf in the following way:
# label table rules
label ::1/128           0
label ::/0              1
label 2002::/16         2
label ::/96             3
label ::ffff:0:0/96     4
label fec0::/10         5
label fc00::/7          6
label 2001:0::/32       7
label ::ffff:7f00:1/128 8   # add this line and uncomment the ones above
You basically have to add the last line and uncomment the previous ones, which represent the default configuration. As you can see, the change is minimal and has a very low impact. I am not a security expert, but I don’t see anything wrong or potentially harmful about it.
Apparently, this is a well-known issue. I do not know whether it was investigated in a follow-up to RFC 3484 and RFC 6724; if so, I would be very interested in it. But, most importantly, I am totally clueless about why modern OSes such as Linux and OS X ship with what I believe is a completely botched /etc/gai.conf configuration (which apparently is not even aligned with RFC 6724).
Mind you, for this to work you also need to make sure that the IPV6_V6ONLY socket option (at level IPPROTO_IPV6) is not set on the listening socket. Fortunately, both Linux and OS X are usually configured that way.
What about RFC 4038?
Yes, I am well aware that my server-side coding approach is exactly what RFC 4038, section 6.2.1, recommends NOT doing. But I find the one-socket-per-AF server-side approach blessed by RFC 4038 way too complex for either simple applications or quick prototypes. First (and most importantly), because I firmly believe it is an excessively difficult scheme to teach in a computer networks course for students at their first encounter with the BSD socket API (even if brilliant ones, such as the computer science engineering students following my class at the University of Ferrara). In addition, most OSes have a joint IPv4/IPv6 implementation with mature and robust backward compatibility support based on IPv4-mapped IPv6 addresses (the only exceptions that I know of are OpenBSD and Microsoft Windows, neither of which I care much about), perfectly capable of running applications developed according to my coding style.
Please, don’t get me wrong here. I have the deepest respect for the IETF and I am not saying that RFC 4038 is wrong. I am just saying that in several cases the server-side approach it advocates is overkill.