<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>ndppd — LowEndSpirit DEV</title>
        <link>https://dev.lowendspirit.com/index.php?p=/</link>
        <pubDate>Wed, 08 Apr 2026 20:23:33 +0000</pubDate>
        <language>en</language>
            <description>ndppd — LowEndSpirit DEV</description>
    <atom:link href="https://dev.lowendspirit.com/index.php?p=/discussions/tagged/ndppd/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>IPv6 Neighbor Discovery Responder for KVM VPS</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/2815/ipv6-neighbor-discovery-responder-for-kvm-vps</link>
        <pubDate>Wed, 21 Apr 2021 04:38:33 +0000</pubDate>
        <category>Technical</category>
        <dc:creator>yoursunny</dc:creator>
        <guid isPermaLink="false">2815@/index.php?p=/discussions</guid>
        <description><![CDATA[<blockquote><div>
  <p>This article is originally published on yoursunny.com blog <a href="https://yoursunny.com/t/2021/ndpresponder/" rel="nofollow">https://yoursunny.com/t/2021/ndpresponder/</a></p>
</div></blockquote>

<h2 data-id="i-want-ipv6-for-docker">I Want IPv6 for Docker</h2>

<p>I'm playing with Docker these days, and I want IPv6 in my Docker containers.<br />
The best guide for enabling IPv6 in Docker is <a rel="nofollow" href="https://medium.com/@skleeschulte/how-to-enable-ipv6-for-docker-containers-on-ubuntu-18-04-c68394a219a2">how to enable IPv6 for Docker containers on Ubuntu 18.04</a>.<br />
The first method in that article assigns private IPv6 addresses to containers, and uses <a rel="nofollow" href="https://github.com/robbertkl/docker-ipv6nat">IPv6 NAT</a> similar to how Docker handles IPv4 NAT.<br />
I quickly got it working, but I noticed an undesirable behavior: Network Address Translation (NAT) changes the source port number of outgoing UDP datagrams, even if there's a port forwarding rule for inbound traffic; consequently, a UDP flow with the same source and destination ports is recognized as two separate flows.</p>

<pre spellcheck="false" tabindex="0">$ docker exec nfd nfdc face show 262
    faceid=262
    remote=udp6://[2001:db8:f440:2:eb26:f0a9:4dc3:1]:6363
     local=udp6://[fd00:2001:db8:4d55:0:242:ac11:4]:6363
congestion={base-marking-interval=100ms default-threshold=65536B}
       mtu=1337
  counters={in={25i 4603d 2n 1179907B} out={11921i 14d 0n 1506905B}}
     flags={non-local permanent point-to-point congestion-marking}
$ docker exec nfd nfdc face show 270
    faceid=270
    remote=udp6://[2001:db8:f440:2:eb26:f0a9:4dc3:1]:1024
     local=udp6://[fd00:2001:db8:4d55:0:242:ac11:4]:6363
   expires=0s
congestion={base-marking-interval=100ms default-threshold=65536B}
       mtu=1337
  counters={in={11880i 0d 0n 1498032B} out={0i 4594d 0n 1175786B}}
     flags={non-local on-demand point-to-point congestion-marking}
</pre>

<p>The second method in that article allows every container to have a public IPv6 address.<br />
It avoids NAT and the problems that come with it, but requires the host to have a <em>routed</em> IPv6 subnet.<br />
However, <em>routed</em> IPv6 is hard to come by on KVM servers, because a virtualization platform such as <a rel="nofollow" href="https://www.lowendtalk.com/discussion/170194/how-many-ipv6-per-client/p2">Virtualizor does not support routed IPv6 subnets</a> and can only provide on-link IPv6.</p>

<h2 data-id="on-link-ipv6-vs-routed-ipv6">On-Link IPv6 vs Routed IPv6</h2>

<p>So what's the difference between on-link IPv6 and routed IPv6, anyway?<br />
The difference lies in how the router at the previous hop is configured to reach a destination IP address.</p>

<p>Let me explain in IPv4 terms first:</p>

<pre spellcheck="false" tabindex="0">|--------| 192.0.2.1/24       |--------| 198.51.100.1/24    |-----------|
| router |--------------------| server |--------------------| container |
|--------|       192.0.2.2/24 |--------|    198.51.100.2/24 |-----------|
            (192.0.2.16-23/24)    |
                                  | 192.0.2.17/28           |-----------|
                                  \-------------------------| container |
                                              192.0.2.18/28 |-----------|
</pre>

<ul><li><p>The server has on-link IP address 192.0.2.2.</p>

<ul><li>The router knows this IP address is on-link because it is in the 192.0.2.0/24 subnet that is configured on the router interface.</li>
<li>To deliver a packet to 192.0.2.2, the router sends an ARP query for 192.0.2.2 to learn the server's MAC address, which the server should answer.</li>
</ul></li>
<li><p>The server has routed IP subnet 198.51.100.0/24.</p>

<ul><li>The router must be configured to know: 198.51.100.0/24 is reachable via 192.0.2.2.</li>
<li>To deliver a packet to 198.51.100.2, the router first queries its routing table and finds the above entry, then sends an ARP query to learn the MAC address of 192.0.2.2, which the server should answer, and finally delivers the packet to the learned MAC address.</li>
</ul></li>
<li><p>The main difference is what IP address is enclosed in the ARP query:</p>

<ul><li>If the destination IP address is an on-link IP address, the ARP query contains the destination IP address itself.</li>
<li>If the destination IP address is in a routed subnet, the ARP query contains the nexthop IP address, as determined by the routing table.</li>
</ul></li>
<li><p>If I want to assign an on-link IPv4 address (e.g. 192.0.2.18/28) to a container, the server should be made to answer ARP queries for that IP address, so that the router delivers packets to the server, which then forwards them to the container.</p>

<ul><li>This technique is called ARP proxy, in which the server responds to ARP queries on behalf of the container.</li>
</ul></li>
</ul><p>The situation is a bit more complex in IPv6 because each network interface can have multiple IPv6 addresses, but the same concept applies.<br />
Instead of the Address Resolution Protocol (ARP), IPv6 uses the <strong>Neighbor Discovery Protocol</strong>, which is part of ICMPv6.<br />
The terminology differs slightly:</p>

<table><thead><tr><th>IPv4</th>
  <th>IPv6</th>
</tr></thead><tbody><tr><td>ARP</td>
  <td>Neighbor Discovery Protocol (NDP)</td>
</tr><tr><td>ARP query</td>
  <td>ICMPv6 Neighbor Solicitation</td>
</tr><tr><td>ARP reply</td>
  <td>ICMPv6 Neighbor Advertisement</td>
</tr><tr><td>ARP proxy</td>
  <td>NDP proxy</td>
</tr></tbody></table><p>If I want to assign an on-link IPv6 address to a container, the server should respond to neighbor solicitations for that IP address, so that the router would deliver packets to the server.<br />
After that, the server's Linux kernel could route the packet to the container's bridge, as if the destination IPv6 address was in a routed subnet.</p>

<h2 data-id="ndp-proxy-daemon-to-the-rescue-i-hope">NDP Proxy Daemon to the Rescue, I Hope?</h2>

<p><a rel="nofollow" href="https://github.com/DanielAdolfsson/ndppd">ndppd</a>, or NDP Proxy Daemon, is a program that listens for neighbor solicitations on a network interface and responds with neighbor advertisements.<br />
It is often recommended for the scenario where the server has only on-link IPv6 but a routed IPv6 subnet is needed.</p>

<p>I installed <a rel="nofollow" href="https://packages.ubuntu.com/focal/ndppd">ndppd</a> on one of my servers, and it worked as expected with this configuration:</p>

<pre spellcheck="false" tabindex="0">proxy uplink {
  rule 2001:db8:fbc0:2:646f:636b:6572::/112 {
    auto
  }
}
</pre>

<p>I can start up a Docker container with a public IPv6 address.<br />
It can reach the IPv6 Internet, and can be pinged from outside.</p>

<pre spellcheck="false" tabindex="0">$ docker network create --ipv6 --subnet=172.26.0.0/16
  --subnet=2001:db8:fbc0:2:646f:636b:6572::/112 ipv6exposed
118c3a9e00595262e41b8cb839a55d1bc7bc54979a1ff76b5993273d82eea1f4

$ docker run -it --rm --network ipv6exposed
  --ip6 2001:db8:fbc0:2:646f:636b:6572:d002 alpine

# wget -q -O- https://www.cloudflare.com/cdn-cgi/trace | grep ip
ip=2001:db8:fbc0:2:646f:636b:6572:d002
</pre>

<p>However, when I repeated the same setup on another KVM server, things didn't go well: the container could not reach the IPv6 Internet at all.</p>

<pre spellcheck="false" tabindex="0">$ docker run -it --rm --network ipv6exposed
  --ip6 2001:db8:f440:2:646f:636b:6572:d003 alpine

/ # ping -c 4 ipv6.google.com
PING ipv6.google.com (2607:f8b0:400a:809::200e): 56 data bytes

--- ipv6.google.com ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
</pre>

<h2 data-id="what-s-wrong-with-ndppd">What's Wrong with <em>ndppd</em>?</h2>

<p>Why does <em>ndppd</em> work on the first server but not on the second?<br />
What's the difference?<br />
We need to go deeper, so I turned to <strong>tcpdump</strong>.</p>

<p>On the first server, I see:</p>

<pre spellcheck="false" tabindex="0">$ sudo tcpdump -pi uplink icmp6
19:13:17.958191 IP6 2001:db8:fbc0::1 &gt; ff02::1:ff72:d002:
    ICMP6, neighbor solicitation, who has 2001:db8:fbc0:2:646f:636b:6572:d002, length 32
19:13:17.958472 IP6 2001:db8:fbc0:2::2 &gt; 2001:db8:fbc0::1:
    ICMP6, neighbor advertisement, tgt is 2001:db8:fbc0:2:646f:636b:6572:d002, length 32
</pre>

<ul><li>The neighbor solicitation from the router comes from a <em>global</em> IPv6 address.</li>
<li><p>The server responds with a neighbor advertisement from its <em>global</em> IPv6 address.<br />
Note that this address differs from the container's address.</p></li>
<li><p>IPv6 works in the container.</p></li>
</ul><p>On the second server, I see:</p>

<pre spellcheck="false" tabindex="0">$ sudo tcpdump -pi uplink icmp6
00:07:53.617438 IP6 fe80::669d:99ff:feb1:55b8 &gt; ff02::1:ff72:d003:
    ICMP6, neighbor solicitation, who has 2001:db8:f440:2:646f:636b:6572:d003, length 32
00:07:53.617714 IP6 fe80::216:3eff:fedd:7c83 &gt; fe80::669d:99ff:feb1:55b8:
    ICMP6, neighbor advertisement, tgt is 2001:db8:f440:2:646f:636b:6572:d003, length 32
</pre>

<ul><li>The neighbor solicitation from the router comes from a <em>link-local</em> IPv6 address.</li>
<li>The server responds with a neighbor advertisement from its <em>link-local</em> IPv6 address.</li>
<li>IPv6 does not work in the container.</li>
</ul><p>Since IPv6 already works on the second server for addresses assigned to the server itself, I added a new IPv6 address and captured its NDP exchange:</p>

<pre spellcheck="false" tabindex="0">$ sudo tcpdump -pi uplink icmp6
00:29:39.378544 IP6 fe80::669d:99ff:feb1:55b8 &gt; ff02::1:ff00:a006:
    ICMP6, neighbor solicitation, who has 2001:db8:f440:2::a006, length 32
00:29:39.378581 IP6 2001:db8:f440:2::a006 &gt; fe80::669d:99ff:feb1:55b8:
    ICMP6, neighbor advertisement, tgt is 2001:db8:f440:2::a006, length 32
</pre>

<ul><li>The neighbor solicitation from the router comes from a <em>link-local</em> IPv6 address, same as above.</li>
<li>The server responds with a neighbor advertisement from the target <em>global</em> IPv6 address.</li>
<li>IPv6 works on the server from this address.</li>
</ul><p>In IPv6, each network interface can have multiple IPv6 addresses.<br />
When the Linux kernel responds to a neighbor solicitation in which the target address is assigned to the same network interface, it <a rel="nofollow" href="https://github.com/torvalds/linux/blob/v5.4/net/ipv6/ndisc.c#L528-L534">uses that particular address</a> as the source address.<br />
On the other hand, <em>ndppd</em> transmits neighbor advertisements via a <a rel="nofollow" href="https://github.com/DanielAdolfsson/ndppd/blob/0.2.5/src/iface.cc#L188">PF_INET6 socket</a> and <a rel="nofollow" href="https://github.com/DanielAdolfsson/ndppd/blob/0.2.5/src/iface.cc#L414">does not specify the source address</a>.<br />
In this case, some complicated rules for <a rel="nofollow" href="https://tools.ietf.org/html/rfc6724">default address selection</a> come into play.</p>

<p>One of these rules is preferring a source address that has the same <em>scope</em> as the destination address (i.e. the router's address).<br />
On my first server, the router uses a <em>global</em> address, and the server selects a <em>global</em> address as the source address on its neighbor advertisement.<br />
On my second server, the router uses a <em>link-local</em> address, and the server selects a <em>link-local</em> address, too.</p>

<p>In an unfiltered network, the router wouldn't care where the neighbor advertisements come from.<br />
However, when it comes to a KVM server on Virtualizor, the hypervisor would treat such packets as attempted IP spoofing attacks, and drop them via <a rel="nofollow" href="https://www.softaculous.com/board/index.php?tid=5662">ebtables rules</a>.<br />
Consequently, the neighbor advertisement never reaches the router, and the router has no way to know how to reach the container's IPv6 address.</p>

<h2 data-id="ndpresponder-ndp-responder-for-kvm-vps">ndpresponder: NDP Responder for KVM VPS</h2>

<p>I tried a few tricks such as <a rel="nofollow" href="https://yoursunny.com/t/2020/preferred-lft-netplan/">deprecating the link-local address</a>, but none of them worked.<br />
Thus, I made my own NDP responder that sends neighbor advertisements from the target address.</p>

<p><strong>ndpresponder</strong> is a Go program using the <a rel="nofollow" href="https://pkg.go.dev/github.com/google/gopacket">GoPacket</a> library.</p>

<ol><li>The program opens an AF_PACKET socket, with a BPF filter for ICMPv6 neighbor solicitation messages.</li>
<li>When a neighbor solicitation arrives, it checks the target address against a user-supplied IP range.</li>
<li>If the target address is in the range used for Docker containers, the program constructs an ICMPv6 neighbor advertisement message and transmits it through the same AF_PACKET socket.</li>
</ol><p>A major difference from <em>ndppd</em> is that the source IPv6 address on a neighbor advertisement message is always set to the same value as the target address of the neighbor solicitation, so that the message wouldn't be dropped by the hypervisor.<br />
This is made possible because I'm sending the message via an AF_PACKET socket, instead of the AF_INET6 socket used by <em>ndppd</em>.</p>

<p><strong>ndpresponder</strong> operates similarly to <em>ndppd</em> in "static" mode.<br />
It does not forward neighbor advertisements to the destination subnet like <em>ndppd</em> does in its "auto" mode, but this feature isn't important on a KVM server.</p>

<p>If <em>ndppd</em> doesn't seem to work on your KVM VPS, give <strong>ndpresponder</strong> a try!<br />
Head to my GitHub repository for installation and usage instructions:<br /><a rel="nofollow" href="https://github.com/yoursunny/ndpresponder">https://github.com/yoursunny/ndpresponder</a></p>
]]>
        </description>
    </item>
    <item>
        <title>VPS IPv6 /64 for SLAAC at home via wireguard?</title>
        <link>https://dev.lowendspirit.com/index.php?p=/discussion/2621/vps-ipv6-64-for-slaac-at-home-via-wireguard</link>
        <pubDate>Sat, 06 Mar 2021 05:49:18 +0000</pubDate>
        <category>Help</category>
        <dc:creator>topogio</dc:creator>
        <guid isPermaLink="false">2621@/index.php?p=/discussions</guid>
<description><![CDATA[<p>I'm looking to hand out public IPv6 addresses from my VPS /64 to my clients at home via SLAAC if possible. I have so far been able to get a single IPv6 public address to work via ndp_proxy (instructions <a rel="nofollow" href="https://github.com/burghardt/easy-wg-quick#enabling-ndp-proxy-instead-of-default-ipv6-masquerading" title="here">here</a>) BUT I have been unsuccessful at allowing multiple IPv6 addresses through the wireguard tunnel to become available to clients.</p>

<p>Here is a dirty diagram of how things would look:</p>

<ol><li><p>VPS <br />
2602:fed2:8888:106:: /64 assigned<br />
eth0 = 2602:fed2:8888:106::1<br />
wg0 = 2602:fed2:8888:106:100::1<br />
-- wg tunnel --</p></li>
<li><p>Home client<br />
wg0 = 2602:fed2:8888:106:100::10 (this will become a 'default gateway' at home - receiving traffic from multiple hosts)<br />
eth0 = 192.168.1.100</p></li>
</ol><p>-- client 1 forwards packets to 192.168.1.100 asking for an IPv6 address, hoping it automatically gets one from the available /64 space.</p>

<p>VPS provider won't give more IPv6 space than /64 unfortunately <img src="https://dev.lowendspirit.com/plugins/emojiextender/emoji/twitter/frown.png" title=":(" alt=":(" height="18" /> - I haven't tried asking for a /128 for a PtP that's routed to it - I was reading that may work but don't know.</p>

<p>I did try /etc/ndppd.conf with this config but did not see any requests coming from the wg0 instance:</p>

<pre spellcheck="false" tabindex="0">proxy eth0 {
  autowire yes
  rule 2602:fed2:8888:106::/64 {
      iface wghub
  }
}

</pre>

<p>Anyone with experience that could comment?</p>
]]>
        </description>
    </item>
   </channel>
</rss>
