Proxmox VE ACME/Certbot Hooks

Let's Encrypt certificates are an easy way to help secure your Proxmox VE installation. Sometimes, however, you want to use them for more. In my case, I had a local service that I also wanted to use the certificate for, but when the certificate renewed the service would not restart. acme.sh has built-in hook functionality for exactly this need, but unfortunately it's not easily accessible if you also want all the features of the Proxmox GUI and its certificate management, since Proxmox handles all the calls to ACME itself and doesn't provide a way to hook into them.

Of course it would be possible to run ACME independently and then restart the needed PVE services, but I like to tinker and I wanted to go the other way: how can I keep the PVE certificate management and still restart my local service after renewal?

After a ton of Googling, I finally found a solution. Proxmox VE calls the /usr/bin/pveupdate script to update certificates. It's just a Perl script, and if you scroll down you'll find a $renew subroutine containing the lines:

print "Restarting pveproxy after renewing certificate\n";
PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'pveproxy']);

So I took those and added:

print "Restarting myservice after renewing certificate\n";
PVE::Tools::run_command(['systemctl', 'reload-or-restart', 'myservice']);

A certificate renewal via the GUI now also restarts my service. The same approach could be used to hook in any kind of action you need. The only downside is that this script gets overwritten each time Proxmox is updated, so the change has to be reapplied. Not the most elegant solution, but it works.
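
Since the change disappears whenever an update replaces /usr/bin/pveupdate, a quick check like the following (just a sketch; "myservice" is whatever unit you added) will tell you if it needs to be reapplied:

grep -q 'myservice' /usr/bin/pveupdate || echo "pveupdate hook missing - reapply it"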

IPv6 and *sense on OVH

UPDATE 2024-08-01: Shortly after posting, OVH moved their vRack IPv6 project to Open Beta. You can now configure a routed IPv6 /56 into the vRack. While this article still applies to servers without a vRack interface, the vRack implementation doesn’t need all the hoop-jumping to work.

IPv6 is well into worldwide deployment. According to Google, around 45% of users now reach its services over IPv6. It's more appealing than ever to make sure you have a functional dual-stack network deployment.

This task is made difficult by some providers whose IPv6 configurations are less than ideal (looking at you, OVHcloud). Sure, the most basic of setups may work, but anything more complex breaks and causes headaches. The dual-stack appeal is still there, though; so how do we make it work?

What is the goal?

For the purpose of this article, let's assume we're trying to set up a dedicated Proxmox VE server with multiple VMs. Some of these VMs need to be directly accessible from the Internet, while others can sit behind NAT. Critically, we need both IPv4 and IPv6 stacks, particularly for the public-facing services. We're going to use OPNsense, but pfSense is similar enough that it will likely work just fine.

The problem.

The problems with OVH’s IPv6 implementation begin with the fact that they do not use prefix delegation and only give customers a single /64 to work with (even though the rest of the /56 appears to be unused). Instead of routing the entire block to your server and using PD, they give you a fixed gateway address that’s within your /56 but outside the /64 they assign to you. As a result, configuring devices becomes much more complicated if you’re doing anything more than setting up a single public-facing server.

Additionally, their routers will not route traffic to an IPv6 address until they have first received a Neighbour Advertisement for it. For our use case, this means we need to configure *sense either to forward NDP packets to the LAN, or to reply to them as though it held those addresses itself.

Initial setup.

So let’s deal with this.

The first step is getting IPv6 configured on the router. Use the web GUI to configure a static IPv6 address on the WAN interface, within the /64 network provided by OVH. Setting up a gateway poses a bit of a challenge, because OPNsense doesn’t support far IPv6 gateways (gateways outside the local subnet). To add a default gateway outside the local subnet, we first have to tell the system how to reach that gateway. We do this by creating an on-link route that tells the router it will find that address directly on the WAN link, using the following command:

route -6 add xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0

Make sure to replace the address with the gateway address OVH has provided you, and vtnet0 with your WAN interface. You can then add the gateway via the web interface, or with another command:

route -6 add default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff

These changes will disappear as soon as you restart the router, so add them to a custom rc script, /usr/local/etc/rc.d/ovhipv6, so they are applied at boot time:

#!/bin/sh

. /etc/rc.subr

name="ovhipv6"
rcvar=ovhipv6_enable
start_cmd="${name}_start"
stop_cmd="${name}_stop"

ovhipv6_start()
{
        # On-link route to the OVH gateway, then the default route through it.
        # Replace the address with your gateway and vtnet0 with your WAN interface.
        route -6 add xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0
        route -6 add default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff
}

ovhipv6_stop()
{
        route -6 delete xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff -interface vtnet0
        route -6 delete default xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff
}

load_rc_config $name
: ${ovhipv6_enable:=no}
run_rc_command "$1"

Then enable it in rc.conf by creating /etc/rc.conf.d/ovhipv6

ovhipv6_enable="YES"
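
The rc framework also needs the script itself to be executable; assuming you created it by hand, set the permissions before rebooting:

chmod 755 /usr/local/etc/rc.d/ovhipv6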

You should now be able to ping -6 google.com and receive replies.

First Hiccup.

So we’ve assigned our only /64 to our WAN interface; how are we supposed to get IPv6 to our VMs behind it?

The hopeful among you might be saying, “We have a router, let’s route a small subset of the /64 – say a /65 – to the LAN.” This does work, with an NDP proxy running to forward/reply to the NDP requests from the OVH router. I opted for a different configuration, albeit one that still requires an NDP proxy.

Using ULAs.

My solution uses Unique Local Addresses from fc00::/7 of the IPv6 address space. These addresses are intended to be globally unique and are routable within private networks, but they are not meant to be routed on the open Internet.

To build a ULA prefix you start with fd followed by 40 random bits of hex. Make sure you generate your own random 40 bits. This gets you something like fd3d:a7c3:2ef1::/48. You can then add another 16 bits to define your “subnet”, giving fd3d:a7c3:2ef1:1234::/64.
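
If you'd like to generate the random portion on the router itself, a one-liner along these lines (just a sketch; any decent source of randomness will do) prints the 40 bits you need:

# Prints 5 random bytes (40 bits) as hex, e.g. 3da7c32ef1, to build an fdxx:xxxx:xxxx::/48 prefix
openssl rand -hex 5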

Configure a static address within this new subnet on the LAN interface.

So now we have a /64 for the LAN side, but OVH’s routers won’t route it; so what do we do?

NPTv6

The solution we need is NPTv6. Network Prefix Translation “translates” one IPv6 prefix into another. So we can tell OPNsense to translate our fd3d:a7c3:2ef1:1234::/64 prefix to the public xxxx:xxxx:xxxx:xxxx::/64 prefix that OVH provides. As IPv6 packets enter the WAN, the external prefix is converted to our ULA one and the packet is routed to the local machine. The problem with translating the entire /64 is that our WAN interface is using one of the addresses in that space, and we don’t want it translated. The solution is to only translate a /65. This leaves half the network space available to the WAN interface, while anything in the other half gets converted into one of the local addresses:

xxxx:xxxx:xxxx:xxxx:: - xxxx:xxxx:xxxx:xxxx:7fff:ffff:ffff:ffff will be available on the WAN interface.

xxxx:xxxx:xxxx:xxxx:8000:: - xxxx:xxxx:xxxx:xxxx:ffff:ffff:ffff:ffff will be translated to the ULA addresses.

Under Firewall > NAT > NPTv6 you can add a new rule for the WAN interface. The external prefix will be xxxx:xxxx:xxxx:xxxx:8000::/65 and the internal prefix will be fd3d:a7c3:2ef1:1234:8000::/65.

Now assign a VM on the LAN side a static address in the upper /65 of your ULA range (anything 8000 or above in the fifth group). You can also configure DHCPv6 if you prefer, but make sure the ranges being handed out are in the upper /65 as well, otherwise they won’t be translated to the public prefix.
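
As an illustration, a Debian-style VM could be configured statically like this (the interface name and all addresses are examples; substitute your own ULA prefix and router address):

# /etc/network/interfaces snippet on a hypothetical LAN VM
auto ens18
iface ens18 inet6 static
    # An address in the upper /65 of the ULA range, so NPTv6 will translate it
    address fd3d:a7c3:2ef1:1234:8000::10/64
    # The ULA address of the OPNsense LAN interface
    gateway fd3d:a7c3:2ef1:1234::1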

You should now be able to ping the router and any other VMs on the LAN side with the ULAs assigned. If you packet capture on the WAN interface, you should see that packets from the LAN side are being translated correctly. But if you try to ping anything on the Internet side, no replies make it back. What gives?
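
A quick way to watch this from the OPNsense shell is a capture on the WAN interface (vtnet0 is just the example WAN name used earlier):

# Show ICMPv6 traffic (pings, NDP) entering and leaving the WAN interface
tcpdump -ni vtnet0 icmp6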

NDP Proxy

Due to OVH’s absurd implementation of IPv6, the provided gateway will not route any IPv6 traffic to an address until that address has replied to a Neighbour Solicitation with a Neighbour Advertisement. NDP packets aren’t routable, so NPTv6 won’t translate and route them, and our router won’t reply to them itself because it doesn’t hold the address being solicited. And if we gave it that address, it would break the whole setup.

The solution is ndproxy, which will reply to NDP Solicitations coming from specified addresses, minus some exceptions. Since the addresses we’re using are routable, and all the OVH gateway needs to know is that the address is a neighbour, this works well. The problem is that ndproxy isn’t supported by *sense, so we have to compile it ourselves.

Compiling and configuring

Make sure you have git installed, and clone the OPNsense source as well as the upstream FreeBSD ports.

pkg install git
git clone --recurse-submodules https://github.com/opnsense/src /usr/src
git clone --recurse-submodules https://git.FreeBSD.org/ports.git /usr/ports-upstream

Move into the ndproxy directory and make and install the kernel module.

cd /usr/ports-upstream/net/ndproxy
make clean
make install

This process will have to be completed every time there is a kernel update.
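
To sanity-check that the module built against the running kernel, you can load it by hand and look for it in the loaded module list (assuming the port installed it into the default module path):

# Load the ndproxy kernel module and confirm it is present
kldload ndproxy
kldstat | grep ndproxy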

ndproxy is configured through sysctl values, which its rc script applies automatically at startup. To configure it, we put the values we want in /etc/rc.conf.d/ndproxy:

# Start at boot
ndproxy_enable="YES"
# The WAN interface that will be listening for NDP Solicitations
ndproxy_uplink_interface="vtnet0"
# The MAC address of the WAN interface that will be used to fill the NDP Advertisements
ndproxy_downlink_mac_address="XX:XX:XX:XX:XX:XX"
# Addresses NOT to reply to.  Include the addresses (including link-local) of the router and any other
# system that is on the WAN side and needs to respond to its own NDP packets. (Separate with semicolons)
ndproxy_exception_ipv6_addresses="xxxx:xxxx:xxxx:xxxx:yyyy:yyyy:yyyy:yyyy;fe80::xxxx:xxxx:xxxx:xxxx"
# Only solicitations from the addresses listed below will be handled by ndproxy; NDP packets sent from any
# other address will be ignored.  Put all addresses that the provider sends NDP packets from in this list. (Separate with semicolons)
ndproxy_uplink_ipv6_addresses="xxxx:xxxx:xxxx:xxff:ff:ff:ff:ff;xxxx:xxxx:xxxx:xxff:ff:ff:ff:fe;xxxx:xxxx:xxxx:xxff:ff:ff:ff:fd"

Start ndproxy and test.

You should now be ready to test. Start ndproxy via /usr/local/etc/rc.d/ndproxy start and begin testing. Send a ping from one of the LAN machines and you should get replies within a few seconds. If you packet capture on the WAN interface again you will see the NDP solicitations and advertisements going to and from the gateway as ndproxy replies to them.
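
If it doesn’t behave, one quick check is whether the rc script actually applied the configuration. ndproxy exposes its settings as sysctl values under net.inet6 (the exact OID names may vary by version), so listing them shows what it is currently running with:

# List any ndproxy-related sysctl values currently set
sysctl net.inet6 | grep -i ndproxy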