The Sob Story
I'm used to being spoiled by static IPs. I just moved away from AT&T service, where I was able to get a /27 for $30 a month (and 5gbit fiber!). Before that I worked for an ISP that gifted me a /23 to play around with (better make use of the delegated subnets from Sprint or they might try to take them back!). Currently I'm in a pickle: the only good ISP that can provide fiber and unmetered bandwidth in my area offers neither static IPs nor IPv6 service. There's only so much you can do with a single dynamic IP address before you find yourself inventing solutions that are more of a problem than they're worth. Many cloud VPS providers offer cheap static IPs and unmetered bandwidth -- so let's borrow them for the homelab!
There are a few different ways to approach this problem, but the VPS provider I've selected for geographic and bandwidth reasons only offers Linux distros, so that's what I have to work with. If I could deploy something like an OpnSense firewall it would simplify management a bit, since the appliance approach is easier to keep updated, so I would definitely choose that option if it were available.
VPS Config
I have a Debian 12 server to work with. First, buy the additional static IPs in the cloud provider's control panel and assign them to the server, then install Wireguard. At this point it's just a stock server with no extra configuration. Note that we are not putting these additional IP addresses directly on this server -- they'll actually live in your homelab.
In this example the static IP address you want to assign to a home server will be 192.88.99.1. I'm going to use 10.255.0.0/30 as my point-to-point subnet, with 10.255.0.2 in the cloud and 10.255.0.1 on my home server.
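If you haven't generated Wireguard keypairs yet, the standard wg(8) invocation works on both ends (run once per machine; only the public keys get exchanged):

```sh
# Generate a Wireguard keypair; umask keeps the private key unreadable
# by other users. Paste the private key into [Interface] on this host
# and the public key into [Peer] on the other end.
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```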
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.255.0.2/30
PostUp = sysctl net.ipv4.ip_forward=1
PostUp = sysctl net.ipv4.conf.all.proxy_arp=1
PostUp = iptables -t nat -A POSTROUTING -s 10.255.0.1/32 -o ens3 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.255.0.1/32 -o ens3 -j MASQUERADE
ListenPort = 51820
PrivateKey = <CLOUD VPS PRIVATE KEY>
[Peer]
PublicKey = <HOMELAB PUBLIC KEY>
AllowedIPs = 10.255.0.0/30, 192.88.99.1/32
💭 Might it be better to use PreUp instead of PostUp here? Perhaps, but I don't particularly care about losing a couple packets.
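With the config in place, the Debian side can be brought up with the wg-quick systemd unit that ships with the wireguard package (assuming the file above lives at /etc/wireguard/wg0.conf):

```sh
# Start the tunnel now and on every boot.
systemctl enable --now wg-quick@wg0
```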
One of the most important bits here is net.ipv4.conf.all.proxy_arp=1. This allows the server to respond to ARP requests for the static IP, which is not actually assigned to any interface on the server but instead lives on the other side of the Wireguard tunnel. Without this, the outside world will not be able to reach you.
🪧 On FreeBSD (OpnSense/pfSense) you would need to set net.inet.ip.forwarding=1 and net.link.ether.inet.proxyall=1
The other detail here is that we're going to permit the other end of the tunnel to NAT its outbound traffic through the VPS. This is required because, to allow the entire internet to pass traffic through this Wireguard tunnel, we'll have to set AllowedIPs = 0.0.0.0/0 on the other end, and Wireguard will install that as a route.
🪧 You can try to avoid this by adding the Table = off setting under [Interface] to disable the routing table changes, but then the return traffic will not go back through the Wireguard tunnel -- it will go out your normal default gateway instead, which will obviously not work. You may be able to work around this if the server is FreeBSD/OpenBSD and you can run the pf firewall on the VM/server, as pf's route-to feature can force traffic from that static IP out through the Wireguard interface. I'm not sure what the equivalent would be in the Linux/iptables world.
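For what it's worth, the closest Linux analogue I know of is policy routing rather than iptables -- an untested sketch, assuming the tunnel interface is wg0 and routing table 100 is unused:

```sh
# Anything sourced from the static IP consults table 100,
# which routes everything out the Wireguard interface
# regardless of the main table's default route.
ip rule add from 192.88.99.1/32 lookup 100
ip route add default dev wg0 table 100
```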
Homelab Config
On the homelab side, you'll want to run Wireguard directly on the VM/jail that will have the static IP. My setup is a FreeBSD jail with VNET, so my config looks like this.
# /etc/rc.conf
# ( other config options redacted )
wireguard_enable="YES"
wireguard_interfaces="wg0"
# My internal network has 10.0.0.0/8 subnets
# so I want to be able to reach them directly
# and not force that traffic through the tunnel.
static_routes="static1"
route_static1="-net 10.0.0.0/8 10.27.2.254"
# A loopback interface for my static IP
cloned_interfaces="lo1"
ifconfig_lo1="inet 192.88.99.1/32 up"
# /usr/local/etc/wireguard/wg0.conf
[Interface]
Address = 10.255.0.1/32
PrivateKey = <HOMELAB PRIVATE KEY>
ListenPort = 51820
[Peer]
PublicKey = <CLOUD VPS PUBLIC KEY>
Endpoint = <CLOUD VPS IP>:51820
AllowedIPs = 10.255.0.0/30, 0.0.0.0/0
PersistentKeepalive = 5
Start it up, and observe the magic: the static IP from the cloud is now inside your homelab and you can actually see the real source IPs on connections! 🎉
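A couple of quick sanity checks from the homelab side (ifconfig.me is just one of many what's-my-IP services):

```sh
# Confirm the handshake is recent and the transfer counters are moving.
wg show wg0
# Confirm traffic sourced from the static IP egresses via the VPS --
# this should print 192.88.99.1.
curl --interface 192.88.99.1 https://ifconfig.me
```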
Local Routing
This is great, but there's one problem: your other computers are going to send their traffic all the way to the cloud and have their packets transported over the Wireguard tunnel to the computer that's just inches away. You can fix this with a static route in your firewall/router. Configure it so the cloud IP address is routed to the private IP of the server with the static IP and it will ensure your local traffic never has to leave the local network.
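What that route looks like depends on your router; as a sketch, assuming the homelab server's internal address is 10.27.2.5 (substitute your own):

```sh
# FreeBSD-style router/firewall:
route add 192.88.99.1/32 10.27.2.5
# Linux-style router:
ip route add 192.88.99.1/32 via 10.27.2.5
```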
Bonus Round: iBGP
My VMs and jails are on DHCP and I find it rather annoying to set up static leases for All The Things™️, so I just use iBGP to advertise the route to my router/firewall.
On my firewall, install and configure frr. All my servers are in a DMZ subnet of 10.27.2.0/24, and I want the BGP daemon to be listening on 10.255.255.179.
🪧 I run all my internal services on the firewall with 10.255.255.0/24 IPs on loopback interfaces as a personal preference, but you'd probably just use the gateway IP address for your subnet. YMMV.
! frr.conf
!
! Zebra configuration saved from vty
! 2017/03/03 20:21:04
!
frr defaults traditional
!
!
!
router bgp 65551
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 no bgp ebgp-requires-policy
 bgp network import-check
 bgp graceful-restart
 bgp router-id 10.255.255.179
 neighbor DMZ peer-group
 neighbor DMZ remote-as 65551
 bgp listen range 10.27.2.0/24 peer-group DMZ
 !
 address-family ipv4 unicast
  neighbor DMZ activate
  neighbor DMZ next-hop-self
 exit-address-family
 !
 address-family ipv6 unicast
 exit-address-family
!
!
!
!
!
!
!
line vty
!
This permits any server in 10.27.2.0/24 to open BGP sessions and inject routes into the routing table. This way, even if my server's internal IP address changes, everything just keeps working.
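Once a peer comes up, you can verify the session and the learned route from vtysh:

```sh
# Session state for every neighbor in the DMZ peer-group.
vtysh -c 'show ip bgp summary'
# The static IP should appear with the jail's address as next hop.
vtysh -c 'show ip route 192.88.99.1/32'
```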
On the VM/jail side I run exabgp with this configuration:
# /usr/local/etc/exabgp/exabgp.conf
neighbor 10.255.255.179 {
    router-id 10.27.2.5;
    local-address 10.27.2.5;
    local-as 65551;
    peer-as 65551;
    hold-time 30;
    family {
        ipv4 unicast;
    }
    static {
        route 192.88.99.1/32 {
            next-hop self;
            withdraw;
        }
    }
}
🪧 You may notice that my local IP is hardcoded into this config file for the router-id and local-address fields. I have a method to dynamically generate this file before the service starts using an m4 template. It might be archaic, but it works!
And that's it. The route is automatically injected into my firewall/router so all local devices can find the "cloud" IP address inside my homelab.