Discussion: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Andrew Lemin
2018-09-11 13:59:40 UTC
Hi list,

I use an OpenVPN based internet access service (like NordVPN, AirVPN etc).

The issue with these public VPN services is that the VPN servers are always congested. The most I’ll get is maybe 10 Mbit/s through one server.

My local connection is a few hundred Mbit/s..

So I had the idea of running multiple openvpn tunnels to different servers, and load balancing outbound traffic across the tunnels.

Sounds simple enough..

However, every VPN tunnel uses the same subnet and next-hop gateway. This of course won’t work with normal routing.

So my question:
How can I use rdomains or rtables with openvpn clients, so that each VPN is started in its own logical VRF?

And is it then a case of just using PF to push the outbound packets into the various rdomains/rtables randomly (of course maintaining state)? LAN interface would be in the default rdomain/rtable..

My confusion is that an interface needs to be bound to the logical VRF, but the tunX interfaces are created dynamically by openvpn.

So I am not sure how to configure this within hostname.tunX etc, or if I’m even approaching this correctly?

Thanks, Andy.
Andreas Krüger
2018-09-11 20:59:13 UTC
Maybe rdomains?
Andy Lemin
2018-09-12 08:33:30 UTC
Hi Andreas,

Thanks for your reply. Sorry, I should have been clearer.

I know that rdomains are the correct method with overlapping addressing.

The challenge is that I cannot figure out how to get openvpn to initialise its resulting tunX interface directly into the correct rdomain.

You normally move interfaces to an rdomain with ‘ifconfig em1 rdomain 1’.

However, is there a way I can get openvpn to do this at the time of setting up the interface?

The problem is that you cannot just create the tunnel, and then move it over to an rdomain afterwards if there is already another conflicting tunnel in the default rdomain (as the tunnel just won’t come up due to the address conflict).

I realise I could redesign it so that there is never a tunX in the default rdomain, so that tunnels can be set up in the default and then moved over. But this feels rather flawed/restricting, and not the proper way of doing things.

I would like to script the management of these tunnels, and so if there was a way of setting up the tunnel in its own rdomain directly that would be a lot more robust :)

Thanks for your time. Andy.



Sent from a teeny tiny keyboard, so please excuse typos.
Post by Andreas Krüger
Maybe rdomains?
Stuart Henderson
2018-09-12 16:48:28 UTC
Post by Andrew Lemin
However every vpn tunnel uses the same subnet and nexthop gw. This of course won’t work with normal routing.
rtable/rdomain with openvpn might be a bit complex; I think it may need
persist-tun, plus creating the tun device in advance with the wanted rdomain
(you need the VPN tunnel to be in one rdomain, but the UDP/TCP connection
in another).

Assuming you are using tun (and so point-to-point connections) rather
than tap, try one or other of these:

- PF route-to and 'probability', IIRC it works to just use a junk
address as long as the interface is correct ("route-to ***@tun0",
"route-to ***@tun1").

- ECMP (net.inet.ip.multipath=1) and multiple route entries with
the same priority. Use -ifp to set the interface ("route add
default -priority 8 -ifp $interface $dest").

The "destination address" isn't really very relevant for routing
on point-to-point interfaces (though current versions of OpenBSD
do require that it matches the destination address on the interface,
otherwise they won't allow the route to be added).
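
Untested, but sketches of both, with em1 as the inside interface and
192.0.2.1 as a placeholder gateway address:

# PF: split sessions across the tunnels with 'probability'
pass in on em1 probability 50% route-to (tun0 192.0.2.1)
pass in on em1 route-to (tun1 192.0.2.1)

# ECMP: equal-priority multipath default routes
# (-mpath allows more than one route to the same destination; per the
# note above, the gateway must match the interface's destination address
# on current OpenBSD)
sysctl net.inet.ip.multipath=1
route add -mpath default -priority 8 -ifp tun0 192.0.2.1
route add -mpath default -priority 8 -ifp tun1 192.0.2.1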
Andrew Lemin
2018-11-27 22:18:53 UTC
Hi,

So, using the information Stuart and Andreas provided, I have been testing
this (load balancing across multiple VPN servers to improve bandwidth),
and I have multiple VPNs working properly within their own rdomains.

* However 'route-to' is not load balancing with rdomains :(

I have not been able to use the simpler solution you highlighted, Stuart
(basic multipath routing), as the tunnel subnets overlap.
So I think this is a potential bug, but I need your wisdom to verify my
working first :)

Re; Load Balancing SSL VPNs using OpenBSD 6.4, with VPN TunX interfaces in
unique rdomains (overlapping tunnel subnets)

Configure sysctls
# Ensure '/etc/sysctl.conf' contains;
net.inet.ip.forwarding=1 # Permit forwarding (routing) of packets
net.inet.ip.multipath=1 # 1=Enable IP multipath routing

# Activate the sysctls now, without a reboot
sysctl net.inet.ip.forwarding=1
sysctl net.inet.ip.multipath=1

Pre-create tunX interfaces (in their respective rdomains)
# Ensure '/etc/hostname.tun1' contains;
up
rdomain 1

# Ensure '/etc/hostname.tun2' contains;
up
rdomain 2

# Bring up the new tunX interfaces
sh /etc/netstart

fw1# ifconfig tun1
tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
index 8 priority 0 llprio 3
groups: tun
status: down
fw1# ifconfig tun2
tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
index 9 priority 0 llprio 3
groups: tun
status: down

# Start all SSL VPN tunnels (each in its own VRF/rdomain)
/usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn \
    --writepid /var/run/openvpn.tun1.pid --dev tun1 &
/usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn \
    --writepid /var/run/openvpn.tun2.pid --dev tun2 &
('auth-user-pass' updated in the config files)

Each openvpn tunnel should start with the VPN's outer connection itself in
rtable 0, but with each virtual tunX interface placed into its own unique
routing domain.
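
(For reference, the tunnel-specific parts of each .ovpn config look
roughly like this; the port and auth file path are illustrative, the rest
is the stock client config, with Stuart's persist-tun suggestion included;)

client
dev tun1                 # matches the pre-created interface in rdomain 1
proto udp
remote ch70.nordvpn.com 1194
persist-tun              # keep the tun device (and its rdomain) across restarts
auth-user-pass /etc/openvpn/auth.txt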

This results in the following tunX interface and rtable updates;
fw1# ifconfig tun1
tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
index 6 priority 0 llprio 3
groups: tun
status: active
inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
fw1# ifconfig tun2
tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
index 7 priority 0 llprio 3
groups: tun
status: active
inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
fw1# route -T 1 show
Routing tables
Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
10.8.8.1           10.8.8.128         UH         0        0     -     8 tun1
10.8.8.128         10.8.8.128         UHl        0        0     -     1 tun1
localhost          localhost          UHl        0        0 32768     1 lo1
fw1# route -T 2 show
Routing tables
Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
10.8.8.1           10.8.8.129         UH         0        0     -     8 tun2
10.8.8.129         10.8.8.129         UHl        0        0     -     1 tun2
localhost          localhost          UHl        0        0 32768     1 lo2

# Test each tunnel - Ping the remote connected vpn peer within each rdomain
ping -V 1 10.8.8.1
ping -V 2 10.8.8.1

Shows both VPN tunnels are working independently with the overlapping
addressing :)

# To be able to test each tunnel beyond the peer IP, add some default
routes to the rdomains;
route -T 1 -n add default 10.8.8.1
route -T 2 -n add default 10.8.8.1

# Test each tunnel - Ping beyond the connected peer
ping -V 1 8.8.8.8
ping -V 2 8.8.8.8

Shows both VPN tunnels are definitely working independently with the
overlapping addressing :)

# Reverse routing - I have read in various places that PF's 'route-to' can
be used for jumping rdomains in the forward path of a session, but the
reply packets need a matching route in the remote rdomain for the reply
destination (the matching route ensures the reply packet is passed through
the routing table and into PF processing, where PF can manage the return
back to the default rdomain, etc.).

But as I am using outbound NAT on the tunX interfaces, there is always a
matching route for the reply traffic, and so a route for the internal
subnet is not needed within rdomains 1 and 2.


# Finally ensure '/etc/pf.conf' contains something like;
if_ext = "em0"
if_int = "em1"

#CDR = 80 Down/20 Up
queue out_ext on $if_ext flows 1024 bandwidth 18M max 19M qlimit 1024 default
queue out_tun1 on tun1 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_tun2 on tun2 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_int on $if_int flows 1024 bandwidth 74M max 78M qlimit 1024 default

#MTU = 1500
match proto tcp all scrub (no-df max-mss 1460) set prio (2,5)
match proto udp all scrub (no-df max-mss 1472) set prio (2,5)
match proto icmp all scrub (no-df max-mss 1472) set prio 7

#NAT all outbound traffic
match out on $if_ext from any to any nat-to ($if_ext)
match out on tun1 from any to any nat-to (tun1) rtable 1
match out on tun2 from any to any nat-to (tun2) rtable 2

#Allow outbound traffic on egress for vpn tunnel setup etc
pass out quick on { $if_ext } from self to any set prio (3,6)

#Load balance outbound traffic from internal network across tun1 and tun2
# - THIS IS NOT WORKING - IT ONLY USES THE FIRST TUNNEL
pass in quick on { $if_int } to any route-to { (tun1 10.8.8.1), (tun2 10.8.8.1) } round-robin set prio (3,6)

#Allow outbound traffic over vpn tunnels
pass out quick on tun1 to any set prio (3,6)
pass out quick on tun2 to any set prio (3,6)


# Verify which tunnels are being used
systat ifstat

*This command shows that all the traffic is flowing over the first tun1
interface only, and the second tun2 is never used.*


# NB: I have tried with and without 'set state-policy if-bound'.

I have tried all the load-balancing pool options: round-robin, random,
least-states and source-hash.

If I change the 'route-to' pool to "{ (tun2 10.8.8.1), (tun1 10.8.8.1) }",
then only tun2 is used instead.. :(

So 'route-to' seems to only use the first tunnel in the pool.

Any advice on what is going wrong here? I am wondering if I am falling
victim to some processing-order issue with PF, or if this is a real bug.

Thanks, Andy.
Philip Higgins
2018-11-28 01:40:17 UTC
At a guess, route-to is confused by the same IP, but I haven't looked at the internals.

Maybe try adding pair interfaces (with different addresses) to each rdomain;
then you can use route-to to select between them.
You already have a default route set in each rdomain, so traffic will find its way from there.

eg.

# /etc/hostname.pair1
group pinternal
rdomain 1
inet 10.255.1.1 255.255.255.0
!/sbin/route -T1 add <your internal subnet(s)> 10.255.1.2

# /etc/hostname.pair11
group pinternal
inet 10.255.1.2 255.255.255.0
patch pair1

# /etc/hostname.pair2
group pinternal
rdomain 2
inet 10.255.2.1 255.255.255.0
!/sbin/route -T2 add <your internal subnet(s)> 10.255.2.2

# /etc/hostname.pair12
group pinternal
inet 10.255.2.2 255.255.255.0
patch pair2

# /etc/pf.conf
...
pass on pinternal
...
pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
round-robin set prio (3,6)

Have not tested exactly this, but similar to my current setup.
Might not need the static routes, if the right pf magic is happening.
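
To sanity-check the pair links before the pf bits, something like:

ping 10.255.1.1        # rdomain 0 -> pair1 in rdomain 1
ping -V 1 10.255.1.2   # rdomain 1 -> pair11 back in rdomain 0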


-Phil
Post by Andrew Lemin
Hi,
So using the information Stuart and Andreas provided, I have been testing
this (load balancing across multiple VPN servers to improve bandwidth).
And I have multiple VPNs working properly within there own rdomains.
* However 'route-to' is not load balancing with rdomains :(
I have not been able to use the more simple solution you highlighted Stuart
(using basic multipath routing), as the tunnel subnets overlap.
So I think this is a potential bug, but I need your wisdom to verify my
working first :)
Re; Load Balancing SSL VPNs using OpenBSD 6.4, with VPN TunX interfaces in
unique rdomains (overlapping tunnel subnets)
Configure sysctl's
# Ensure '/etc/sysctl.conf' contains;
net.inet.ip.forwarding=1 # Permit forwarding (routing) of packets
net.inet.ip.multipath=1 # 1=Enable IP multipath routing
# Active sysctl's now without reboot
sysctl net.inet.ip.forwarding=1
sysctl net.inet.ip.multipath=1
Pre-create tunX interfaces (in their respective rdomains)
# Ensure '/etc/hostname.tun1' contains;
up
rdomain 1
# Ensure '/etc/hostname.tun2' contains;
up
rdomain 2
# Bring up the new tunX interfaces
sh /etc/netstart
fw1# ifconfig
tun1
tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
index 8 priority 0 llprio 3
groups: tun
status: down
fw1# ifconfig tun2
tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
index 9 priority 0 llprio 3
groups: tun
status: down
# Start all SSL VPN tunnels (in unique VRF/rdomain's)
/usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid
/var/run/openvpn.tun1.pid --dev tun1 &
/usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid
/var/run/openvpn.tun2.pid --dev tun2 &
('auth-user-pass' updated in config files)
Each openvpn tunnel should start using 'rtable 0' for the VPN's outer
connection itself, but with each virtual tunnel TunX interface being placed
into a unique routing domain.
This results in the following tunX interface and rtable updates;
fw1# ifconfig
tun1
tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
index 6 priority 0 llprio 3
groups: tun
status: active
inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
fw1# ifconfig tun2
tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
index 7 priority 0 llprio 3
groups: tun
status: active
inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
fw1# route -T 1 show
Routing tables
Destination Gateway Flags Refs Use Mtu Prio
Iface
10.8.8.1 10.8.8.128 UH 0 0 - 8
tun1
10.8.8.128 10.8.8.128 UHl 0 0 - 1
tun1
localhost localhost UHl 0 0 32768 1
lo1
fw1# route -T 2 show
Routing tables
Destination Gateway Flags Refs Use Mtu Prio
Iface
10.8.8.1 10.8.8.129 UH 0 0 - 8
tun2
10.8.8.129 10.8.8.129 UHl 0 0 - 1
tun2
localhost localhost UHl 0 0 32768 1
lo2
# Test each tunnel - Ping the remote connected vpn peer within each rdomain
ping -V 1 10.8.8.1
ping -V 2 10.8.8.1
Shows both VPN tunnels are working independently with the overlapping
addressing :)
# To be able to test each tunnel beyond the peer IP, add some default
routes to the rdomains;
route -T 1 -n add default 10.8.8.1
route -T 2 -n add default 10.8.8.1
# Test each tunnel - Ping beyond the connected peer
ping -V 1 8.8.8.8
ping -V 2 8.8.8.8
Shows both VPN tunnels are definitely working independently with the
overlapping addressing :)
# Reverse routing - I have read in various places that PF's 'route-to' can
be used for jumping rdomains's in the forward path of the session, but the
reply packets need any matching route in the remote rdomain for the reply
destination (the matching route is to ensure in the reply packet is passed
through the routing table and gets into the PF processing, where PF can
manage the return back to the default rdomain etc.
But as I am using outbound NATing on the tunX interfaces, there is always a
matching route for the reply traffic. And so a route for the internal
subnet is not needed within rdomain 1 and 2.
# Finally ensure '/etc/pf.conf' contains something like;
if_ext = "em0"
if_int = "em1"
#CDR = 80 Down/20 Up
queue out_ext on $if_ext flows 1024 bandwidth 18M max 19M qlimit 1024
default
queue out_tun1 on tun1 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_tun2 on tun2 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_int on $if_srx flows 1024 bandwidth 74M max 78M qlimit 1024
default
#MTU = 1500
match proto tcp all scrub (no-df max-mss 1460) set prio (2,5)
match proto udp all scrub (no-df max-mss 1472) set prio (2,5)
match proto icmp all scrub (no-df max-mss 1472) set prio 7
#NAT all outbound traffic
match out on $if_ext from any to any nat-to ($if_ext)
match out on tun1 from any to any nat-to (tun1) rtable 1
match out on tun2 from any to any nat-to (tun2) rtable 2
#Allow outbound traffic on egress for vpn tunnel setup etc
pass out quick on { $if_ext } from self to any set prio (3,6)
#Load balance outbound traffic from internal network across tun1 and tun2 -
THIS IS NOT WORKING - IT ONLY USES FIRST TUNNEL
pass in quick on { $if_int } to any route-to { (tun1 10.8.8.1), (tun2
10.8.8.1) } round-robin set prio (3,6)
#Allow outbound traffic over vpn tunnels
pass out quick on tun1 to any set prio (3,6)
pass out quick on tun2 to any set prio (3,6)
# Verify which tunnels are being used
systat ifstat
*This command shows that all the traffic is only flowing over the first
tun1 interface, and the second tun2 is never ever used.*
# NB; I have tried with and without 'set state-policy if-bound'.
I have tried all the load balancing policies; round-robin, random,
least-states and source-hash
If I change the 'route-to' pool to "{ (tun2 10.8.8.1), (tun1 10.8.8.1) }",
then only tun2 is used instead.. :(
So 'route-to' seems to only use the first tunnel in the pool.
Any advice on what is going wrong here. I am wondering if I am falling
victim to some processing-order issue with PF, or if this is a real bug?
Thanks, Andy.
Tom Smyth
2018-11-28 06:02:55 UTC
Howdy...
starting OpenVPN in different rdomains works pretty well for us.

A crude way of doing that is to add the following line to the bottom
of your tun interface's hostname.if file (starting openvpn in rdomain 2):

!/sbin/route -T 2 exec /usr/local/sbin/openvpn --config /etc/openvpn2.conf & /usr/bin/false

we were using the L2 tunnels (not L3), but this worked pretty well for us.

I think you can also use rcctl and set rtable, as described very well here;
using symbolic links in /etc/rc.d/ you can create multiple openvpn
services, each with their own settings...
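
eg. something like this (untested as written; the service name is just an example);

# create a second openvpn service and run it in rtable 2
ln -s /etc/rc.d/openvpn /etc/rc.d/openvpn2
rcctl set openvpn2 flags --config /etc/openvpn2.conf
rcctl set openvpn2 rtable 2
rcctl enable openvpn2
rcctl start openvpn2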
I hope this helps
--
Kindest regards,
Tom Smyth

Mobile: +353 87 6193172
Tom Smyth
2018-11-28 06:04:16 UTC
Sorry the "here" I was referring to earlier was "here" as shown below
https://lab.rickauer.com/post/2017/07/16/OpenBSD-rtables-and-rdomains
Andy Lemin
2018-11-28 21:36:12 UTC
Hi,

So for completeness, I did some more testing with your suggestions.

First I tried using different next-hops in each of the interface-nexthop pairs in the route-to pool (as the next hop doesn’t really matter with p2p interfaces). And it did start to work! :)

But after some more testing it still wasn’t behaving very well, and was nearly always using the first tunX defined (I didn’t do much more testing). If I ran a bunch of downloads to load test, they all ended up on tun1..


I had never heard of ‘pair’ interfaces before, but as soon as I realised what they were, I realised what a great idea they are! :)

So I created multiple pairs of ‘pair’ interfaces: one pair member in the default rdomain along with the internal interface, and the other pair member in the relevant VPN rdomain along with the tun interface, i.e. creating an rdomain tunnel between rdomain 0 and each VPN rdomain.

This allowed me to configure unique, non-overlapping subnets on each of the ‘pair’-based p2p tunnels.

And that of course simplified the ‘route-to’ statement to just a list of different next-hops: one for each of the ‘pair’ interfaces, one per VPN (without needing to name the interfaces, as the next-hops are all reachable in rdomain 0).

So without requiring PF to do any rdomain jumping/tunnelling itself (leaving the rdomain crossing to the ‘pair’ interfaces), VPN load balancing is now working really very well.
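
For the archive, the working shape ends up roughly like this (two tunnels
shown, using Phil's pair addressing; my real config has four);

# next-hops are the far ends of the pair links, reachable from rdomain 0
pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } round-robin set prio (3,6)

# each vpn rdomain still routes out via its own tunnel
route -T 1 -n add default 10.8.8.1
route -T 2 -n add default 10.8.8.1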

I can now utilise all the CPU cores on my router where I couldn’t before :) I have four real cores, so I’m running four OpenVPN tunnels :D


This appears to confirm that ‘route-to’ only works well for rdomain tunnelling when routing towards a single rdomain per pf rule.

I guess Henning would need to pass his wisdom on this and decide if it’s a bug or just not supported yet.

Thanks guys :)