Discussion:
Any typical pf.conf or sysctl settings to tweak/speedup NAT/networking stack throughput? (+ don't use USB dongles?)
t***@openmailbox.org
2017-12-14 04:30:14 UTC
Hi!

Do you see any typical pf.conf or sysctl settings to tweak/speedup NAT/networking stack throughput?

(On USB2 dongles, sigh.

Current speed is quite OK actually, a client with good hardware would get up to 70mbps through the NAT. I was still curious to know if there are any obvious toggles in sysctl/pf.conf for up:ing NAT/networking stack throughput though. RAM is not an issue with me, I have plenty. I thought possibly some settings were set to unnecessarily low defaults, for OpenBSD to work well on machines with <1GB RAM, say.)
Solène Rapenne
2017-12-14 09:32:50 UTC
Post by t***@openmailbox.org
Hi!
Do you see any typical pf.conf or sysctl settings to tweak/speedup
NAT/networking stack throughput?
(On USB2 dongles, sigh.
Current speed is quite OK actually, a client with good hardware would
get up to 70mbps through the NAT. I was still curious to know if there
are any obvious toggles in sysctl/pf.conf for up:ing NAT/networking
stack throughput though. RAM is not an issue with me, I have plenty. I
thought possibly some settings were set to unnecessarily low defaults,
for OpenBSD to work well on machines with <1GB RAM, say.)
Tinker
Hello,

What is the USB dongle here, a network adapter ? Maybe it's simply the
dongle limiting the bandwidth.

Regards
Stuart Henderson
2017-12-14 17:11:26 UTC
Post by t***@openmailbox.org
Hi!
Do you see any typical pf.conf or sysctl settings to tweak/speedup NAT/networking stack throughput?
(On USB2 dongles, sigh.
Current speed is quite OK actually, a client with good hardware would get up to 70mbps through the NAT. I was still curious to know if there are any obvious toggles in sysctl/pf.conf for up:ing NAT/networking stack throughput though. RAM is not an issue with me, I have plenty. I thought possibly some settings were set to unnecessarily low defaults, for OpenBSD to work well on machines with <1GB RAM, say.)
Tinker
Generally not. The most common things to touch are listed below, with a rough configuration sketch after the list:

- raising net.inet.ip.ifq.maxlen if net.inet.ip.ifq.drops is
increasing (trade-off against latency).

- increasing "set limit states" on busier systems if needed.

- using a wider port range than the default 50001:65535 on busier
systems if needed (in PF nat rules; avoid starving the host itself
of free ephemeral ports for locally initiated connections).
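
For illustration, here is roughly what those three knobs look like in
configuration. This is only a sketch: em0, the 192.168.0.0/24 network and
all of the numbers are placeholder values, and the exact syntax should be
checked against pf.conf(5) and sysctl.conf(5).

  # /etc/sysctl.conf -- only worth raising if net.inet.ip.ifq.drops keeps growing
  net.inet.ip.ifq.maxlen=1024

  # /etc/pf.conf (excerpt)
  set limit states 50000                # raise the state table cap on busier boxes
  match out on em0 inet from 192.168.0.0/24 to any \
      nat-to (em0) port 40000:65535     # wider than the default 50001:65535; keep it
                                        # clear of ports the host itself needs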
t***@openmailbox.org
2017-12-14 17:52:15 UTC
Hi Stuart,

Thanks a lot for your response. I guess your point is that for any few-user use case the default configuration is already fine. Some follow-up questions at the bottom, if relevant.
Post by Stuart Henderson
Post by t***@openmailbox.org
Hi!
Do you see any typical pf.conf or sysctl settings to tweak/speedup NAT/networking stack throughput?
(On USB2 dongles, sigh.
Current speed is quite OK actually, a client with good hardware would get up to 70mbps through the NAT. I was still curious to know if there are any obvious toggles in sysctl/pf.conf for up:ing NAT/networking stack throughput though. RAM is not an issue with me, I have plenty. I thought possibly some settings were set to unnecessarily low defaults, for OpenBSD to work well on machines with <1GB RAM, say.)
Tinker
- raising net.inet.ip.ifq.maxlen if net.inet.ip.ifq.drops is
increasing (trade-off against latency).
My net.inet.ip.ifq.drops is 0 so I guess this one is not relevant to me.
Post by Stuart Henderson
- increasing "set limit states" on busier systems if needed.
There aren't enough users for the system to hit the default cap of 10,000 states.
Post by Stuart Henderson
- using a wider port range than the default 50001:65535 on busier
systems if needed (in PF nat rules; avoid starving the host itself
of free ephemeral ports for locally initiated connections).
This should be fine also, again not enough users.

Indeed, it's a USB3 NIC on a USB2 port.



1. Could I put the NAT into some kind of less secure / more permissive mode that would give multimedia applications such as video calling more room to work?

2. Is there any cap on the networking stack's use of buffer space that could constrain throughput, and that would be configurable?

3. Could it be useful in a few-user scenario to raise the "frags" pf limit (and, correspondingly, the "kern.maxclusters" sysctl)?

4. What are good commands to run to monitor NAT/networking stack throughput & health?
Stuart Henderson
2017-12-14 21:27:20 UTC
Post by t***@openmailbox.org
Hi Stuart,
Thanks a lot for your response. I guess your point is that for any few-user use case the default configuration is already fine. Some follow-up questions at the bottom, if relevant.
Post by Stuart Henderson
Post by t***@openmailbox.org
Hi!
Do you see any typical pf.conf or sysctl settings to tweak/speedup NAT/networking stack throughput?
(On USB2 dongles, sigh.
Current speed is quite OK actually, a client with good hardware would get up to 70mbps through the NAT. I was still curious to know if there are any obvious toggles in sysctl/pf.conf for up:ing NAT/networking stack throughput though. RAM is not an issue with me, I have plenty. I thought possibly some settings were set to unnecessarily low defaults, for OpenBSD to work well on machines with <1GB RAM, say.)
Tinker
- raising net.inet.ip.ifq.maxlen if net.inet.ip.ifq.drops is
increasing (trade-off against latency).
My net.inet.ip.ifq.drops is 0 so I guess this one is not relevant to me.
Post by Stuart Henderson
- increasing "set limit states" on busier systems if needed.
There aren't enough users for the system to hit the default cap of 10,000 states.
Post by Stuart Henderson
- using a wider port range than the default 50001:65535 on busier
systems if needed (in PF nat rules; avoid starving the host itself
of free ephemeral ports for locally initiated connections).
This should be fine also, again not enough users.
Indeed, it's a USB3 NIC on a USB2 port.
1. Could I put the NAT into some kind of less secure / more permissive mode that would give multimedia applications such as video calling more room to work?
PF only has binat (1:1 mapping between external and internal addresses, tries
to maintain the same port numbers) and a type which is sometimes referred to
as 'symmetric nat' - see diagrams on
https://en.wikipedia.org/wiki/Network_address_translation#Methods_of_translation
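
If the 1:1 case is what a particular multimedia host needs, a binat-to rule
is roughly this shape (interface and addresses are placeholders, and it of
course needs a spare routable external address; see pf.conf(5)):

  # map one internal host to its own external address, keeping port numbers intact
  match out on em0 inet from 192.168.0.50 to any binat-to 203.0.113.10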
Post by t***@openmailbox.org
2. Is there any cap on the networking stack's use of buffer space that could constrain throughput, and that would be configurable?
TCP buffer size limits only apply at the endpoints of the connection.
Unless you're using relayd relays, that wouldn't be on the PF box.

Other than that I think it would be mostly net.inet.ip.ifq.maxlen that
would be involved.
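
For example, you can check whether that queue is actually overflowing before
touching anything; the 1024 below is only an illustrative value:

  sysctl net.inet.ip.ifq               # shows len, maxlen and the drops counter
  sysctl net.inet.ip.ifq.maxlen=1024   # only if drops keeps climbing; trades latency for fewer drops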
Post by t***@openmailbox.org
3. Could it be useful in a few-user scenario to raise the "frags" pf limit (and, correspondingly, the "kern.maxclusters" sysctl)?
No idea, it's not something I usually touch.
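
If you want to experiment with them anyway, the knobs look roughly like this;
the numbers are illustrative only, not recommendations:

  # /etc/pf.conf
  set limit frags 25000        # pf fragment reassembly entries

  # /etc/sysctl.conf
  kern.maxclusters=16384       # overall cap on mbuf clusters for the network stack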
Post by t***@openmailbox.org
4. What are good commands to run to monitor NAT/networking stack throughput & health?
These are relatively easy to interpret:

"pfctl -si"
"systat if" (bandwidth / packets-per-sec), though actually i normally
use bwm-ng from packages for this

These can help when tracking down problems, but are harder to interpret /
less immediately useful:

"systat mbuf" (stats relating to mbuf use per interface)
"vmstat -m" (pool statistics, giving an idea of kernel memory use
for things like PF state entries etc)
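
As one concrete way to put those together (the grep patterns and the
5-second refresh are just one possible slicing):

  pfctl -si | grep -A 4 'State Table'   # current state count and table activity
  pfctl -sm                             # the configured pf limits (states, frags, ...)
  systat if 5                           # live per-interface traffic, 5-second refresh
  vmstat -m | grep -i pf                # pool usage for pf-related kernel allocations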

But if you are running into this system being slow, I think the first
thing anyone is going to suggest is to find a way to avoid the USB NIC.