I once wrote a similar post to a DVD-industry-centric mailing list (remember those?) about switching from Adobe Premiere to FCP7, a big difference being that FCP7 allowed capturing discrete audio channels while Premiere forced an interleaved audio stream. Eventually, a rep from Adobe contacted me through my company's PR team (a first for me) to go over the list of complaints. At the end, he agreed these were all valid complaints, and then asked: if Premiere added these changes, would I be willing to switch back? At that point I said probably not, as we'd now fully switched to FCP7 in all departments. So I understand that sentiment as well. Honestly, I was shocked that someone actually read my missive and paid any mind to it. So maybe someone at OpenBSD will be as receptive, if not equally unable to do anything about it.
This post sounded faintly crazy to me, so I went down a little wiki-hole consisting primarily of mailing lists and dev docs.
Turns out, the main reason `pf` is non-portable is that half of it runs inside Berkeley-type network stacks, often in kernel space, but the remainder is in user space.
So the miserable single-threaded `pf` on OpenBSD is still, in part, single-threaded on FreeBSD, but for certain rule-sets you get the benefit of FreeBSD's heavily re-entrant, multithreaded TCP/IP stack, because those parts of `pf` are embedded in the network stack.
So depending on workload, a given `pf` configuration on OpenBSD might be perfectly equal to its FreeBSD counterpart, or hundreds of times slower. I feel like this gives a lot of context to the OP's grousing about "10 Gbps".
P.S. To confess my own biases: a port of a `pf` configuration to a platform where some rulesets are high-performance and others are not would not be very attractive to me. An improvement, but not a solution. I would be looking to move to a Linux stack. Baby steps, I guess. I have done worse things to better people!
P.P.S. I suspect this coupling between a re-entrant TCP/IP stack and a single-threaded firewall process is also why FreeBSD `pf` is never even close to feature parity with its OpenBSD counterpart -- it is just easier to do new stuff with a simpler model.
Root on ZFS is an easy sell for me. OpenBSD's ancient filesystem is notoriously flaky, and they have no interest in replacing it anytime soon.
I can't be worried that critical parts of my network won't come back up because the box spontaneously rebooted or the UPS battery ran out (yes it happens — do you load test your batteries — probably not) and their bubblegum-and-string filesystem has corruption and / and /usr won't mount and I gotta visit the console like Sam Jackson in Jurassic Park to fsck the damn thing.
Firewalls are critical infra — by definition they can't be the least reliable device in the network.
This is the first I've read that OpenBSD's file system is "notoriously flaky" or "bubblegum-and-string" (the opposite of the OpenBSD approach), or that it makes for "the least reliable device in the network". Its reputation is the opposite.
> visit the console like Sam Jackson in Jurassic Park
Consoles aren't so unusual for most server admins, IME. They're the most common tool.
It seems you’ve read too much general OpenBSD hype material and too little specific information about details like the filesystem. The OpenBSD filesystem notoriously lacks journalling support. It used to support soft updates, but that got removed too. There are no seatbelts. If you suddenly lose power, there is a high likelihood you lose data. OpenBSD is notorious for it.
For those that don't know, soft updates are a clever method of preventing filesystem corruption.
Journaling: write the journal, then write the filesystem. In the event of a sudden power outage, either the journal will be partially written and gets discarded, or the filesystem will be inconsistent and the journal can be replayed to fix it. The problem is that now you are duplicating all metadata writes.

Soft updates: reorder the writes in memory so that as the filesystem is written, it is always in a consistent state.
So soft updates were a clever system to reduce metadata writes -- perhaps too clever. Apparently it had to be implemented chained through every layer of the filesystem; nobody but the original author really understood it, and everyone was avoiding filesystem work for fear of accidentally breaking it. And it may not have even worked: there were definitely workloads where soft updates would hose your data. (I am not exactly sure, but I think it was a ton of small metadata rewrites into a full disk.) So when someone wanted to do work on the filesystem but did not want to deal with soft updates, OpenBSD in characteristic fashion said "sure, tear it out". It may come back -- I don't know the details, but I doubt it. It sounds like it was a maintenance problem for the team.
Journaling, conversely, is an inelegant brute-force sort of mechanism, but at least it is simple to understand and can be implemented in one layer of the filesystem.
Log message:

> Make softdep mounts a no-op
> Softdep is a significant impediment to progressing in the vfs layer
> so we plan to get it out of the way. It is too clever for us to
> continue maintaining as it is.
You don't need to talk about other commenters; because you don't know them (with few possible exceptions) you're certain to make a fool of yourself.
The components and their potential benefits aren't really consequential; performance is. Sometimes the high-spec components are technologically interesting and exciting to geeks (me too), but have little practical value, especially maximalist components like ZFS. I've never needed it, for example. Very rarely could a journaling file system provide an actual benefit, though I don't object to them.
Sometimes the value is negative because the complexity of those components adds risk. KISS.
conversely, running a firewall on something like ZFS also sounds like too much. Ideally I'd want a read-only root FS with maybe an /etc and /var managed by an overlay.
Sounds like overcomplicating in the name of simplification. ZFS is a good, reliable, general-purpose system; often the right answer is to just put everything on ZFS and get on with your life.
Problems like what? I run ZFS on 20 GB VMs and a 100 TB pool and I've never had a problem that wasn't my own fault. I love root on ZFS; you can snapshot your entire OS at a whim. The only other way to get that I know of is btrfs, which genuinely does have well-known issues.
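For anyone who hasn't tried it, a minimal sketch of what "snapshot your entire OS" looks like. The pool/dataset name `zroot/ROOT/default` is an assumption based on a typical FreeBSD root-on-ZFS layout; adjust to your own pool:

```shell
# Recursively snapshot the root dataset before a risky change
zfs snapshot -r zroot/ROOT/default@pre-upgrade

# Confirm the snapshot exists
zfs list -t snapshot

# If the change goes sideways, roll the whole OS back
zfs rollback -r zroot/ROOT/default@pre-upgrade
```

Snapshots are copy-on-write, so taking one is near-instant and costs almost nothing until the filesystem diverges from it.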
Not the OP, but I have similar experience with ZFS. Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS.
My pool is there, but it doesn't want to mount, no matter what amount of IRC/Reddit/SO/general googling I apply to try and help it boot.
After it happened for the second time, I removed ZFS from the list of technologies I want to work with (I still have to, due to Proxmox, but without being fascinated).
I've been working with systems for a long time, too. I've screwed things up.
I once somehow decided that using an a.out kernel would be a good match for a Slackware diskset that used elf binaries. (It didn't go well.)
In terms of filesystems: I've had issues with FAT, FAT32, HPFS, NTFS, EXT2, ReiserFS, EXT3, UFS, EXT4, and exFAT. Most of those filesystems are very old now, but some of these issues have trashed parts of systems beyond comprehension, and those issues are part of my background in life whether I like it or not.
I've also had issues with ZFS. I've only been using ZFS in any form for about 9 years so far, but in that time I've always been able to wrest the system back into order, even on the seemingly most-unlikely, least-resilient, garbage-tier hardware -- including after experiencing unlikely problems that I introduced myself by dicking around with stuff in unusual ways.
Can you elaborate upon the two particular unrecoverable issues you experienced?
(And yeah, Google is/was/has been poisoned for a long time as it relates to ZFS. There was a very long streak of people proffering bad mojo about ZFS under an air of presumed authority, and this hasn't been helpful to anyone. The sheer perversity of the myths that have popularly surrounded ZFS is profoundly bizarre, and does not help with finding actual solutions to real-world problems.)
> Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS.
I've been using ZFS since it debuted in Solaris 10 6/06 (also: zones and DTrace), and since then on FreeBSD and Linux, and I've never had issues with it. ¯\_(ツ)_/¯
Not to be deliberately argumentative, but still no concrete examples of ZFS failures are shown, just hand-wavy "I had issues I couldn't google my way out of". I've never heard of a healthy pool not mounting, and I've never heard of a pool being unhealthy without a hardware failure of some sort. To the contrary, ZFS has perfectly preserved my bytes for over a decade now in the face of shit failing hardware, from memory that throws errors when clocked faster than stock JEDEC speeds to brand-new hard drives that just return garbage after reporting successful writes.
When it is treated as just a filesystem, then it works about like any other modern filesystem does.
ZFS features like scrubs aren't necessary. Multiple datasets aren't necessary -- using the one created by default is fine. RAIDZ, mirrors, slog, l2arc: None of that is necessary. Snapshots, transparent compression? Nope, those aren't necessary functions for proper use, either.
There are a lot of features that a person may elect to use, but it is no worse than, say, ext4 or FFS2 when those features are ignored completely.
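To illustrate how small "just a filesystem" usage can be, here is a sketch; the pool name and device path are made-up examples, and the optional lines are exactly that, optional:

```shell
# One pool on one disk: no mirrors, no RAIDZ, no slog, no l2arc
# (device name is an example; use your actual disk)
zpool create tank /dev/ada1

# That's it. /tank is now a mounted filesystem; use it like any other.

# Extras remain strictly opt-in:
zfs set compression=lz4 tank   # transparent compression
zpool scrub tank               # integrity check, whenever you feel like it
```

Everything beyond `zpool create` is elective, which is the point being made above.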
(It can be tricky to get Linux booting properly with a ZFS root filesystem. But that difficulty is not shared at all with FreeBSD, wherein ZFS is a native built-in.)
Linux takes more skill to manage than Windows or macOS, yet we all know Linux. zfs is _the_ one true filesystem, and the last one you need to know. Besides that, to know zfs is to have a deeper understanding of what a filesystem is and does.
I will admit though, to truly get zfs you need to change how you think about filesystems.
> conversely, running a firewall on something like ZFS also sounds like too much.
this makes no sense. firewalling does not touch the filesystem very much if at all.
what FS is being used is essentially orthogonal to firewalling performances.
if anything, having a copy-on-write filesystem like ZFS on your firewall/router means you have better integrity in case of configuration mistakes and OS upgrade (just rollback the dataset to the previous snapshot!)
my point was that if a hardware vendor were to approach this problem, they'd probably have 2 (prev, next) partitions that they write firmware to, plus separate mounts for config and logs, rather than a kitchen-sink CoW FS.
> Needing (emphasis there) to fail over should be for emergencies, not standard operating procedure.
You should be testing failover regularly, just like you're testing backups and recovery and other things that should not "need" to happen but have to actually work when they do.
A good time would be during your monthly/quarterly/(bi)annual/whatever patch cycle (and if there are no patches, then you should just test failover).
OpenBSD's pre-journaling FFS is ancient and creaky, but also extremely robust.
I am not sure there is a more robust, or simple, filesystem in use today. Most networking devices, including, yes, your UPS, use something like FFS to handle writeable media.
I am not accustomed to defending OpenBSD in any context. It is, in every way, a museum, an exhibition of clever works and past curiosities for the modern visitor.
But this is a deeply weird hill to die on. The "Fast File System", fifty years old and audited to no end, is your greatest fear? It ain't fast, and it's barely a "file system", but its robustness? Indubitable. It is very reliably boring and slow. It is the cutting edge circa 2BSD.
edit: I am mistaken, the FFS actually dates to 4.1BSD. It is only 44 years old, not 50. Pardon me for my error.
As noted, recent changes to OpenBSD TCP handling[1] may improve performance.
> On a 4 core machine I see between 12% to 22% improvement with 10
> parallel TCP streams. When testing only with a single TCP stream,
> throughput increases between 38% to 100%.
I'm not sure that directly translates to better pf performance, and four cores is hardly remarkable these days but might be typical on a small low-power router?
Would be interesting if someone had a recent benchmark comparison of OpenBSD 7.8 PF vs. FreeBSD's latest.
That particular change improves throughput received locally. Though over the past few years there's been a ton of work on unlocking the network layer generally to support more parallelism.
For a firewall I guess the critical question is the degree of parallelism supported by OpenBSD's PF stack, especially as it relates to common features like connection statefulness, NAT, etc.
I was using OpenBSD for my firewalls for a long time, but with the arrival of 10Gbit/s ethernet, I realized that I had to move back to ASIC based firewalls.
Yes, you can forward 10 Gbit/s with Linux using VPP, but you cannot forward at that rate with small packets and a stateful firewall. And it requires a lot of tuning and a large machine.
A used SRX4200 from Juniper runs around $3k USD, you can even buy support for it, and it forwards at something like 40 Gbit/s IMIX.
I still prefer PF syntax over everything else though.
You can definitely build an x86 system to route 40 Gbit/s with small packets for under $3k, and that's been the case for many years. A Xeon-D can hit 100 Gbps forwarding and filtering.
OpenBSD is going through a slow fine-grained locking transformation that FreeBSD started over 20 years ago. Eventually they will figure out they need something like epoch reclamation, hazard pointers, or RCU.
I just today deployed an $800 MikroTik in my house that can route 10 Gbps at wire speed. On the CPU. With firewall and NAT rules applied. No joke. 4 million packets per second is, like, a lot, post-filtering and with any substantial packet size.
This was doable back in 2008 with about $15k of x86 gear and a Linux kernel and a little trickery with pf_ring. The minute AMD K10 and Intel Nehalem dropped, high routing performance was mostly a software problem... Which is cool as hell, compared to the era when it required elaborate dedicated hardware, but it does not make it cheap or easy. Just, commodity. Expensive commodity.
Now you can buy a device off the shelf for $800 that will do it on the CPU, to avoid the cost of Cisco or Juniper, and it has a super simple configuration interface for all the software-based features. Everything you could do in L3/L4 on a Linux platform in 2008, for like, 1/16th the price, with vastly less engineering effort. It is just like, a thing you buy, and it all kinda works outta the box.
No pf_ring trickery, no deep in-house experience, just a box you buy on a web site and it moves 10 gbps with filtering for $800
There's no real magic here: they use absolutely shockingly enormous ARM chips from Amazon/Annapurna. You can build an $800 commodity platform that rivals a $15k commodity platform in 2008, and both of them replace what used to cost $500k.
Is it as good as Cisco or Juniper? oh, certainly not. Will it route and filter traffic at much greater rates, for $800, than anything they have ever been bothered to offer? ABSOLUTELY
I'm really confused by "about $15k of x86 gear ... The minute AMD K10 and Intel Nehalem dropped, high routing performance was mostly a software problem". What kind of $15k machine would you have needed? That's a heck of a lot more than even the most expensive K10 2008 CPU (which according to Wikipedia seems to be Opteron 8384 (quad core, 2.7GHz, 1.0GHz HT, $2149 November 2008), supports up to 8 CPUs per machine, I guess that's what you mean.)
One thing I like about using OpenBSD for my home router is almost all the necessary daemons being developed and included with the OS. DHCPv4 server/client, DHCPv6 client, IPv6 RA server, NTP, and of course SSH are all impeccably documented, use consistent config file formats/command-line arg styles, and are privilege-separated with pledge.
Also it's a really well trodden path. You aren't likely to run into an OpenBSD firewall problem that hasn't been seen before.
Regarding any BSD used for any purpose, BSD has a more consistent logic to how everything works. That said, if you're used to Linux then you're going to be annoyed that everything is very slightly different. I am always glad that multiple BSD projects have survived and still have some real users, I think that's good for computing in general.
The recent addition of dhcp6leased is a great example: Built into the base system, simpler to configure than either dhcp6c or dhcpcd, and presumably also more secure than either.
Compared to working with iptables, PF is like this haiku:
A breath of fresh air,
floating on white rose petals,
eating strawberries.
Now I'm getting carried away:
Hartmeier codes now,
Henning knows not why it fails,
fails only for n00b.
Tables load my lists,
tarpit for the asshole spammer,
death to his mail store.
CARP due to Cisco,
redundant blessed packets,
licensed free for me.
pf has been ported to Debian/kFreeBSD, but afaik no effort has been made to port it to the Linux kernel. A lot of networking gear already runs a BSD kernel, so my guess is the really high-level network devs don't bother because they already know BSD so well.
I assume in this case they already had a bunch of firewall rules for PF, and switching from OpenBSD -> FreeBSD is a much easier lift than going to Linux because both BSDs use PF, although IIRC there are some differences between the two implementations.
I'm pretty die-hard Linux, but I had a client who needed to do traffic shaping on hundreds or thousands of this ISP's users. I've tried multiple times to get anything more than the most simple traffic shaping working under Linux, with pretty bad luck at it. I set them up with a FreeBSD box, and the shaping config, IIRC, was a one-liner and just worked; I never heard any complaints about it.
I've run a lot of Linux firewalls over the decades, but FreeBSD's shaping is <chef's kiss>
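For readers who haven't seen it, the "one-liner" style shaping being praised here is dummynet driven by ipfw. A hedged sketch; the rule numbers, rate, and subnet are made-up examples:

```shell
# Load dummynet if it isn't compiled into the kernel
kldload dummynet

# Define a pipe limited to 10 Mbit/s
ipfw pipe 1 config bw 10Mbit/s

# Push one subnet's traffic through the pipe (addresses are examples)
ipfw add 100 pipe 1 ip from 192.0.2.0/24 to any
```

Compared to Linux's `tc` qdisc/class/filter hierarchy, this really is close to a one-liner per shaped class, which is likely what the parent remembers.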
What features have you used for shaping with pf/FreeBSD? I remember (around 8-ish years ago) using dummynet with pf, but it wasn't supported out of the box and I used some patches from the mailing lists for the purpose. It wasn't perfect, and at times buggy. Back then ipfw had better support for such features, but I disliked its syntax just as much as iptables'. I eventually settled on Linux as I have grown to understand iptables (I hate that nftables is the brand-new thing with an entirely different syntax to learn again... and it even requires more work upfront because basic chains are not preconfigured...), but traffic shaping sucked big time on Linux; I never understood the tc tool well enough to be effective, it's just too arcane. I always admired pf, especially on OpenBSD since it had more features, but the single-threaded nature killed it for any serious usage for me.
The user interface is literally 1000x better. That's all
Linux is enormously higher performance but it is a huge pain in the ass to squeeze the performance out AND retain any level of readability
which is why there are like a dozen vendors selling various solutions that quietly compile their proprietary filter definitions to bpf for use natively in the kernel netfilter code...
Too many random changes, too fiddly to maintain, too much general flakiness. Especially for simple single-purpose devices that you want to set up once and leave alone for years, BSD is generally much nicer than Linux. I'd actually flip your question: why would you ever use Linux rather than FreeBSD?
Do you have any specific examples where a Linux-based firewall was too "random" or "fiddly" or "flaky"? Or provide examples of ways that BSD "much nicer"?
It sounds to me like you picked a bad Linux distro for your use case.
I've seen plenty of single-purpose Linux-based network appliances, and none of them have come across as flaky or unreliable because of the OS. In fact they can be easier to use for people who have more operational experience using Linux already.
> Do you have any specific examples where a Linux-based firewall was too "random" or "fiddly" or "flaky"?
They switched out ifconfig for some other thing. There have been about 3 different firewall systems that you've had to migrate between. Some of the newer systems (Docker, and I think maybe flatpak/the other one) bypass your firewall rules by default, which is a nasty surprise. A couple of times I did a system upgrade and my system wouldn't boot because drivers or boot systems or what have you had changed. That stuff doesn't happen on FreeBSD.
I'm sure to someone who lives and breathes Linux, or who works on this stuff, it's all trivial. But if it's not something you work on day-to-day, it's something you want to set and forget as an appliance, Linux adds pain.
> It sounds to me like you picked a bad Linux distro for your use case.
Were there any grounds at all in what I said for thinking that, or did you just make it up out of blind tribalism?
> In fact they can be easier to use for people who have more operational experience using Linux already.
Of course, but that's purely circular logic. Whatever OS you use for most of your systems, systems using that OS will be easier for you to use.
tcp_pass = "{ 22 25 80 110 123 }"
udp_pass = "{ 110 631 }"
block all
pass out on fxp0 proto tcp to any port $tcp_pass keep state
pass out on fxp0 proto udp to any port $udp_pass keep state
Note: the last matching rule wins, so you put your catch-all, "block all", at the top. Then in this case fxp0 is the network interface. So they're defining where traffic can go from the machine in question: to any destination address, as long as the destination port is 22, 25, 80, 110, or 123 for TCP, and either 110 or 631 for UDP.
<action> <direction> on <interface> proto <protocol> to <destination> port <port> <state instructions>
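The last-match-wins evaluation is worth seeing side by side with pf's `quick` keyword, which short-circuits it. A minimal sketch; interface name and addresses are examples:

```pf
# Last matching rule wins, so the catch-all goes first:
block all
pass out on fxp0 proto tcp to any port 22 keep state

# "quick" stops evaluation at this rule -- no later "pass" can
# override it for matching packets (address is an example):
block quick from 203.0.113.0/24
```

`pfctl -nf /etc/pf.conf` parses a ruleset without loading it, which is a handy way to experiment with these semantics safely.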
The BSDs still tend to use device-specific names versus the generic ethX or location-specific ensNN, so if you have multiple interfaces knowing about internal and external may help the next person who sees your code to grok it.
One thing unexpected I found when setting up an OpenBSD-based router recently: the web isn't riddled with low-quality and often wrong SEO and AI slop about OpenBSD like it is for Linux. I guess there just isn't enough money to be made producing it for such a niche audience.
If you search up a problem, you get real documentation, real technical blog posts, and real forum posts with actual useful conversations happening.
I've been using OpenBSD and PF for nearly 25 years (PF debuted December 2001). Over those years there have been syntax changes to pf.conf, but the most disruptive were early on, and I can't remember the last syntax change that affected my configs (mostly NAT, spamd, and connection rate limiting).
During that time the firewall tool du jour on Linux was ipchains, then iptables, and now nftables, and there have been at least some incompatible changes within the lifespan of each tool.
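To make the churn concrete, here is the same "allow inbound SSH" rule in the two most recent generations. This assumes an already-created `inet filter` table and `input` chain for the nftables case (nftables does not preconfigure chains):

```shell
# iptables (the INPUT chain exists by default)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables: same intent, entirely different syntax, and the
# table/chain must have been created first
nft add rule inet filter input tcp dport 22 accept
```

Neither form is portable to the other tool, which is the migration cost being described.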
PF is also from 2001. But its roots go further back, I once used a very PF-like syntax on a Unix firewall from 1997. I forget which type of Unix it was, maybe Solaris.
Either way, I don't think there is any defense for the strange syntax of IPtables, the chains, the tables. And that's coming from a person who transitioned fully from BSD to Linux 15 years ago, and has designed commercial solutions using IPtables and ipset.
PF is really nice. (Source: me. Cissp and a couple decades of professional experience with open source and proprietary firewalls).
And if they are already using it on openbsd, it’s almost certainly an easier lift to move from one BSD PF implementation to another versus migrating everything to Linux and iptables.
I've gotta me-too this. I've written any number of firewall rulesets on various OSes and appliances over the years, and pf is delightful. It was the first and only time I've seen a configuration file that was clearly The Way It Should Be.
I am not very familiar with FreeBSD's pf, but my understanding is that FreeBSD integrated it from OpenBSD and then put a fair amount of work into making it more performant (multi-core), while OpenBSD put most of its work into improving pf's features. At this point the two pfs are different enough that they are not really compatible: OpenBSD can't really use much of FreeBSD's multi-core work, and FreeBSD (a) is a lot more hesitant about breaking backwards compatibility and (b) would need to get the queuing structures to work with their kernel. In short, FreeBSD pf is like using an old, fast version of OpenBSD pf.
In fact if you asked me to explain the difference between obsd and fbsd it is exactly this. fbsd focuses on performance and obsd focuses on ergonomics.
The pf maintainer in FreeBSD has been doing a ton of work to bring more recent improvements over from OpenBSD, trying to bring them in sync as much as possible without breaking compatibility:
https://cgit.freebsd.org/src/log/sys/netpfil/pf
The state of affairs you described is much less the case now than in the past.
> There are some things about FreeBSD that we're not entirely enthused about.
Damn, I wish they had expanded on this a bit (not to start a flame war, but to give readers a fuller picture, or even to prod the FreeBSD community into "fixing" those things).
One issue, as they point out, is that we now do minor version updates every 6 months, and you need to update for each one. (We have a 3-4 month period where both are supported, but e.g. 15.0 will be EoL before 15.2 is released.)
We are aware that this isn't ideal for some users, but it was a necessary tradeoff. We might be able to improve this in the future (possibly as "security updates for the base system, but no ports support") but no guarantees.
The computers that moved from OpenBSD to Ubuntu were our local resolving DNS servers. These don't use PF and we also wanted to switch from our previous OpenBSD setup to Bind, where we were already running Bind on Ubuntu for our DNS master servers. The gory details were written up here: https://utcc.utoronto.ca/~cks/space/blog/sysadmin/UsingBindN...
We may at some point switch our remaining OpenBSD DHCP server to Ubuntu (instead of to FreeBSD); like our DNS resolvers, it doesn't use PF, and we already operate a couple of Ubuntu DHCP servers. In general Ubuntu is our default choice for a Unix OS because we already run a lot of Ubuntu servers. But we have lots of PF firewall rules and no interest in trying to convert them to Linux firewall rules, so anything significant involving them is a natural environment for FreeBSD.
Why do you say OpenBSD stopped "supporting bind"? You mean they don't include it in the base system anymore since the switch to unbound?
I mean... it's one pkg_add away. It's a weird constraint to give yourself if that was the problem, considering you absolutely had to install it on your replacement Ubuntu servers.
The short version is that we wound up not feeling particularly enthused about OpenBSD itself. We have a much better developed framework for handling Ubuntu machines, making it simply easier to have some more Ubuntu machines instead of OpenBSD machines, and we also felt Bind on Ubuntu was likely to be better supported than a ports Bind on OpenBSD. If everything else is equal we're going to make a machine Ubuntu instead of OpenBSD.
So you don't like OpenBSD, but you do like Ubuntu?
This person seems like they know what they are talking about and have given it serious thought, but I cannot fathom how you could reach such a conclusion today.
If they're concerned about performance, yeah. OpenBSD doesn't do the basics that you need to get the most out of your SMP hardware; there's no way to set CPU affinity, at least from userland, and it's clear that this sort of work is not a priority for OpenBSD. It's not easy work, but FreeBSD has done it. Beyond CPU affinity, you also need your network structures set up to reduce lock contention: things like fine-grained locks, hashed subtables and/or "lockless" tables, configuring the NICs as close as possible to one queue per core, and keeping flows on the same queue pinned to a single core so that the per-flow locks never contend and don't bounce between cores.
Ubuntu/Linux do have reasonable performance, but I think they prefer PF firewalls, so that makes Linux a non-option for firewalls.
Personally, I don't really care for PF, but it offers pfsync, which I do care for, so I use it and ipfw... but I need to check in, I think FreeBSD PF may have added the hooks I use ipfw for (bandwidth limits/shaping/queue discipline).
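For anyone unfamiliar with pfsync, the setup being valued here is small. A hedged sketch; the interface name `em2` (a dedicated sync link between the two firewalls) is an assumption:

```shell
# Mirror pf's state table to the peer firewall over a dedicated link,
# so established connections survive a failover
ifconfig pfsync0 syncdev em2 up
```

And in pf.conf on both boxes, the sync traffic itself has to be allowed, roughly:

```pf
pass on em2 proto pfsync
```

Paired with CARP for a shared virtual IP, this is what lets the standby firewall take over without dropping existing connections.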
It's not necessarily that OpenBSD can't implement the basics, it's that they don't want to. A lot of the high-performance features introduce potential security vulnerabilities. Their main focus is security and correctness. Not speed.
The gp already answered you, "this sort of work is not a priority for OpenBSD."
OpenBSD is a small, niche operating system, and it really only gets support for something if it solves a problem for someone who writes OpenBSD code. In a way, this is nice, because you never get half-assed features that kinda-sorta work sometimes, maybe. Everything either works exactly as you'd expect, or it's just not there.
I love OpenBSD, but there are some tasks it's just not suited for, and that's fine, too.
I was pretty sure I had seen a mailing list post from Theo about it, but I can't find it now. The only relevant thread I can find is this one [1], which pretty much just says "we don't do it for userland", but does say it is available inside the kernel. I have also seen mentions in recent OpenBSD release notes of binding PF things by Toeplitz hash, which indicates the right progression -- but it's still hard to get max performance from a simple network daemon without binding the userland threads to the same core where the kernel processes the flow. Once your daemon starts doing substantial work, binding CPUs isn't as important, but if it's something like an authoritative DNS server or HAProxy with plain sockets, the performance benefit from eliminating cross-core communication can be tremendous.
It appears they have different requirements for those machines. They state the Ubuntu machines are for non-firewall applications. Ubuntu and Debian can be configured relatively easily for a number of workstation and server roles.
Also many IT professionals that have used Linux will be familiar with a Debian or a Debian derivative such as Ubuntu. That simply isn't the case with OpenBSD.
I recently installed OpenBSD on my old laptop to try it out and I found it difficult even though I used to use it at University back in the late 2000s.
Amusingly, I started using OpenBSD in 2000 because after repeatedly trying to get Debian running on my PowerPC G4 and failing (for months), I discovered that OpenBSD had a PowerPC port that immediately worked. Honestly the hardest part about OpenBSD is the installer, which has a few small improvements over the one back in 2000, but is essentially the same. I'm sure that kids these days will turn to ChatGPT for help, but I learned most of what I knew about hacking on a UNIX machine from OpenBSD's amazingly good man pages; they are still great.
I went through the process again just this weekend, because the disk in my firewall died. It's obvious that they continue to put a lot of effort into the OS. It's too bad that I can't use it as my daily driver, because I gladly would.
Their ports to older non-x86 stuff do work well, but I can't justify using it as a desktop OS. Too many compromises you have to make without a lot of benefit, IME.
What I find with the BSDs is that it is difficult to look up how to do something quickly via a web search. Yes, there is a man page that will tell you how to use whatever, but knowing where you are supposed to look to solve "why doesn't two-finger scroll work" isn't immediately obvious.
I was mucking around with FreeBSD on my old laptop and it works well, and it isn't too bad to get stuff going if you are following the handbook, but there is still that "how do I get <thing> working" problem. I think the OS is good underneath, but I kinda want two-finger scrolling to just work when I install Cinnamon and X.
Debian is at the stage now of install, you have desktop and most stuff just works at least on a x86-64 system. If I want to install anything, it is download deb / flatpak and I am done.
> BSD documentation is great because the systems change so little you don't find twenty out-of-date references on how to configure your DHCP client.
While there are out-of-date tutorials in Linux land, at least I can find out how I might do something and then figure things out from there. I do know how to use the man page system; simply knowing what to look for is the biggest challenge.
e.g. I was trying to configure two-finger scrolling. The FreeBSD wiki itself appeared out of date. It looks like you use the libinput X driver package (whose name I forget now) and do some config in X. It would be nice if this were covered in the handbook, as I think a lot of people would like two-finger scrolling working on their laptops.
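For what it's worth, the recipe I've seen floated (an assumption on my part, not something from the handbook) is the xf86-input-libinput package plus a small InputClass snippet; the file path below is an example:

```conf
# /usr/local/etc/X11/xorg.conf.d/40-touchpad.conf  (example path)
Section "InputClass"
        Identifier "touchpad"
        MatchIsTouchpad "on"
        Driver "libinput"
        Option "Tapping" "on"
        Option "ScrollMethod" "twofinger"
EndSection
```

No guarantees this covers every touchpad, but "ScrollMethod" "twofinger" is the libinput driver option that turns the behavior on.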
> But as a desktop OS, yes they lack in a lot of areas, mainly hardware support/laptop support.
Actually FreeBSD does quite well hardware-wise, at least on some of the hardware I have. My laptops are all boring corporate business refurbs that I know work well with Linux/BSDs.
The problem is that often I require using software which does not work on FreeBSD/OpenBSD or is difficult to configure.
The other issue is that there are things in pkgs (at least with FreeBSD) that appear to have been broken for quite a while, so configuring a VM with a desktop resolution above something relatively low isn't possible, at least with QEMU.
It's actually pretty shocking how sluggishly OpenBSD performs, and it's not meaningfully more secure than a properly-configured Linux or FreeBSD box.
I'm honestly not sure what its use case is in 2025, beyond as a research OS.
> Why are OpenBSD people always so rude and defensive? Sheesh
Because there is a limited number of maintainers and a clearly stated goal/direction. There are also a lot of people requesting features who don't actually contribute to the goal or don't even use OpenBSD. It is a way to manage resources.
There is also the sentiment "if you need it you implement and maintain it" hence if someone is requesting without any investment it doesn't seem like they are serious.
There seems to be one group of people that takes offence at people being hyperbolic (which this is) and another group that doesn't. I personally find it baffling why anyone would be bothered by that comment.
For me, the only drawback for corporations is the 6-month upgrade cycle. There is no LTS for OpenBSD.
I use OpenBSD as a workstation and it works great, but in a production environment I doubt I would use OpenBSD for critical items, mainly because no LTS.
It is a sad state of affairs, because companies do not want, nor will they want, a system you need to upgrade so often, even if its security is very good.
On the other hand, updates on OpenBSD are the most painless updates I have ever done. I am more concerned about its use of UFS instead of something more robust for drives.
> updates on OpenBSD are the most painless updates I have ever done
I see we have a post-syspatch (6.1 - 2017), post-sysupgrade (6.6 - 2019) OpenBSD user in our midst. ;D
You are positively a newbie in the OpenBSD world !
Some of us are old enough to remember when OpenBSD updates were a complete pain in the ass involving downloading shit to /usr/src and compiling it yourself!
According to Wikipedia, Debian has had apt since 1998.
My point is OpenBSD didn't have binary updates until well into the 2000s, as mentioned above. Initially in 2017 with syspatch, and then finally full coverage in 2019 when sysupgrade came along.
As you can see in some old OpenBSD mailing list posts[1], there was a high degree of resistance to the very idea of binary updates. People were even called trolls when they brought up the subject[2], or were told they "don't understand the philosophy of the system"[3].
I just felt it was an important point of clarification on your original post. Yes, I agree, OpenBSD updates are painless ... now, today. But until very recent history they were far from painless.
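For anyone who hasn't seen the modern flow, the whole binary-update story today amounts to two commands run as root (a sketch; a live OpenBSD box is obviously required):

```shell
# apply binary security patches to the currently installed release
syspatch

# upgrade to the next release (add -s to follow snapshots instead);
# the machine reboots into an automated upgrade
sysupgrade
```

Packages still need a `pkg_add -u` afterwards, but compared to the /usr/src days it barely registers as work.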
I'm grossly generalizing here, but OpenBSD boxes seem to be commonly used for the sorts of things that don't write a lot of data to local drives, except maybe logfiles. You can obviously use it for fileservers and such, but I don't recall ever seeing that in the wild. So in that situation, UFS is fine.
(IMO it's fine for heavier-write cases, too. It's just especially alright for the common deployment case where it's practically read-only anyway.)
I've used it as a mail server, a web server, and a database (postgres) server. It's also my main desktop OS. Did/does fine, but I never really stressed it. I would certainly welcome a more capable filesystem option, as well as something like logical volumes, but I can't say that ufs has ever failed me.
You'll definitely want to have it on a UPS to avoid some potentially long and sometimes manual intervention on fscks after a power failure. And of course, backups for anything important.
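For the record, that manual intervention usually looks something like this from the console (the device name is an example):

```shell
# after an unclean shutdown, if the automatic preen pass (fsck -p)
# run at boot gives up and drops you to single-user mode:
fsck -y /dev/sd0a    # -y answers "yes" to every proposed repair
```

On a large, full filesystem this can take a long while, which is exactly why the UPS matters.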
Yet companies insist on enabling unattended upgrades at least for "security" patches, which have introduced breakage or even their own vulnerabilities in the past (Crowdstrike was a recent dramatic example).
OpenBSD will just tell you that maintaining an LTS release is not one of their goals and if that's what you need you'll be better served by running another OS.
I think it depends on your needs. Working corporate environments with 1000+ hosts, LTS operating systems are big help. On the other hand, for smaller cases, call it a work group or smaller, I think OpenBSD provides a base system that doesn't typically make drastic changes, along with a ports collection that does a pretty good job of keeping up with the third party applications. It's a good balance. I've recently seen some "Immutable" Linux distributions that are basically spins of upstream distributions. They leave the inherited distribution mostly alone and load the extras using Flatpak or the like. Sounds similar to BSD ports in a way.
It's ironic because I chose Linux over FreeBSD due to 10G performance. Be it a TrueNAS box with dual Xeons and Marvell 10G card or a ThinkCentre Tiny with an Intel NIC running opnSense I could never get anywhere near full 10G throughput. Switch to Linux (TrueNAS SCALE/openWrt) and it just worked at full speed.
Although the article also uses weasel words like "sufficiently good" performance so it sounds like their BSD 10G performance isn't that good either.
I can't remember the 10G firewall figures we got in testing off the top of my head, but we didn't max out the 10G network; I think we were getting somewhere in the 8G range. This is significantly better than our OpenBSD performance but not quite up to the level of full network speed or the full speed that two Linux machines can get talking directly to each other over our 10G network. I also suspect that performance depends on which specific NICs you have due to driver issues. The live performance of our deployed FreeBSD firewalls is harder to assess because people here don't push the network that hard very often (although every so often someone downloads a big thing from the right Internet source to get really good rates).
I imagine a near future where TCP/IP stacks and device drivers are interchangeable between operating systems. In Linux, NDISWrapper [1] made it possible to use Windows drivers, but it's a wrapper (with all due respect to the project).
Sorta, but only with ancient Windows XP drivers. It was a useful stopgap of its era, but Linux networking drivers have more than caught up in the meantime.
That sent me looking it up. It seems that NetBSD, as the only one, has a rump kernel, but it also looks like work on it stagnated around 10 years ago. That could be because the guy doing a thesis on them moved on. There is quite some bitrot when following links. Does anyone know what happened? Were they a failure? Were they surpassed by other OS architectures?
Just more navel-gazing from UTCC. I still don't understand why all of these submissions get upvoted so often. 10G performance just really isn't that interesting anymore; maybe around 2005, when it was the new kid on the block. If they were talking about squeezing firewall performance out of a box with a couple of 200G or 400G adapters on run-of-the-mill CPUs with no offloading, or something like what Netflix publishes with their BSD work, I'd be more interested.
Despite 10G being far from new it seems like somehow they're still not even achieving that with FreeBSD. Would be more interesting to see an upgrade to Linux.
I once wrote a similar post to a DVD-industry-centric mailing list (remember those?) regarding switching to FCP7 from Adobe Premiere, with a huge difference in how FCP7 would allow capturing of discrete audio channels vs Premiere forcing an interleaved audio stream. Eventually, a rep from Adobe contacted me through my company's PR team (a first for me) to go over the list of complaints. At the end, he agreed these were all valid complaints, and then asked whether, if Premiere added these changes, I would be willing to switch back. At that point, I said probably not, as we'd now fully switched to FCP7 in all departments. So I understand that sentiment as well. Honestly, I was shocked that someone actually read my missive and paid any mind to it. So maybe someone at OpenBSD will be as receptive, if equally unable to do anything about it.
The OpenBSD community does not operate like that. It's expected that you fix it yourself.
This post sounded faintly crazy to me, so I went into a little wiki-hole consisting primarily of mailing lists and dev docs.
Turns out, the main reason `pf` is non-portable is that half of it runs inside Berkeley-type network stacks, often in kernel space, but the remainder is in user space.
So the miserable single-threaded `pf` on OpenBSD is still, in some part, single-threaded on FreeBSD, but for certain rule-sets, you will get the benefits of FreeBSD's intensively re-entrant and multithreaded TCP/IP, because those parts of `pf` are embedded in the network stack.
So depending on workload, a given `pf` configuration on OpenBSD might be perfectly equal to its FreeBSD counterpart, or hundreds of times slower. I feel like this gives a lot of context to the OP's grousing about "10 gbps".
P.S. To confess my own biases: a port of a `pf` configuration to a platform where some rulesets are high performance and others are not, that would not be very attractive to me. An improvement, but not a solution. I would be looking to move to a Linux stack. Baby steps, I guess. I have done worse things to better people!
P.P.S. I suspect this coupling between a re-entrant TCP/IP stack and a single-threaded firewall process is also why FreeBSD `pf` is never even close to feature parity with its OpenBSD counterpart -- it is just easier to do new stuff with a simpler model.
Root on ZFS is an easy sell for me. OpenBSD's ancient filesystem is notoriously flaky, and they have no interest in replacing it anytime soon.
I can't be worried that critical parts of my network won't come back up because the box spontaneously rebooted or the UPS battery ran out (yes it happens — do you load test your batteries — probably not) and their bubblegum-and-string filesystem has corruption and / and /usr won't mount and I gotta visit the console like Sam Jackson in Jurassic Park to fsck the damn thing.
Firewalls are critical infra — by definition they can't be the least reliable device in the network.
This is the first I've read that OpenBSD's file system is "notoriously flaky", "bubblegum-and-string" (the opposite of the OpenBSD approach), or makes for "the least reliable device in the network". Its reputation is the opposite.
> visit the console like Sam Jackson in Jurassic Park
Consoles aren't so unusual for most server admins, IME. They're the most common tool.
It seems you’ve read too much general OpenBSD hype material and too little specific information about details like the filesystem. The OpenBSD filesystem notoriously lacks journalling support. It used to support soft updates, but that got removed too. There are no seatbelts. If you suddenly lose power, there is a high likelihood you lose data. OpenBSD is notorious for it.
For those that don't know, soft updates are a clever method to prevent filesystem corruption.
Journaling: write the journal, then write the filesystem. In the event of a sudden power outage, either the journal is partially written and gets discarded, or the filesystem is inconsistent and the journal can be replayed to fix it. The problem is that you are now duplicating all metadata writes.
Soft updates: reorder the writes in memory so that as the filesystem is written it is always in a consistent state.
So soft updates were a clever system to reduce metadata writes, perhaps too clever. Apparently they had to be implemented chained through every layer of the filesystem; nobody but the original author really understood them, and everyone was avoiding filesystem work for fear of accidentally breaking them. And they may not have even worked; there were definitely workloads where soft updates would hose your data. (I am not exactly sure, but I think it was a ton of small metadata rewrites into a full disk.) So when someone wanted to work on the filesystem but did not want to deal with soft updates, OpenBSD in characteristic fashion said "sure, tear it out". It may come back, I don't know the details, but I doubt it. It sounds like it was a maintenance problem for the team.
Journaling, conversely, is a sort of inelegant brute-force mechanism, but at least it is simple to understand and can be implemented in one layer of the filesystem.
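The write-ahead idea described above can be sketched in a few lines of Python (a toy model of the ordering guarantee, nothing like a real filesystem):

```python
# Toy model of metadata journaling: the intent record reaches the
# "journal" before the in-place write; recovery replays committed
# records, and a torn record would simply be discarded.
journal = []      # stand-in for the on-disk journal
metadata = {}     # stand-in for on-disk filesystem metadata

def journaled_write(key, value):
    journal.append({"key": key, "value": value, "committed": True})
    metadata[key] = value          # the real write happens second

def recover():
    for entry in journal:          # replay after a crash
        if entry["committed"]:
            metadata[entry["key"]] = entry["value"]

journaled_write("inode:42", "dir entry for /etc")
metadata.clear()                   # simulate losing the in-place write
recover()
print(metadata["inode:42"])        # the journal restores it
```

The cost the parent mentions is visible even here: every metadata update is written twice, once to the journal and once in place.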
Are you saying softupdates have been removed? When, and what was the name of the file system that used them?
OpenBSD's FFS
https://undeadly.org/cgi?action=article;sid=20230706044554

Thanks. So in 2023. I was getting the impression that it was a prior FS a long time ago.
You don't need to talk about other commenters; because you don't know them (with few possible exceptions) you're certain to make a fool of yourself.
The components and their potential benefits aren't really consequential; performance is. Sometimes, the hi-spec components are technologically interesting and exciting to geeks (me too), but have little practical value, especially maximalist components like ZFS. I've never needed it, for example. Very rarely could a journaling file system provide an actual benefit, though I don't object to them.
Sometimes the value is negative because the complexity of those components adds risk. KISS.
> If you suddenly lose power, there is a high likelihood you lose data. OpenBSD is notorious for it.
This can happen on any filesystem unless you have a very good power supply that can buffer the flushing of buffers to disk, and/or battery-backed caches.
Filesystem is one of the rare areas (albeit crucial) where OpenBSD is flaky, tbf.
OpenBSD user here. I wish OpenBSD borrowed WAPBL from NetBSD.
conversely, running a firewall on something like ZFS also sounds like too much. Ideally I'd want a read-only root FS with maybe an /etc and /var managed by an overlay.
> Ideally I'd want a read-only root FS with maybe an /etc and /var managed by an overlay.
OpenZFS 2.2 added support for overlays, so you can have the main pool(s) mounted as read-only:
* https://github.com/openzfs/zfs/releases/tag/zfs-2.2.0
Sounds like overcomplicating in the name of simplification. ZFS is a good, reliable, general-purpose system; often the right answer is to just put everything on ZFS and get on with your life.
I’ve had more problems with zfs than all other filesystems combined including FAT. It’s IMO overkill for a root partition.
Problems like? I run zfs on 20gb VMs and a 100tb pool and I’ve never had a problem that wasn’t my own fault. I love root on zfs, you can snapshot your entire OS at a whim. The only other way to get that I know of is btrfs which genuinely does have well known issues.
Not the OP, but I have a similar experience with ZFS. Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS. My pool is there, but it doesn't want to mount no matter what amount of IRC/reddit/SO/general googling I apply to try and help it boot. After it happened the second time, I removed ZFS from the list of technologies I want to work with (I still have to, due to Proxmox, but without being fascinated).
As another anecdote:
I've been working with systems for a long time, too. I've screwed things up.
I once somehow decided that using an a.out kernel would be a good match for a Slackware diskset that used elf binaries. (It didn't go well.)
In terms of filesystems: I've had issues with FAT, FAT32, HPFS, NTFS, EXT2, ReiserFS, EXT3, UFS, EXT4, and exFAT. Most of those filesystems are very old now, but some of these issues have trashed parts of systems beyond comprehension, and those issues are part of my background in life whether I like it or not.
I've also had issues with ZFS. I've only been using ZFS in any form at all for about 9 years so far, but in that time I've always been able to wrest the system back into order, even on the seemingly most-unlikely, least-resilient, garbage-tier hardware -- including after experiencing unlikely problems that I introduced myself by dicking around with stuff in unusual ways.
Can you elaborate upon the two particular unrecoverable issues you experienced?
(And yeah, Google is/was/has been poisoned for a long time as it relates to ZFS. There was a very long streak of people proffering bad mojo about ZFS under an air of presumed authority, and this hasn't been helpful to anyone. The sheer perversity of the popular myths that have surrounded ZFS is profoundly bizarre, and does not help with finding actual solutions to real-world problems.
The timeline is corrupt.)
>The sheer perversity of the popular myths that have popularly surrounded ZFS are profoundly bizarre
Cyberjock sends his regards, I'm sure.
> Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS.
I've been using ZFS since it initially debuted in Solaris 10 6/06 (also: zones and DTrace), before then using it on FreeBSD and Linux, and I've never had issues with it. ¯\_(ツ)_/¯
Not to be deliberately argumentative but still no concrete examples of zfs failures are shown, just hand wavey "I had issues I couldn't google my way out of". I've never heard of a healthy pool not mounting and I've never heard of a pool being unhealthy without a hardware failure of some sort. To the contrary, zfs has perfectly preserved my bytes for over a decade now in the face of shit failing hardware, from memory that throws errors when clocked faster than stock JEDEC speeds to brand new hard drives that just return garbage after reporting successful writes.
> I’ve never had a problem that wasn’t my own fault.
I'm including that. zfs takes more skill to manage properly.
From my understanding of ZFS:
When it is treated as just a filesystem, then it works about like any other modern filesystem does.
ZFS features like scrubs aren't necessary. Multiple datasets aren't necessary -- using the one created by default is fine. RAIDZ, mirrors, slog, l2arc: None of that is necessary. Snapshots, transparent compression? Nope, those aren't necessary functions for proper use, either.
There's a lot of features that a person may elect to use, but it is no worse than, say, ext4 or FFS2 is when those features are ignored completely.
(It can be tricky to get Linux booting properly with a ZFS root filesystem. But that difficulty is not shared at all with FreeBSD, wherein ZFS is a native built-in.)
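Concretely, "treating it as just a filesystem" can be as little as this (the disk name is an example):

```shell
# one pool, one default dataset, mounted at /tank -- nothing else needed
zpool create tank /dev/ada1

# use it like any other mount point
cp /etc/rc.conf /tank/
```

Scrubs, extra datasets, RAIDZ, snapshots and the rest are all opt-in from there.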
Linux takes more skill to manage than Windows or macOS, yet we all know Linux. zfs is _the_ one true filesystem, and the last one you need to know. Besides that, to know zfs is to have a deeper understanding of what a filesystem is and does.
I will admit though, to truly get zfs you need to change how you think about filesystems.
Interesting, can you share specifics?
> conversely, running a firewall on something like ZFS also sounds like too much.
this makes no sense. firewalling does not touch the filesystem very much if at all.
what FS is being used is essentially orthogonal to firewalling performances.
if anything, having a copy-on-write filesystem like ZFS on your firewall/router means you have better integrity in case of configuration mistakes and OS upgrade (just rollback the dataset to the previous snapshot!)
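The rollback workflow mentioned above, sketched with FreeBSD's conventional boot-environment dataset name (zroot/ROOT/default is an assumption about the layout):

```shell
# take a checkpoint before an upgrade or config change
zfs snapshot zroot/ROOT/default@pre-upgrade

# if the box misbehaves afterwards, discard everything since then
# (-r also destroys any snapshots taken after the checkpoint)
zfs rollback -r zroot/ROOT/default@pre-upgrade
```

On FreeBSD, `bectl` wraps the same idea into managed boot environments, which is arguably the nicer interface for OS upgrades.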
my point was that if a hardware vendor were to approach this problem, they'd probably have 2 (prev,next) partitions that they write firmware to, plus separate mounts for config and logs, rather than a kitchen-sink CoW FS
What aspect of ZFS prevents the kind of layout that you envision, do you suppose?
ZFS works just fine with partitions, if that's how a person/company/org wants to use it today.
> Firewalls are critical infra — by definition they can't be the least reliable device in the network.
This is why you have failover for firewalls. The loss of any single device isn't that important.
Sure but you still want them as stable as possible. Needing (emphasis there) to fail over should be for emergencies, not standard operating procedure.
> Needing (emphasis there) to fail over should be for emergencies, not standard operating procedure.
You should be testing failover regularly, just like you're testing backups and recovery, and other things that should not "need" to happen but have to actually work when they do.
A good time would be during your monthly/quarterly/(bi)annual/whatever patch cycle (and if there are no patches, then you should just test failover).
That’s why I emphasized “needing”.
Generally it would be part of SOP for updates requiring a service or system restart.
That said, I can't find fault with the filesystem; I haven't personally encountered an issue with it, other than it being slow.
OpenBSD's pre-journaling FFS is ancient and creaky but also extremely robust
I am not sure there is a more robust, or simpler, filesystem in use today. Most networking devices, including, yes, your UPS, use something like FFS to handle writeable media.
I am not accustomed to defending OpenBSD in any context. It is, in every way, a museum, an exhibition of clever works and past curiosities for the modern visitor.
But this is a deeply weird hill to die on. The "Fast File System," fifty years old and audited to no end, is your greatest fear? It ain't fast, and it's barely a "file system," but its robustness? indubitable. It is very reliably boring and slow. It is the cutting edge circa 2BSD
edit: I am mistaken, the FFS actually dates to 4.1BSD. It is only 44 years old, not 50. Pardon me for my error.
As noted, recent changes to OpenBSD TCP handling[1] may improve performance.
On a 4-core machine I see between 12% and 22% improvement with 10 parallel TCP streams. When testing with only a single TCP stream, throughput increases between 38% and 100%.
I'm not sure that directly translates to better pf performance, and four cores is hardly remarkable these days but might be typical on a small low-power router?
Would be interesting if someone had a recent benchmark comparison of OpenBSD 7.8 PF vs. FreeBSD's latest.
[1] https://undeadly.org/cgi?action=article;sid=20250508122430
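For anyone wanting to reproduce that kind of before/after comparison, iperf3 through the firewall is the usual tool (the address is an example):

```shell
# on a host behind the firewall
iperf3 -s

# on a host in front of it: single stream, then 10 parallel streams
iperf3 -c 192.0.2.10 -t 30
iperf3 -c 192.0.2.10 -t 30 -P 10
```

Comparing the single-stream and parallel numbers across OS versions is roughly what the quoted benchmark did.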
That particular change improves throughput received locally. Though over the past few years there's been a ton of work on unlocking the network layer generally to support more parallelism.
For a firewall I guess the critical question is the degree of parallelism supported by OpenBSD's PF stack, especially as it relates to common features like connection statefulness, NAT, etc.
Thanks. Yes after I posted that I started wondering if it was really relevant to pf.
Can confirm. Lots of performance improvements lately in OpenBSD. Our Load Balancers basically doubled throughput after updating from 7.6 to 7.7
I was using OpenBSD for my firewalls for a long time, but with the arrival of 10Gbit/s ethernet, I realized that I had to move back to ASIC based firewalls.
Yes, you can forward 10Gbit/s with Linux using VPP, but you cannot forward at that rate with small packets and a stateful firewall. And it requires a lot of tuning and a large machine.
A used SRX4200 from Juniper costs around 3k USD, you can even buy support for it, and you can forward at something like 40Gb/s IMIX with it.
I still prefer PF syntax over everything else though.
You can definitely build an x86 system to route 40Gb/s with small packets for under $3k and it's been the case for many years. A Xeon-D can hit 100gbps forwarding and filtering.
OpenBSD is going through a slow fine-grained locking transformation that FreeBSD started over 20 years ago. Eventually they will figure out they need something like epoch reclamation, hazard pointers, or RCU.
That's also what I thought, but on my test system under FreeBSD with a Mellanox 25Gbit/s NIC and a Ryzen 9, I peak at around 2Mpps.
Enable hyperthreading.
I just today deployed an $800 MikroTik in my house that can route 10 gbps at wire speed. On the CPU. With firewall and NAT rules applied. No joke. 4 million packets per second is, like, a lot, post-filtering and with any substantial packet size.
This was doable back in 2008 with about $15k of x86 gear and a Linux kernel and a little trickery with pf_ring. The minute AMD K10 and Intel Nehalem dropped, high routing performance was mostly a software problem... Which is cool as hell, compared to the era when it required elaborate dedicated hardware, but it does not make it cheap or easy. Just, commodity. Expensive commodity.
Now you can buy a device off the shelf for $800 that will do it on the CPU, to avoid the cost of Cisco or Juniper, and it has a super simple configuration interface for all the software-based features. Everything you could do in L3/L4 on a Linux platform in 2008, for like, 1/16th the price, with vastly less engineering effort. It is just like, a thing you buy, and it all kinda works outta the box.
No pf_ring trickery, no deep in-house experience, just a box you buy on a web site and it moves 10 gbps with filtering for $800
There's no real magic here: they use absolutely shockingly enormous ARM chips from Amazon/Annapurna. You can build an $800 commodity platform that rivals a $15k commodity platform in 2008, and both of them replace what used to cost $500k.
Is it as good as Cisco or Juniper? oh, certainly not. Will it route and filter traffic at much greater rates, for $800, than anything they have ever been bothered to offer? ABSOLUTELY
I'm really confused by "about $15k of x86 gear ... The minute AMD K10 and Intel Nehalem dropped, high routing performance was mostly a software problem". What kind of $15k machine would you have needed? That's a heck of a lot more than even the most expensive K10 2008 CPU (which according to Wikipedia seems to be Opteron 8384 (quad core, 2.7GHz, 1.0GHz HT, $2149 November 2008), supports up to 8 CPUs per machine, I guess that's what you mean.)
The first x86 project I saw doing line-speed route+filter on 10gbps used 4x top-end Nehalem chips, an output of the RouteBricks project
Although, their original paper says they used a 2-socket prototype and got some very impressive numbers: https://www.sigops.org/s/conferences/sosp/2009/papers/dobres...
So maybe you could skate by with a slightly cheaper machine ;)
What's wrong with Linux for firewalls? Either openwrt, or any distro really.
Why would any BSD perform better?
(edit: genuinely curious why BSDs are such popular firewalls)
One thing I like about using OpenBSD for my home router is almost all the necessary daemons being developed and included with the OS. DHCPv4 server/client, DHCPv6 client, IPv6 RA server, NTP, and of course SSH are all impeccably documented, use consistent config file formats/command-line arg styles, and are privilege-separated with pledge.
Also it's a really well trodden path. You aren't likely to run into an OpenBSD firewall problem that hasn't been seen before.
Regarding any BSD used for any purpose, BSD has a more consistent logic to how everything works. That said, if you're used to Linux then you're going to be annoyed that everything is very slightly different. I am always glad that multiple BSD projects have survived and still have some real users, I think that's good for computing in general.
The recent addition of dhcp6leased is a great example: Built into the base system, simpler to configure than either dhcp6c or dhcpcd, and presumably also more secure than either.
Nftables has improved the situation on Linux somewhat, but PF is incredibly intuitive and powerful. A league of its own when it comes to firewalling.
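For readers who haven't seen it, a minimal OpenBSD pf.conf gives a feel for why people say this (interface names are examples):

```conf
# default-deny firewall with NAT for a small internal network
ext_if = "em0"
int_if = "em1"

set skip on lo
match out on $ext_if inet from $int_if:network nat-to ($ext_if)

block all
pass out on $ext_if                         # outbound, stateful by default
pass in on $int_if from $int_if:network     # LAN can reach the firewall/out
pass in on $ext_if proto tcp to ($ext_if) port 22
```

Rules read almost as English, state tracking is the default, and NAT is a one-line `match` rule rather than a separate table.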
Nftables is alright IME
Has there ever been an effort to port PF over to linux, or to create an adaption layer that makes things compatible?
pf has been ported to Debian/kFreeBSD, but afaik no effort has been made to port it to the Linux kernel. A lot of networking gear already runs a BSD kernel, so my guess is the really high-level network devs don't bother because they already know BSD so well.
Uhh... no idea but yea. Its that much better that it deserves a poem.
I assume in this case they already had a bunch of firewall rules for PF, and switching from OpenBSD -> FreeBSD is a much easier lift than going to Linux because both BSDs use PF, although IIRC there are some differences between the two implementations.
I'm pretty die-hard Linux, but I had a client who needed to do traffic shaping on hundreds or thousands of this ISP's users. I've tried multiple times to get anything more than the most simple traffic shaping working under Linux, with pretty bad luck. I set them up with a FreeBSD box, and the shaping config, IIRC, was a one-liner and just worked; I never heard any complaints about it.
I've run a lot of Linux firewalls over the decades, but FreeBSD's shaping is <chefs kiss>
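I don't know which mechanism was actually used here, but FreeBSD's ipfw/dummynet is the classic way to get that kind of one-liner shaping (the subnet and rate are examples):

```shell
# cap one subnet to 50 Mbit/s in each direction with a dummynet pipe
ipfw pipe 1 config bw 50Mbit/s
ipfw add 100 pipe 1 ip from 192.0.2.0/24 to any
ipfw add 110 pipe 1 ip from any to 192.0.2.0/24
```

Compared to Linux's `tc` qdisc hierarchy, the pipe model is far easier to read back six months later.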
I did traffic shaping per user for a few hundred users on 1GHz Pentium III on Linux. It can be done just fine.
What features have you used for shaping with pf/FreeBSD? I remember (around 8 years ago) using dummynet with pf, but it wasn't supported out of the box, and I used some patches from the mailing lists for this purpose. It wasn't perfect, at times buggy. Back then ipfw had better support for such features, but I didn't like its syntax, much as with iptables. I eventually settled on Linux as I grew to understand iptables (I hate that nftables is the brand-new thing with entirely different syntax to learn again, and it even requires more work upfront because basic chains are not preconfigured). But traffic shaping sucked big time on Linux; I never understood the tc tool well enough to be effective, it's just too arcane. I always admired pf, especially on OpenBSD since it had more features, but the single-threaded nature killed it for any serious usage for me.
The user interface is literally 1000x better. That's all
Linux is enormously higher performance, but it is a huge pain in the ass to squeeze the performance out AND retain any level of readability,
which is why there are like a dozen vendors selling various solutions that quietly compile their proprietary filter definitions to bpf for use natively in the kernel netfilter code...
Too many random changes, too fiddly to maintain, too much general flakiness. Especially for simple single-purpose devices that you want to set up once and leave alone for years, BSD is generally much nicer than Linux. I'd actually flip your question: why would you ever use Linux rather than FreeBSD?
Do you have any specific examples where a Linux-based firewall was too "random" or "fiddly" or "flaky"? Or provide examples of ways that BSD "much nicer"?
It sounds to me like you picked a bad Linux distro for your use case.
I've seen plenty of single-purpose Linux-based network appliances, and none of them have come across as flaky or unreliable because of the OS. In fact they can be easier to use for people who have more operational experience using Linux already.
> Do you have any specific examples where a Linux-based firewall was too "random" or "fiddly" or "flaky"?
They switched out ifconfig for some other thing. There have been about three different firewall systems that you've had to migrate between. Some of the newer systems (Docker, and I think maybe Flatpak/the other one) bypass your firewall rules by default, which is a nasty surprise. A couple of times I did a system upgrade and my system wouldn't boot because drivers or boot systems or what have you had changed. That stuff doesn't happen on FreeBSD.
I'm sure to someone who lives and breathes Linux, or who works on this stuff, it's all trivial. But if it's not something you work on day-to-day, it's something you want to set and forget as an appliance, Linux adds pain.
> It sounds to me like you picked a bad Linux distro for your use case.
Were there any grounds at all in what I said for thinking that, or did you just make it up out of blind tribalism?
> In fact they can be easier to use for people who have more operational experience using Linux already.
Of course, but that's purely circular logic. Whatever OS you use for most of your systems, systems using that OS will be easier for you to use.
Not the OP, but I've been with Linux since 2.2 times. You had ipchains, iptables, nftables and now god knows what. *BSD had pf ever since.
What's wrong with using any BSD? Can't people use whatever suits their needs?
Of course, I'm genuinely curious why BSDs are more popular as firewalls.
Because of pf[1]. It's just a very capable firewall with a pleasurable configuration language.
[1] https://www.openbsd.org/faq/pf/
Agreed, `pf` is a delight to use.
Borrowing a demonstration from https://srobb.net/pf.html
Note the last matching rule wins, so you put your catch-all, "block all", at the top. Then in this case fxp0 is the network interface. So they're defining where traffic can go to from the machine in question, in this case any source as long as it's to port 22, 25, 80, 110, or 123 for TCP, and either 110 or 631 for UDP. The general shape of a rule is:

<action> <direction> on <interface> proto <protocol> to <destination> port <port> <state instructions>
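A sketch of what such a ruleset looks like, modeled on the srobb.net page (fxp0 and the port lists come from the description above):

```
# pf.conf -- last match wins, so the catch-all goes first
block all
# allow the machine to reach these services out on fxp0
pass out on fxp0 proto tcp from any to any port { 22, 25, 80, 110, 123 } keep state
pass out on fxp0 proto udp from any to any port { 110, 631 } keep state
```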
One can further parametrize things with, e.g.,
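For instance, pf macros let you name an interface once and reuse it everywhere (names here are illustrative):

```
ext_if = "fxp0"
int_if = "em1"

block all
pass out on $ext_if proto tcp from any to any port { 22, 25, 80 } keep state
```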
The BSDs still tend to use device-specific names versus the generic ethX or location-specific ensNN, so if you have multiple interfaces, knowing about internal and external may help the next person who sees your code to grok it.

Doing the same thing with nftables is not really complicated either.
The documentation on BSDs, and in particular OpenBSD, are generally high quality
One thing unexpected I found when setting up an OpenBSD based router recently: the web isn’t riddled with low-quality and often wrong SEO and AI slop about OpenBSD like it is for Linux. I guess there just isn’t enough money to be made producing it for such a niche audience.
If you search up a problem, you get real documentation, real technical blog posts, and real forum posts with actual useful conversations happening.
I've used both and the main advantage is PF/ipfw syntax.
But now with nftables I actually am going back to RHEL on Firewalls. I want something ultra-stable and long lived.
I've been using OpenBSD and PF for nearly 25 years (PF debuted December 2001). Over those years there have been syntax changes to pf.conf, but the most disruptive were early on, and I can't remember the last syntax change that affected my configs (mostly NAT, spamd, and connection rate limiting).
During that time the firewall tool du jour on Linux was ipchains, then iptables, and now nftables, and there have been at least some incompatible changes within the lifespan of each tool.
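As a concrete taste of that incompatibility, here is the same "allow inbound ssh" rule in the two current tools (assuming the stock table/chain layout for iptables; nftables ships with none, so you create them yourself):

```
# iptables: table and chain structure is built in
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables: entirely different syntax, and you define the hooks first
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; }'
nft add rule inet filter input tcp dport 22 accept
```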
OpenBSD has an additional leg up in that incompatible changes between releases are concisely, clearly, and consistently documented, e.g. https://www.openbsd.org/faq/upgrade78.html The last incompatible pf.conf syntax change I could find was for 6.9, nearly 5 years ago, https://www.openbsd.org/faq/upgrade69.html
And iptables has been around since 2001, and can still be used.
Alternatively you can use nftables which has only been around for the past 12 years.
I realise that one change per quarter century is possibly a little fast paced for BSD but I can cope with it.
PF is also from 2001. But its roots go further back, I once used a very PF-like syntax on a Unix firewall from 1997. I forget which type of Unix it was, maybe Solaris.
Either way, I don't think there is any defense for the strange syntax of IPtables, the chains, the tables. And that's coming from a person who transitioned fully from BSD to Linux 15 years ago, and has designed commercial solutions using IPtables and ipset.
You left off ipfwadm before ipchains.
PF is really nice. (Source: me. Cissp and a couple decades of professional experience with open source and proprietary firewalls).
And if they are already using it on openbsd, it’s almost certainly an easier lift to move from one BSD PF implementation to another versus migrating everything to Linux and iptables.
Agreed. Once you've gone pf you'll pine for it when working with anything else.
I've gotta me-too this. I've written any number of firewall rulesets on various OSes and appliances over the years, and pf is delightful. It was the first and only time I've seen a configuration file that was clearly The Way It Should Be.
The only configuration language I like more is Juniper. I picked that up and became fluent in it within about a day.
Let me extend the question to what’s wrong with NFTables on Linux? It’s a different way to manage Netfilter, out of IPTables
Because of PF or Packet Filter (the PF in pfSense FWIW): https://en.wikipedia.org/wiki/PF_(firewall)
We migrated to a linux nftables based firewall.
I never liked iptables, but nftables is pretty nice to write and use.
And with one "flowtable" line added to your nftables.conf you can even in theory have faster routing when conntrack is active
https://thermalcircle.de/doku.php?id=blog:linux:flowtables_1...
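A sketch of what that one-line addition looks like in context (interface names and priorities here are illustrative, loosely following the general flowtable examples):

```
table inet filter {
    flowtable f {
        hook ingress priority 0
        devices = { eth0, eth1 }
    }
    chain forward {
        type filter hook forward priority 0; policy accept;
        # shortcut established flows past the regular forwarding path
        ip protocol { tcp, udp } flow add @f
    }
}
```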
I am not very familiar with FreeBSD's pf, but my understanding is that FreeBSD integrated it from OpenBSD and then proceeded to put a fair amount of work into making it more performant (multi-core), while OpenBSD put most of its work into improving pf's features. At this point the two pf's are different enough that they are not really compatible: OpenBSD can't really use much of FreeBSD's multi-core work, and FreeBSD (a) is a lot more hesitant about breaking backwards compatibility and (b) would need to get the queuing structures to work with its kernel. In short, FreeBSD pf is like using an old, fast version of OpenBSD pf.
In fact if you asked me to explain the difference between obsd and fbsd it is exactly this. fbsd focuses on performance and obsd focuses on ergonomics.
The pf maintainer in FreeBSD has been doing a ton of work to bring more recent improvements over from OpenBSD, trying to bring them in sync as much as possible without breaking compatibility: https://cgit.freebsd.org/src/log/sys/netpfil/pf
The state of affairs you described is much less the case now than in the past.
> There are some things about FreeBSD that we're not entirely enthused about.
Damn I wish that they had expanded on this a bit (not to start a flame war, but to give readers a fuller picture, or even to prod the FreeBSD community into "fixing" those things)
edit: typo fix
One issue, as they point out, is that we now do minor version updates every 6 months, and you need to update for each one. (We have a 3-4 month period where both are supported, but e.g. 15.0 will be EoL before 15.2 is released.)
We are aware that this isn't ideal for some users, but it was a necessary tradeoff. We might be able to improve this in the future (possibly as "security updates for the base system, but no ports support") but no guarantees.
It does seem like a weird omission doesn’t it?
I find it a bit odd that they seem to have gone from having OpenBSD as the standard and are now moving to FreeBSD and Ubuntu.
I am not sure what role these computers that may transition to Ubuntu play; there are probably good reasons, I wish he had expanded on it.
The computers that moved from OpenBSD to Ubuntu were our local resolving DNS servers. These don't use PF and we also wanted to switch from our previous OpenBSD setup to Bind, where we were already running Bind on Ubuntu for our DNS master servers. The gory details were written up here: https://utcc.utoronto.ca/~cks/space/blog/sysadmin/UsingBindN...
We may at some point switch our remaining OpenBSD DHCP server to Ubuntu (instead of to FreeBSD); like our DNS resolvers, it doesn't use PF, and we already operate a couple of Ubuntu DHCP servers. In general Ubuntu is our default choice for a Unix OS because we already run a lot of Ubuntu servers. But we have lots of PF firewall rules and no interest in trying to convert them to Linux firewall rules, so anything significant involving them is a natural environment for FreeBSD.
(I'm the author of the linked-to article.)
Why do you say OpenBSD stopped "supporting bind"? You mean they don't include it in the base system anymore since the switch to unbound?
I mean... it's one pkg_add away. It's a weird constraint to give yourself if that was the problem, considering you absolutely had to install it on your replacement Ubuntu servers.
The short version is that we wound up not feeling particularly enthused about OpenBSD itself. We have a much better developed framework for handling Ubuntu machines, making it simply easier to have some more Ubuntu machines instead of OpenBSD machines, and we also felt Bind on Ubuntu was likely to be better supported than a ports Bind on OpenBSD. If everything else is equal we're going to make a machine Ubuntu instead of OpenBSD.
Who is the "we" here? I've poked around a few pages on this fellow's site, and apparently haven't found the right one to answer that.
Where he works, or perhaps more accurately where his co-workers do, which is the domain: the University of Toronto.
So you don't like OpenBSD, but you do like Ubuntu?
This person seems like they know what they are talking about and have given it serious thought, but I cannot fathom how you could make such a conclusion today.
If they're concerned about performance, yeah. OpenBSD doesn't do the basics that you need to get the most out of your SMP hardware; there's no way to set CPU affinity, at least from userland, and it's clear that this sort of work is not a priority for OpenBSD. It's not easy work, but FreeBSD has done it. Beyond CPU affinity, you also need your network structures set up to reduce lock contention: things like fine-grained locks, hashed subtables and/or "lockless" tables, and configuring the NICs as close as possible to one queue per core while keeping flows on the same queue, which is pinned to a single core, so that the per-flow locks never contend and don't bounce between cores.
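As a sketch of the userland affinity being described: on Linux (and, via cpuset(1)/cpuset_setaffinity(2), on FreeBSD) a process can pin itself to a core from userland. Python's wrapper over the Linux call makes for a compact illustration; OpenBSD simply has no equivalent syscall to wrap:

```python
import os

# Restrict the current process (pid 0 = self) to CPU 0, then read the
# mask back. os.sched_setaffinity wraps Linux's sched_setaffinity(2);
# FreeBSD exposes the same idea through cpuset_setaffinity(2).
os.sched_setaffinity(0, {0})
print(sorted(os.sched_getaffinity(0)))  # -> [0]
```

A daemon like an authoritative DNS server would do this per worker thread, matching each worker to the core handling its NIC queue.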
Ubuntu/Linux do have reasonable performance, but I think they prefer PF firewalls, so that makes Linux a non-option for firewalls.
Personally, I don't really care for PF, but it offers pfsync, which I do care for, so I use it and ipfw... but I need to check in, I think FreeBSD PF may have added the hooks I use ipfw for (bandwidth limits/shaping/queue discipline).
It's not necessarily that OpenBSD can't implement the basics, it's that they don't want to. A lot of the high-performance features introduce potential security vulnerabilities. Their main focus is security and correctness. Not speed.
> A lot of the high-performance features introduce potential security vulnerabilities.
I am particularly-reminded of speculative execution optimizations allowing attacks like Spectre and Meltdown in 2017.
https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...
> "there's no way to set cpu affinity at least from userland"
How is that even possible. What's the excuse?
On Windows, setting process affinity has been around since the Windows NT days.
The gp already answered you, "this sort of work is not a priority for OpenBSD."
OpenBSD is a small, niche operating system, and it really only gets support for something if it solves a problem for someone who writes OpenBSD code. In a way, this is nice, because you never get half-assed features that kinda-sorta work sometimes, maybe. Everything either works exactly as you'd expect, or it's just not there.
I love OpenBSD, but there are some tasks it's just not suited for, and that's fine, too.
I was pretty sure I had seen a mailing list post from Theo about it, but I can't find it now. The only relevant thread I can find is this one [1], which pretty much just says "we don't do it for userland", but does say it is available inside the kernel. I have also seen mentions in recent OpenBSD release notes of binding PF things by Toeplitz hash, which indicates the right progression there... but it's still hard to get max performance from a simple network daemon without binding the userland threads to the same core the kernel processes the flow with. Once your daemon starts doing substantial work, binding CPUs isn't as important, but if it's something like an authoritative DNS server or HAProxy with plain sockets, the performance benefit from eliminating cross-core communication can be tremendous.
[1] https://marc.info/?l=openbsd-misc&m=152507006602422&w=2
It's the OS's job to manage resources.
The OS doesn't always know everything about workloads to be able to make the right decisions.
It appears they have different requirements for those machines. They state the Ubuntu machines are for non-firewall applications. Ubuntu and Debian can be configured relatively easily for a number of workstation and server roles.
Also many IT professionals that have used Linux will be familiar with a Debian or a Debian derivative such as Ubuntu. That simply isn't the case with OpenBSD.
I recently installed OpenBSD on my old laptop to try it out and I found it difficult even though I used to use it at University back in the late 2000s.
Amusingly, I started using OpenBSD in 2000 because after repeatedly trying to get Debian running on my PowerPC G4 and failing (for months), I discovered that OpenBSD had a PowerPC port that immediately worked. Honestly the hardest part about OpenBSD is the installer, which has a few small improvements over the one back in 2000, but is essentially the same. I'm sure that kids these days will turn to ChatGPT for help, but I learned most of what I knew about hacking on a UNIX machine from OpenBSD's amazingly good man pages; they are still great.
I went through the process again just this weekend, because the disk in my firewall died. It's obvious that they continue to put a lot of effort into the OS. It's too bad that I can't use it as my daily driver, because I gladly would.
Their ports to older non-x86 stuff does work well but I can't justify using it as a desktop OS. Too many compromises you have to make without a lot of benefit IME.
What I find with the BSDs is that it is difficult to look up how to do something quickly via a web search. Yes, there is a man page that will tell you how to use whatever, but knowing where you are supposed to look to solve "why doesn't two-button scroll work" isn't immediately obvious.
I was mucking around with FreeBSD on my old laptop and it works well, and it isn't too bad to get stuff going if you are following the handbook, but there is still that "how do I get <thing> working". I think the OS is good underneath, but I kinda want two-finger scrolling to just work when I install Cinnamon and X.
Debian is at the stage now of install, you have desktop and most stuff just works at least on a x86-64 system. If I want to install anything, it is download deb / flatpak and I am done.
BSD documentation is great because the systems change so little; you don't find twenty out-of-date references on how to configure your DHCP client.
But as a desktop OS, yes they lack in a lot of areas, mainly hardware support/laptop support.
> BSD documentation is great because the systems change so little; you don't find twenty out-of-date references on how to configure your DHCP client.
While there are out-of-date tutorials in Linux land, at least I can find out how I might do something and then figure things out from there. I do know how to use the man page system; however, simply knowing what to look for is the biggest challenge.
e.g. I was trying to configure two-finger scrolling. The FreeBSD wiki itself appeared out of date. It looks like you use the libinput X driver package (which I forget the name of now) and do some config in X. It would be nice if this was covered in the handbook, as I think a lot of people would like two-finger scrolling working on their laptops.
> But as a desktop OS, yes they lack in a lot of areas, mainly hardware support/laptop support.
Actually, FreeBSD does quite well hardware-wise, at least on some of the hardware I have. My laptops are all boring corp business refurbs that I know work well with Linux/BSDs.
The problem is that often I require using software which does not work on FreeBSD/OpenBSD or is difficult to configure.
The other issue is that there are things in pkgs that appear to have been broken for quite a while (at least with FreeBSD), so configuring a VM with a desktop resolution above something relatively low isn't possible, at least with QEMU.
> "how do I get <thing> working"
OpenBSD is very different from FreeBSD in this regard. OpenBSD mostly works out of the box.
"Mostly" is doing a lot of heavy lifting. FreeBSD also mostly works out of the box.
I am quite familiar with the BSDs. I've tried NetBSD, OpenBSD and FreeBSD when I used to muck around with this stuff daily.
> but I cannot fathom how you could make such a conclusion today.
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/UsingBindN...
It's actually pretty shocking how poorly and sluggish OpenBSD performs, and it's not meaningfully more secure than a properly-configured Linux or freebsd box.
I'm honestly not sure what its use case is in 2025, beyond as a research OS.
[flagged]
>Theo has gone on record stating that it IS a research OS, which allows them to prototype new ideas like pledge().
Makes sense
>before the Linux distros add all sorts of digital AIDS to it. Remind me again how the xz backdoor happened and why OpenBSD wasn't affected.
Why are OpenBSD people always so rude and defensive? Sheesh
> Why are OpenBSD people always so rude and defensive? Sheesh
Because there is a limited number of maintainers and a clearly stated goal/direction. There are also a lot of people requesting features who don't actually contribute to the goal, or who don't even use OpenBSD. It is a way to manage resources.
There is also the sentiment "if you need it you implement and maintain it" hence if someone is requesting without any investment it doesn't seem like they are serious.
Digital AIDS?
Jesus.
There seems to be one group of people that takes offence at people being hyperbolic (which this is) and another group that doesn't. I personally find it baffling why anyone would be bothered by that comment.
[dead]
For me, the only drawback for corporations is the 6 month upgrade. There is no LTS on OpenBSD.
I use OpenBSD as a workstation and it works great, but in a production environment I doubt I would use OpenBSD for critical items, mainly because there is no LTS.
It is a sad state of affairs, because companies do not want, nor will they want, a system you need to upgrade so often, even if its security is very good.
On the other hand though, updates on OpenBSD are the most painless updates I have ever done. I am more concerned about its usage of UFS instead of something more robust for drives.
> updates on OpenBSD are the most painless updates I have ever done
I see we have a post-syspatch (6.1, 2017), post-sysupgrade (6.6, 2019) OpenBSD user in our midst. ;D
You are positively a newbie in the OpenBSD world!
Some of us are old enough to remember when OpenBSD updates were a complete pain in the ass, involving downloading shit to /usr/src and compiling it yourself!
So did Ubuntu or Debian. But I am talking about nowadays.
And yes, back then I wasn't using OpenBSD.
> So did Ubuntu or Debian
According to Wikipedia, Debian has had apt since 1998.
My point is OpenBSD didn't have binary updates until well into the 2000s, as mentioned above: initially in 2017 with syspatch, and then finally full coverage in 2019 when sysupgrade came along.
As you can see in some old OpenBSD mailing list posts[1], there was a high degree of resistance to the very idea of binary updates, with people even being called trolls when they brought up the subject[2] or being told they "don't understand the philosophy of the system"[3].
I just felt it was an important point of clarification on your original post. Yes, I agree, OpenBSD updates are painless ... now, today. But until very recent history they were far from painless.
[1] https://misc.openbsd.narkive.com/IOf20unK/openbsd-binary-upd... [2] https://marc.info/?l=openbsd-misc&m=117255609026625&w=2 [3] https://marc.info/?l=openbsd-misc&m=117256318700031&w=2
The only real problem I can remember with Debian was the transition from Lilo to grub ...
And there was something with ... Postgres ? At some point? Both of course upstream issues that couldn't be helped.
> So did Ubuntu or Debian. But I am talking nowadays.
For Debian, I've managed to do in-place upgrades of a few machines from Debian 5 to 10 at the last place I worked.
I'm grossly generalizing here, but it seems like OpenBSD boxes seem to be commonly used for the sorts of things that don't write a lot of data to local drives, except maybe logfiles. You can obviously use it for fileservers and such but I don't recall ever seeing that in the wild. So in that situation, UFS is fine.
(IMO it's fine for heavier-write cases, too. It's just especially alright for the common deployment case where it's practically read-only anyway.)
I've used it as a mail server, a web server, and a database (postgres) server. It's also my main desktop OS. Did/does fine, but I never really stressed it. I would certainly welcome a more capable filesystem option, as well as something like logical volumes, but I can't say that ufs has ever failed me.
You'll definitely want to have it on a UPS to avoid some potentially long and sometimes manual intervention on fscks after a power failure. And of course, backups for anything important.
Yet companies insist on enabling unattended upgrades, at least for "security" patches, which have introduced breakage or even their own vulnerabilities in the past (CrowdStrike was a recent dramatic example).
OpenBSD will just tell you that maintaining an LTS release is not one of their goals and if that's what you need you'll be better served by running another OS.
I think it depends on your needs. Working in corporate environments with 1000+ hosts, LTS operating systems are a big help. On the other hand, for smaller cases, call it a work group or smaller, I think OpenBSD provides a base system that doesn't typically make drastic changes, along with a ports collection that does a pretty good job of keeping up with third party applications. It's a good balance. I've recently seen some "immutable" Linux distributions that are basically spins of upstream distributions. They leave the inherited distribution mostly alone and load the extras using Flatpak or the like. Sounds similar to BSD ports in a way.
> For me, the only drawback for corporations is the 6 month upgrade. There is no LTS on OpenBSD.
One of the reasons the OP is moving to FreeBSD: five-year support cycles for the major release branches.
* https://www.freebsd.org/security/#sup
OK, just don't upgrade it; the focus is security by default anyway. There are a lot of OpenBSD boxes online with huge uptimes running older versions.
I don't understand why this has 29 points and no comments. What's so amazing about this?
It's actually about computers unlike many of the threads today on HN?
Discussion threads about performance?
I just like the reference to 10G ethernet. It can't become normal soon enough.
It's ironic because I chose Linux over FreeBSD due to 10G performance. Be it a TrueNAS box with dual Xeons and a Marvell 10G card or a ThinkCentre Tiny with an Intel NIC running OPNsense, I could never get anywhere near full 10G throughput. Switch to Linux (TrueNAS SCALE/OpenWrt) and it just worked at full speed.
Although the article also uses weasel words like "sufficiently good" performance so it sounds like their BSD 10G performance isn't that good either.
I can't remember the 10G firewall figures we got in testing off the top of my head, but we didn't max out the 10G network; I think we were getting somewhere in the 8G range. This is significantly better than our OpenBSD performance but not quite up to the level of full network speed or the full speed that two Linux machines can get talking directly to each other over our 10G network. I also suspect that performance depends on which specific NICs you have due to driver issues. The live performance of our deployed FreeBSD firewalls is harder to assess because people here don't push the network that hard very often (although every so often someone downloads a big thing from the right Internet source to get really good rates).
(I'm the author of the linked-to article.)
I imagine a near future where TCP/IP stacks and device drivers are interchangeable between operating systems. In Linux, NDISwrapper [1] enabled using Windows drivers in Linux, but it's a wrapper (with all due respect to that project).
[1] https://en.wikipedia.org/wiki/NDISwrapper
Microsoft started out with BSD's TCP/IP stack, but dropped it for their own (back in Windows NT 3.5 apparently - https://news.ycombinator.com/item?id=41495551)
Adam Barr, formerly with Microsoft, goes into some detail about it here: https://web.archive.org/web/20051114154320/http://www.kuro5h...
Sorta, but only with ancient Windows XP drivers. It was a useful stopgap of its era, but Linux networking drivers have more than caught up in the meantime.
You mean like DPDK?
I'd think something like Rump Kernel's is a closer analogue: https://en.wikipedia.org/wiki/Rump_kernel
That sent me looking it up. It seems that NetBSD, as the only one, has a rump kernel, but it also looks like work on it stagnated around 10 years ago. That could be because the guy doing a thesis on them, moved on. There is quite some bitrot when following links. Do you know what happened? Were they a failure? Maybe they were surpassed by other OS architectures?
Just more navel-gazing from UTCC. I still don't understand why all of these submissions get upvoted so often. 10G performance just really isn't that interesting anymore, maybe around 2005 when it was the new kid on the block. If they were talking about squeezing firewall performance out of a box with a couple of 200g or 400g adapters and on run-of-the-mill CPUs and no offloading or something like Netflix publishes with their BSD work, I'd be more interested.
Despite 10G being far from new it seems like somehow they're still not even achieving that with FreeBSD. Would be more interesting to see an upgrade to Linux.
It reads a bit like someone LARP'ing a sysadmin. Perhaps they're students or something.
Nah, Chris is definitely a real sysadmin and his blog has been pretty popular in this space for a long time.