I think the most practical reason not to flag which bugs are security bugs is to avoid helping black-hat hackers by painting a giant neon sign over them, and that should be more than enough.
I think all the other explanations are just doublethink. Why? If "bugs are just bugs" is really a true sentiment, why is there a separate disclosure process for security bugs? What does it even mean to classify a bug as a security bug during reporting if it's no different from any other bug report? Why are fixes developed in secret, and embargoes sometimes invoked? I guess some bugs are more equal than others?
As mentioned in the article, every bug is potentially a security problem to someone.
If you know that something is a security issue for your organization, you definitely don't want to paint a target on your back by reporting the bug publicly with an email address <your_name>@<your_org>.com. In the end, it is actually quite rare (given the size of the code base and the popularity of Linux) that a bug has a very wide security impact.
The vast majority of security issues don't affect organizations that are serious about security (yes really, SELinux eliminates or seriously reduces the impact of the vast majority of security bugs).
The problem with that argument is that the reports don't necessarily come from the organization for which it's an issue. Security researchers who are neither affiliated with nor impacted by any such issue still report it this way (e.g. Project Zero reporting issues that don't impact Google at all).
Also, Android uses SELinux and still has lots of kernel exploits. Believing SELinux solves the vast majority of security issues is fallacious, especially since it's primarily about securing userspace, not the kernel itself.
> If you are forced to use encryption to report security problems, please reconsider this policy as it feels counterproductive (UK government, this means you…)
It is a valid thing to point out, when setting up https on gkh's site would take all of 15 minutes (Let's Encrypt or Cloudflare or whatever you wish).
Things should be https by default these days. There's zero downside anymore.
Sure, but that's ultimately a pretty unlikely attack vector. An attacker still needs to exploit some unknown vulnerability of your web browser in order to get something malicious going.
I basically expect that sort of attack to only be pulled off by a state actor or by a black hat convention for the lolz.
Reading a blog post about linux security? Do you actually care if the NSA/FBI/CIA/FDA/USDA or anyone else knows you read this particular blog post?
I could understand this argument if we were talking about a social media site, or something more substantial. But a blog post?
> authenticity
It's a linux security blog post. While it's technically possible for a MITM to get in between and inject false information... about linux security? Is that really a real threat?
> integrity
Maybe a real problem, assuming a malicious MITM is trying to target you. But, I suspect, there are other avenues that'd be more fruitful in that case. Just hoping that someone would visit an http site seems like a far-fetched concern.
Sometimes I dream about a 100% secure OS. Maybe formal verification is the key, or Rust, I don’t know. But I would love to know that I can't be hacked.
The problem is that for the overwhelming majority of use cases, the isolation features that are violated by security bugs are not being used for real isolation, but for manageability and convenience. Virtualization, physical host segregation, etc. are used to achieve greater isolation. People don't necessarily care about these flaws because they aren't actually exposed to the worst-case preconditions. So the amount of contributor attention you could get behind a "100% secure OS" might not be as large as you are hoping. Anyway, if you want to work on such things, there are various OS development efforts floating around.
Isolation is one thing, correctness is another. You may have architecturally perfect, hardware-assisted isolation, but triggering a bug can breach it. This is how a typical breakout from a VM or a container, or a privilege escalation, happens.
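As a hedged illustration of that point (a made-up device interface, not code from any real kernel or hypervisor): a one-line validation bug in an ioctl-style handler is all it takes for attacker-controlled input to write outside its sandbox, no matter how well the surrounding isolation is designed.

```c
/* Hypothetical, heavily simplified driver code for illustration only --
 * not taken from any real kernel source. */
#include <stddef.h>

#define CFG_SLOTS 16

struct cfg_req {
    size_t       index;   /* attacker-controlled, arrives from userspace */
    unsigned int value;
};

static unsigned int cfg_table[CFG_SLOTS];
unsigned long notify_hook;   /* happens to live near the table in this object */

/* Looks like an ordinary "forgot to validate input" bug... */
int cfg_ioctl_buggy(const struct cfg_req *req)
{
    /* BUG: no check that req->index < CFG_SLOTS.  A large index writes
     * outside cfg_table -- e.g. over notify_hook -- so a report that reads
     * "passing garbage crashes the machine" is really a write primitive. */
    cfg_table[req->index] = req->value;
    return 0;
}

/* The fix is one bounds check; the security impact is what makes it urgent. */
int cfg_ioctl_fixed(const struct cfg_req *req)
{
    if (req->index >= CFG_SLOTS)
        return -1;   /* reject out-of-range indices (-EINVAL in real code) */
    cfg_table[req->index] = req->value;
    return 0;
}
```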
There is a difference between a provably secure-by-design system and a formally proven secure implementation, like seL4.
Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they had adopted the stance the rest of the industry recognizes as correct.
From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).
To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.
In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.
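To make that concrete, here is a toy sketch (ordinary userspace C, not kernel code) of the pattern: the bug report says "crash when the session is closed twice", the obvious patch is a NULL check, and the part that turns it into an exploit-chain ingredient is the dangling pointer to freed memory.

```c
/* Toy example for illustration; not taken from any real project. */
#include <stdio.h>
#include <stdlib.h>

struct session {
    void (*on_close)(void);   /* called when the session is torn down */
    char buf[64];
};

static void legit_close(void) { puts("session closed"); }

static struct session *cur;

void open_session(void)
{
    cur = calloc(1, sizeof(*cur));
    cur->on_close = legit_close;
}

/* BUG: the pointer is freed but never cleared, so a second close (or any
 * later use) touches freed memory.  The symptom in a bug report is just
 * "random crash on double close"; the security story is that an attacker
 * who can reallocate that chunk controls what on_close points to. */
void close_session_buggy(void)
{
    cur->on_close();
    free(cur);            /* missing: cur = NULL; */
}

void close_session_fixed(void)
{
    if (!cur)
        return;
    cur->on_close();
    free(cur);
    cur = NULL;
}

int main(void)
{
    open_session();
    close_session_fixed();
    close_session_fixed();   /* second close is now a harmless no-op */
    return 0;
}
```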
> From their perspective, on their project, with the constraints they operate under, bugs are just bugs.
That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently as they should.
> almost any bugfix at the level of an operating system kernel can be a “security issue” given the issues involved (memory leaks, denial of service, information leaks, etc.)
On the level of the Linux kernel, this does seem convincing. There is no shared user space on Linux where you know how each component will react/recover in the face of unexpected kernel behaviour, and no SKUs targeting specific use cases in which e.g. a denial of service might be a worse issue than on desktop.
I guess CVEs provide some of this classification, but they seem to cause drama amongst kernel people.
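A hedged example of the "information leak" case mentioned in that quote (an invented reply struct and helper, not real kernel code): the bug as reported is just "the reserved field contains junk", but the junk is whatever was on the stack, copied straight out to the caller.

```c
/* Illustrative sketch of an uninitialized-memory info leak; the struct and
 * helper are invented for this example, not taken from the kernel. */
#include <string.h>

struct stats_reply {
    unsigned char flags;      /* 1 byte, followed by compiler padding */
    unsigned int  packets;
    unsigned int  reserved;   /* never filled in by the handler */
};

/* Stand-in for copy_to_user(): just copies bytes to the caller's buffer. */
static int copy_out(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);
    return 0;
}

int get_stats_buggy(void *dst)
{
    struct stats_reply r;     /* stack memory, holds whatever was there before */
    r.flags = 1;
    r.packets = 42;
    /* BUG: the padding bytes and r.reserved still contain old stack contents
     * (possibly pointers or key material), and all of it is copied out. */
    return copy_out(dst, &r, sizeof(r));
}

int get_stats_fixed(void *dst)
{
    struct stats_reply r;
    memset(&r, 0, sizeof(r)); /* clear padding and unused fields first */
    r.flags = 1;
    r.packets = 42;
    return copy_out(dst, &r, sizeof(r));
}
```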
You have a pretty strongly worded stance, but you don't provide an argument for it. May I suggest you detail why exactly you think their perspective is wrong, apart from "a lot of problems in the past could have been avoided"?
Linus has been very clear on avoiding the opposite, which is the OpenBSD situation: they obsess about security so much that nothing else matters to them, which is how you end up with a mature 30-year-old OS that still has a dogshit unreliable filesystem in 2026.
To paraphrase LT, security bugs are important, but so are all the other bugs.
OpenBSD doesn't really stress about security so much as they made it their identity and marketing campaign - their OS lacks too many basic capabilities a security-focused OS should have.
> To paraphrase LT, security bugs are important, but so are all the other bugs.
Right, this is wrong, and that's the problem. Security bugs as a class are always going to be more important than certain other classes of bugs.
And their ‘no remote holes’ claim is true for a base install with no packages, not necessarily a full system.
I think the OpenBSD approach of secure coding is outdated. The goal should always have been to take human error out of the equation as much as possible. Rust and other modern memory-safe languages move things in that direction; you don't need ultra-strict coding standards and a bible of compiler flags.
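For what it's worth, here is the kind of slip being talked about, in a generic C sketch (nothing OpenBSD- or kernel-specific): the off-by-one compiles cleanly and passes casual testing, and catching it is exactly what the coding standards and compiler flags exist for, whereas a bounds-checked language turns it into a compile error or an immediate panic instead of silent memory corruption.

```c
/* Generic illustration of the error class, not taken from any project. */
#include <stdio.h>
#include <stddef.h>

#define NAME_LEN 8

void copy_name_buggy(char dst[NAME_LEN], const char *src)
{
    /* BUG: when src is NAME_LEN characters or longer, the loop exits with
     * i == NAME_LEN and the terminator below writes one byte past dst. */
    size_t i;
    for (i = 0; i < NAME_LEN && src[i]; i++)
        dst[i] = src[i];
    dst[i] = '\0';
}

void copy_name_fixed(char dst[NAME_LEN], const char *src)
{
    /* Reserve the last byte for the terminator. */
    size_t i;
    for (i = 0; i + 1 < NAME_LEN && src[i]; i++)
        dst[i] = src[i];
    dst[i] = '\0';
}

int main(void)
{
    char name[NAME_LEN];
    copy_name_fixed(name, "a_rather_long_username");
    printf("%s\n", name);   /* prints the safely truncated name */
    return 0;
}
```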
Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.
First of all, "security" is undefined. Second, nearly every bug can be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?
Or should a bug be classified as a security bug only when someone can think of a way to exploit it on the spot during triage? In that case only a small subset of your "security" bugs are classified as such.
> nearly every bug can be be exploited in a malicious way
This is a bit contextually dependent. "This widget is the wrong color" is probably not a security issue in most cases, unless the widget happens to be a traffic signal, in which case it is a major safety concern.
Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry. At a certain point, many engineers get frustrated trying to pick apart the difference between all these ways of classifying the code they are writing and just want to get on with making the system work better.
> SELinux eliminates or seriously reduces the impact of the vast majority of security bugs
There must be a way to ship a docker image without a kernel, since it doesn’t get used for anything anyway.
LOL
> I basically expect that sort of attack to only be pulled off by a state actor or by a black hat convention for the lolz.
Ads injection is a relatively benign kind of tampering. It could be much more creative and sinister.
> But, I suspect, there are other avenues that'd be more fruitful in that case.
Cool. So social engineering it is. You are your own worst enemy anyways.
> it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.
QED.
> So should every bug be classified as a security bug?
It is meaningless in all cases.
> Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry.
Nonsense.
Such as?