A New Mindcraft Moment?


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]


1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to relevant issues of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to ancient stuff like the Mindcraft study and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use and discard in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream; imagine the shitstorm if bug reports get handled with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples aren't statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you'll have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it rather hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.


Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]


Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.


Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]


I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you have (probably unintentionally) dismissed all of the parent's arguments by pointing at its presentation. The tone of PaXTeam's comment reflects the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,


Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]


why, is upstream known for its fundamental civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?


Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]


Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]


No Argument


Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]


Please don't; it doesn't belong there either, and it especially doesn't need a cheering section of the sort the tech press (LWN usually excepted) tends to provide.


Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]


OK, but I was thinking of Linus Torvalds


Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]


Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]


Why do you assume that only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume that someone giving an organization (ahem, PaXTeam) money is the only answer. (Not meant to impugn PaXTeam's security efforts.)


The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix it.


And yes, I do realize the commercial Linux distros do much (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's a lot more involved than just that.


Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]


Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]


I think you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?


Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]


they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i'm the author of PaX (part of grsec), yes, talking to me about grsec issues makes it the best way to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...] it also contained quite a few groan-worthy statements.
nothing is perfect, but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic quality: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized or, worse, all the money went to the pointless exercise of fixing individual bugs and the associated circus (which Linus rightfully despises, FWIW).


Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]


Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]


Right now we have developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Unfortunately, the surrounding culture among developers is to hit functional targets, and occasionally performance targets. Security targets are often overlooked. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a job that will take a sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here is a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $evidence, and note carefully that you're staring down the barrel of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I would prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.


Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


And about that fork barrel: I would argue it's the other way around. Google forked and lost already.


Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]


Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]


So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?


Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]


I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.


Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]


Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]


I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's at a time like this, when something you appear to be an "expert" at is in demand, that you show cooperation and willingness to participate, because it's an opportunity. I'm rather surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in an average career, a handful at the most.
Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "Mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, those developers who exploit the opportunity will prosper from it.
I feel old even having to write that.


Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]


Perhaps there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream.
It's entirely reasonable to prefer working out of tree, providing the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if it meets their needs.


Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]


Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]


You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be much more convincing if you gave up on the onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of the kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal; you whined more about kernel attitudes and eventually even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you imply above.
> obviously i won't spend time to write up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something one should be told in advance (or heck, be part of some public guidelines so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying "I'm smart and I know the problem, now hand over the money" doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you light into those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly usual first-stage business model, but it does somewhat depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
Now here's some free advice in my field, which is helping companies align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you will have to bear in mind that it can be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to plan B, and you might even have a plan A selling a rollup of upstream-tracked patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.


Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]


> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII prepared to fund projects at that level? If not
> we would all end up with lots of unfinished and partially broken features.
please show me where i got an answer to that question. without a definitive 'yes' there is no point in submitting a proposal, because that's the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you do know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example, because it's also the one that Linus censored for no good reason, which made me decide never to send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers, there's nothing more to the story.
as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as one can find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).


Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]


In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone can run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.


Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]


what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that web form (never mind the mailing lists) for as well? in other words, you admit that my question was not really answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your obviously idiotic attempts at proving whatever you're trying to prove; as they say even in kernel circles, code talks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. nice job there, James.). as for world domination, there are many ways to achieve it, and something tells me that you're clearly out of your league here, since PaX has already achieved it. you are running code that implements PaX features as we speak.


Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]


I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the preceding threads, that you wish to withdraw that request?


Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]


Please provide one that's not wrong, or less wrong. It will take much less time than you've already wasted here.


Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]


anyway, since it's you guys who have a bee in your bonnet, let's check your level of intelligence too. first figure out my email address and project name, then try to find the commits that claim they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).


Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]


*shrug* Or don't; you're only sullying your own reputation.


Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]


I wouldn't either


Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]


Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]


Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]


Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]


Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam is not averse to outright lying if it means he gets to look right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)


Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]


> and that one commit you found that went in despite said ban
also, somebody's ban does not mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).


Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]


I don't see this message in my mailbox, so presumably it got swallowed.


Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


You're aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?


Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]


I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.


Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]


Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]


"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"
Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph mentions it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world, if you're criticizing that mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security?
As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the 5th in a long-running series following a pretty predictable time trajectory.
No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too.
-Brad


Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]


Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]


It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.


Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]


Unfortunately, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel folks when it comes to their attitude. I confess I have absolutely no technical capability on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of this stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...


Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]


Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]


Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill many of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So, it's not either/or. It's probably "it depends". But, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.


Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]


How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."


Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]


I guess that fact was too unpleasant to fit into Dijkstra's world view.


Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]


Indeed. And the interesting thing to me is that once I reach that point, tests are no longer enough - model checking at a minimum and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is enough because there are infinite interleavings of events, and my head just could not cope with working on this either at the computer or on paper - I found I could not intuitively reason about these things at all. So I started defining the properties needed and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both utterly obvious that this would happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
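(To make the combinatorial point concrete -- a back-of-the-envelope figure added here, not a claim from the comment above: even restricting to finite executions, $k$ processes each performing $n$ events admit $\frac{(kn)!}{(n!)^k}$ distinct interleavings, which for just $k=3$ and $n=10$ is already roughly $5.5\times10^{12}$ schedules, far more than any test suite can enumerate.)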


Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]


> Indeed. And the interesting thing to me is that once I reach that point, tests are no longer enough - model checking at a minimum and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".
But it's easy - by training I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.
Point is, you have to *layer* stuff, and look at things, and ask "how can I split bits off into 'black boxes' so that at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of similar objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to process just a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol


Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]


To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we might construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these things have already been argued).
The end result is a set of operations that, executed in arbitrary order, lead to a set of properties holding for the result and outputs. Thus the formal design is proved correct (with caveats regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
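(As a toy illustration of the proof-obligation pattern described above -- my own minimal example, not taken from the comment: for a state with invariant $n \le max$ and a delta operation specified by $n < max \wedge n' = n + 1$, the preservation obligation is $n \le max \wedge n < max \wedge n' = n + 1 \Rightarrow n' \le max$, which discharges trivially; an argument of that shape is repeated for each operation so that arbitrary compositions of them keep the invariant.)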


Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]


Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "cannot see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, another interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol


Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]


https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think that this talk is very relevant to why writing secure software is so hard..
-Dave.


Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]


While we're spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Honestly, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.
But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update.
Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Often these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors have to understand that their business model must be able to finance the resources providing the updates.
Overall I'm optimistic: networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.


Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]


The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: the people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to be released and embarrass people, it _seems_ as though those hacks are via much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they will often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers who get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way your shareholders will be aware of. So why fund security?


Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]


However, some effective mitigations at the kernel level would be very helpful for crushing cybercriminal/skiddie attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack methods (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it should be OK? Btw, offset2lib is useless against PaX/grsecurity's ASLR implementation.
For most commercial uses, more security mitigation within the software won't cost you more money. You still have to do the regression test for every upgrade anyway.


Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]


Keep in mind that I focus on external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.


Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]


I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is nice too, I guess.


Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]


Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]


Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]


Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]


Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]


(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)


Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]


I would just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible at the moment. Two problems, even, possibly.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to be able to say that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field.
Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we definitely need to enlighten the press on that, because it is not so easy to appreciate the effectiveness of security mechanisms (which, by definition, should prevent things from happening).
Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Meanwhile, all the resources go to those adult teenagers playing white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies who have yet to prove their usefulness fully (especially for peace protection...).
Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yes, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX team could be among the first to benefit from such a change.)
While thinking about it, I would not even leave white-hat or cyber-guys any hype in the end. That's more publicity than they deserve.
I crave for the day I will read in the newspaper: "Another of those ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and nevertheless managed to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic domain or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to have been unprofessional in this affair."


Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.


Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]


The state of the 'software security industry' is an f-ing disaster. Failure of the highest order. There are huge amounts of money going into 'cyber security', but it's usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has worthwhile experience and some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, literally, have far more control over how Walmart spends its money than over what your government does with theirs.)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone programs or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Unfortunately you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Companies like Red Hat have been massively beneficial in spending resources to make the Linux kernel more capable... however, they are driven by the need to turn a profit, which means they have to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats... assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses.
Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is much more devastating and poses a massively greater risk than an obscure Linux kernel buffer overflow problem. It's just not really important for attackers to get 'root' to gain access to the important data... usually all of which is contained in a single user account.
In the end it's up to people like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.


Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]


Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is often your money or mine: either tax-funded governmental resources or corporate costs that are directly re-imputed into the prices of products/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.)
I think it's time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
In the end, I think you are right to say that currently it's only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I'm right to say that this isn't normal; especially while some very serious people get very serious salaries to distribute, more or less randomly, some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their mind.


Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]


It even has a nice, seven-line BASIC pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005, and all the things that were obviously stupid ideas 10 years ago have proliferated even more.


Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]


Note that IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message.
If we are facing active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to demonstrate at a minimum (and more later on, of course).
Your reference's conclusion is especially nice to me. "Challenge [...] the conventional wisdom and the status quo": that's a job I would happily accept.


Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]


That rant is itself a bunch of "empty calories". The converse of the things it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value.
Personally, I think there's no magic bullet. Security is, and always has been throughout human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GrSec kernel hardening stuff so hard to apply to common distros (e.g. there's no reliable source of a GrSec kernel for Fedora or RHEL, is there?)? Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are plenty of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?


Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]


>There are a number of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so?
I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster hasn't struck yet, that the iceberg hasn't been hit. But it seems that the Linux development process is not overly reactive elsewhere.


Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]


That's an interesting question; certainly that is what they really believe, regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there just isn't enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.


Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]


The key issue with this domain is that it relates to malicious faults. So, by the time consequences manifest themselves, it is too late to act. And if the current commitment to an absence of deliberate strategy persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I'm waiting for the day when armed land-drones patrol US streets in the vicinity of their kids' schools for them to discover the feeling. The days when innocent lives will unconsciously depend on the security of (Linux-based) computer systems are not so distant; under water, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.


Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]


Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is really not that surprising: for hosting needs the kernel has been "done" for quite some time now. Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it's not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use grsecurity. I don't have any numbers, but some experience suggests that grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is almost irrelevant on nodes of a supercomputer or on a system running big commercial databases that are wrapped in layers of middleware. And mobile vendors simply do not care.


Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]


Linking


Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]


Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]


The assembled likely recall that in August 2011, kernel.org was root-compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to a vital question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) during a May 2013 edit, and there has not been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project found unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public autopsies of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of an autopsy on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who is responsible, then? Is anybody? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some facts? Rick Moen
rick@linuxmafia.com


Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]


Less seriously, note that if even the Linux mafia does not know, it must be the Venusians; they are notoriously stealthy in their invasions.


Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]


I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.


Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]


I beg your pardon if I was somehow unclear: That was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, some years prior, around 2002, and into many other shared Internet hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: How did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to gain root access is currently unknown and is being investigated'. Okay, folks, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) This is the sort of autopsy that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
rick@linuxmafia.com


Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


I have done a closer review of the revelations that came out soon after the break-in, and think I have found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole: per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:
- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and Linux Foundation sat on the information and failed to inform the public for those same multiple days.
- Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Sure, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.
I posted my best attempt at reconstructing the story, absent an actual report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts had been more forthcoming, we would know what happened for sure.) I do have to wonder: If there is another embarrassing screwup, will we even be told about it at all? Rick Moen
rick@linuxmafia.com
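For context on the /dev/mem detail above, here is a minimal, purely illustrative C sketch (not from the incident analysis, and assuming nothing about the kernel.org machines) of how one can check whether a kernel restricts /dev/mem reads of kernel RAM, the restriction (CONFIG_STRICT_DEVMEM) that the 2.6-era kernels described above reportedly lacked; the 16 MiB offset is arbitrary and the program must be run as root:

    /* Probe whether /dev/mem reads of RAM above the legacy low-memory range
     * are refused, as they are when the kernel is built with STRICT_DEVMEM. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char buf[16];
        int fd = open("/dev/mem", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }
        /* Attempt to read 16 bytes of physical memory at the 16 MiB mark. */
        if (pread(fd, buf, sizeof(buf), 16 * 1024 * 1024) < 0)
            perror("pread (refused: kernel RAM is protected)");
        else
            printf("read succeeded: /dev/mem is wide open here\n");
        close(fd);
        return 0;
    }

On a kernel with the restriction enabled the pread fails with EPERM; on a wide-open configuration it succeeds, which is the behaviour the researchers quoted above were pointing at.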


Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]


Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
-Brad


How about the long overdue autopsy on the August 2011 kernel.org compromise?


Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]


Thanks for your comments, Brad. I had been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I had heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README does not specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root. > Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but a tradeoff; you can poke the compromised live system for state information, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it is better to pull power to end the intrusion. Rick Moen
rick@linuxmafia.com


Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]


Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]


With "one thing" you mean those that produce these closed source drivers, proper?
If the "client product firms" simply stuck to utilizing parts with mainlined open source drivers, then updating their products would be a lot easier.


A new Mindcraft moment?


Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]


They have ring-zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it is game over.
Even tickle a bug in a "good" module, and it is probably game over - in this case quite literally, as such modules are usually video drivers optimised for games ...
