Ah, so I read the comments that Eric Schrock had about why Sun doesn't want to help with Linux kernel development, and as a Linux kernel developer I thought I would attempt to address his major points, and explain why Linux developers feel this way. It's still early in the morning for me, so this might seem a little disorganized, but what the hell:
Ok, his first statement:
The main reason we can't just jump into Linux is because Linux doesn't align with our engineering principles, and no amount of patches will ever change that. In the Solaris kernel group, we have strong beliefs in reliability, observability, serviceability, resource management, and binary compatibility. Linus has shown time and time again that these just aren't part of his core principles, and in the end he is in sole control of Linux's future
To state that the Linux kernel developers don't believe in those good, sound engineering values is pretty disingenuous. Hey, one of the first things that people think about when Linux is mentioned is "reliability" (hm, I used to have a link showing that, can't find it anymore...) This is pretty much a baseless argument just waiting to happen, and as he doesn't point out anything specific, I'll ignore it for now.
He continues on with specific issues:
Projects such as crash dumps, kernel debuggers, and tracing frameworks have been repeatedly rejected by Linus, often because they are perceived as vendor added features.
Ah, ok, specifics. Let's get into these:
Crash dumps. The main reason this option has been rejected is the lack of a real, working implementation. But this is being fixed. Look at the kexec-based crashdump patch that is now in the latest -mm kernel tree. That is the way to go with regards to crash dumps, and it is showing real promise. Eventually that feature will make it into the main kernel tree. But to state that Linux kernel developers just reject this feature is to ignore the fact that it really wasn't ready to be merged into the tree.
Kernel debuggers. Ah, a fun one. I'll leave this one alone, except to state that I have never needed to use one in all my years of kernel development. But I know other developers who swear by them. So, to each their own. For hardware bringup they are essential, but for the rest of the kernel community they would be extra baggage. I know Andrew Morton has been keeping the kernel debugger patches in his tree, for those people who want this feature. And since Linux is about choice, and this choice is offered to anyone who wants to use it, to say it is not available is, again, not a valid argument.
Tracing frameworks. Hm, then what do you call the kprobes code that is in the mainline kernel tree right now? :) This suffered the same problem the crash dump code did: it wasn't in a good enough state to merge, so it took a long time to get there. But it's there now, so no one can argue the point anymore.
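To be concrete about what's already in mainline: kprobes lets you hook almost any kernel function at runtime and run your own handler when it is hit. Here is a minimal sketch of a kprobes module (it needs the kernel headers and build system, so it won't compile as a standalone program; `do_fork` is just an example of a probe target, and the `.symbol_name` field reflects the kprobes API as found in later kernels):

```c
#include <linux/module.h>
#include <linux/kprobes.h>

/* Called just before the probed instruction executes. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	printk(KERN_INFO "kprobe hit: %s\n", p->symbol_name);
	return 0;
}

static struct kprobe kp = {
	.symbol_name = "do_fork",	/* example target function */
	.pre_handler = handler_pre,
};

static int __init probe_init(void)
{
	/* Arms the probe; fails if the symbol can't be probed. */
	return register_kprobe(&kp);
}

static void __exit probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");
```

Load it with insmod and every fork on the system logs a line; no reboot, no special kernel build. That is a tracing framework, in the tree, today.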
Which brings me to the very valid point about how Linux kernel development differs from any other OS development. We (kernel developers) do not have to accept any feature that we deem improperly implemented, just because some customer or manager tells us we have to have it. In order to get your stuff into the kernel, you must first tell the community why it is necessary, and so many people forget this. Tell us why we really need to add this new feature to the kernel, and assure us that you will stick around to maintain it over time. Then, if the implementation is sane and it works, it will be added. A lot of these "enterprise" kernel patches did not have sane implementations, and their necessity was never successfully explained to the kernel community (and no, "but we did it in INSERT_YOUR_DYING_OS_NAME_HERE, so Linux needs it" is not a valid argument.)
Then Eric continues:
Not to mention the complete lack of commitment to binary compatibility (outside of the system call interface). Kernel developers make it nearly impossible to maintain a driver outside the Linux source tree (nVidia being the rare exception), whereas the same apps (and drivers) that you wrote for Solaris 2.5.1 will continue to run on Solaris 10.
First off, this is an argument that no user cares about. Applications written for Linux 1.0 work just fine on the 2.6 kernel. Applications written for even older versions of Linux probably work just fine too; I don't know if anyone has tried it in a long time. The Linux kernel developers care a lot about userspace binary compatibility. If that breaks, let us know and we will fix it. So to say that old apps written for old versions of Solaris will work on the latest Solaris is not a valid argument at all.
But driver compatibility is a different issue. Of all the people who get upset about this, I rarely see anyone who understands why Linux works this way. The Linux kernel does not have, and never will have, binary driver compatibility, because the in-kernel APIs change constantly as the kernel improves; freezing them to accommodate out-of-tree binary drivers would block exactly the kind of rework that keeps the code healthy. Merge your driver into the tree and it gets fixed up for free every time an internal interface changes.
Ok, the final point I want to address:
Large projects like Zones, DTrace, and Predictive Self Healing could never be integrated into Linux simply because they are too large and touch too many parts of the code. Kernel maintainers have rejected patches simply because of the amount of change (SMF, for example, modified over 1,000 files).
This is just wrong. Look at the LSM kernel change that I helped get into the kernel tree. It touched pretty much every part of the kernel, including almost every core kernel structure. And yet the LSM developers worked with the kernel community, addressed their concerns, and got the changes accepted into the tree. Linux now benefits from an extremely flexible security model because of this effort. So to say it is impossible to get such a change into Linux is false.
It's ok that Sun doesn't want to use Linux for their systems; no one is forcing them to, and Solaris is quite nice in places. And when Sun realizes the error of their ways, we'll still be here, making Linux into one of the most stable, feature-rich, and widely used operating systems in the world.
posted Thu, 23 Sep 2004 in [/diary]