ELC: The embedded Linux nightmare
Linux (and the kernel in particular) is, says Thomas, a sort of "mutual benefit society" which is jointly maintaining a common good. This society will only work as long as the stakeholders give to it as well as taking from it. The giving part, unfortunately, is often lacking in the embedded world.
There are a lot of reasons given for the use of special, closed, vendor kernels in embedded situations. According to Thomas, these reasons do not hold water. They include:
- "Vendor kernels are developed by experts." Thomas looked at some
specific vendor kernels to see what level of expertise was to be found
there. In one kernel from a system-on-chip vendor, this allegedly
2.6.10 kernel had patches to about 10,000 different files - out of
just over 16,000 total. Another kernel, from a distribution vendor,
had modified 8,000 files. Yet another, from a board vendor, had only
patched 6500 files. Says Thomas: "don't ask me why" these vendors
felt the need to make so many changes.
To give some perspective, the patch from 2.6.10 to 2.6.11 only touched 5600 files. These vendor kernels are far larger than the (invasive) real-time preemption patch set, which only hits 725 files. These massive patches are not a sign of expertise - quite the opposite. Experts don't mess with things which do not need changing, and they get their changes back into the mainline.
- "Vendor kernels offer better time to market." Thomas's counterexample
here was an email from a vendor which had been struggling with a
(self-inflicted) driver problem for a month. Working with the
community, instead, allows vendors to avoid making silly mistakes and
to fix them when they do happen.
- "Users prefer vendor kernels." This is only true when there is no
choice. When there is a choice, users prefer kernels with
ongoing development and maintenance, and for which they can get
support from the community.
- "Vendor kernels help Linux." That help is hard to see. Thomas
pointed out this
discouraging note from the folks at Cirrus:
I think we will just maintain our own port for the 93xx. I am not going to want to support code not written by Cirrus Logic. So I give you kuddos for getting to the port first, but using GIT makes it easy to remove your work and add ours.
It is hard to see how this sort of attitude helps Linux in any way. Instead, we have vendors tossing aside the work done by the community in the name of "not invented here."
What really flows from vendor kernels is user lock-in, community detachment, and waste of resources. None of these are good for users, for the vendor, or for the Linux community as a whole. They are, instead, the embedded Linux nightmare.
As an example of community detachment, Thomas offered the linux-arm.org web site.
This site, Thomas points out, was launched in 2005 - ten years after the community ARM port was launched. It does not even do the courtesy of linking to the real community ARM site. It is, instead, an example of a vendor trying to create its own community which has little to do with the people actually creating the code.
With regard to waste of resources: a Linux developer recently rewrote a system-on-chip driver to make it suitable for the mainline. In the process, a 7,000-line driver became a much better 1,300-line driver. Using the COCOMO model, Thomas estimates that about $180,000 was wasted in the creation of this vendor driver.
An even more egregious example is a fork of the real-time preemption tree by "an unnamed company" a couple of years ago. No patches have ever been published from this fork, and there has not been a single email exchange with the preempt-rt developers. The resulting code is still based on a kernel from about the 2.6.14 era, and is completely unmaintainable. Unfortunately, a customer now wants serial ATA support, putting this company in a difficult situation. Thomas asks: "why the hell is this company using Linux?" He estimates that at least ten staff-years have been wasted in this fork.
The end result of this nightmare can be seen in the form of unhappy
customers, a bad reputation for free software, fragmentation of the code
base, a feeling of being ripped off among kernel developers, and wasted
resources. In addition, Thomas fears that the kernel development process
risks being dominated by the enterprise Linux companies, which do work with
the community. If the embedded world wants to avoid all of these problems,
it needs to start talking with the community and getting its code into the
mainline kernel. Then Tux can get a good night's rest, and world
domination will get back on schedule.
Index entries for this article:
Conference: Embedded Linux Conference/2007
ELC: The embedded Linux nightmare
Posted Apr 17, 2007 23:57 UTC (Tue) by lutchann (subscriber, #8872) [Link]
In the process, a 7,000-line driver became a much better 1,300-line driver. Using the COCOMO model, Thomas estimates that about $180,000 was wasted in the creation of this vendor driver.
Not having attended the talk I don't have any background on this example, but a lot of times vendor drivers like this are the result of gluing the hardware component's DV test code to a kernel driver skeleton. Naturally, this doesn't produce the best driver in the world, but it does result in a shorter time to market and requires less initial effort on the part of the hardware vendor than writing a proper driver. It's not very fair to assume that providing bloated, ugly drivers means the vendor doesn't care about working with the community.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 2:05 UTC (Wed) by eli (guest, #11265) [Link]
Something not really addressed is the fact that it does take effort, and a good bit of time, to get large changes merged upstream. I've done it, and it's rewarding, but you need to have support for it in the organization or your developers won't bother.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 4:41 UTC (Wed) by sepreece (guest, #19270) [Link]
Gleixner's talk had a distinct implication that the vendors were at fault and the community had a right to expect more of them.
The vendors, of course, see things differently. Unlike computers, whose users have been taught to expect to have to replace the system software regularly, devices are generally expected to remain the same from the day they are shipped to the day they are discarded. Most users would not like to see that change.
Vendors, therefore, tend to like to get the software very stable before they ship the product, and to use the same software base to build a given product (and updates, refreshes, and successors of that product) for a long time.
So, vendors tend to be working on old releases, even though they want current features. Thus it's not surprising that he found a vendor release that was based on 2.6.10 but was actually closer to a later release, based on patch size.
The point is that this is not evil on the vendor's part, it's just a difference in needs and expectations. In many cases it's not that the vendors don't want to "give back", it's that they can't. When they find a bug or make an enhancement, their base is so far from the community's that the community has no interest in what they have to say.
The vendors aren't dumb. They know that community gives leverage and that having more eyes working on something will make it better, faster, and many of them would love to have their work mainstreamed so they don't have to maintain it. But they are often working on things that literally nobody else in the world cares about (like devices for custom hardware) or on old releases or just have different priorities than the community developers.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 5:59 UTC (Wed) by drag (guest, #31333) [Link]
The way he was talking, it was more than an us-vs-them situation.
It's that the embedded Linux developers aren't listening to the advice that the Linux kernel developers are trying to give them.
It's not that the Linux kernel developers expect more 'back' from the embedded developers; it's just that the embedded developers are being very foolish by not working closer with them.
For example, it's been stated many times that with the Linux kernel the easiest and cheapest way to get hardware supported across a wide range of kernels, from old to new, is to get your driver into the mainline kernel.
That is, if you're a developer who, for whatever reason, has to support a wide variety of kernels, it's MUCH MUCH cheaper and more effective to get your driver into the kernel, where it gets the latest updates and bug fixes, and then _backport_ your driver to older kernels.
Starting off developing with older kernels and then trying to maintain your own patches going forward onto newer kernels is suicide.
With the Linux kernel, backporting is cheaper and much easier than forward porting. In addition, once you get your driver into the mainline kernel, that driver is essentially self-maintaining. Usually only a little bit of maintenance and regression checking should be needed for each new kernel. So it's a win-win.
At least that is what I've been told and what I've read. It seems that hardware developers almost universally see dramatic benefits from working with mainline kernel developers.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 10:43 UTC (Wed) by gouyou (guest, #30290) [Link]
How is that different from your usual distribution? When you install a server, you do not want to update the distribution for the next 3 to 5 years, until you either retire your server or repurpose it.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 12:35 UTC (Wed) by khim (subscriber, #9252) [Link]
You can keep an old version on your server, but active distribution development always goes on in the "unstable/testing" branch (where the latest and greatest stuff lives).
Embedded developers worry about a lot of things and like to develop in the stable branch. Then, a few years down the road, they try to switch from 2.6.10 to 2.6.20 (or something like that) when it's clear that they actually need to switch.
But the Linux kernel is not designed to be used this way! Either you change your driver over time while tracking changes in the kernel - or you need a MAJOR overhaul when you need to switch. Nobody is trying to keep things compatible and keep the amount of changes needed low!
And this is when things go straight to hell: embedded developers blame Linux kernel developers (why the hell do they change internals so much?) and Linux kernel developers blame embedded developers (you were asked a year ago if you still needed devfs or not - there were no complaints, so why are you whining NOW?).
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 20:43 UTC (Wed) by vmole (guest, #111) [Link]
Either you change your driver over time while tracking changes in the kernel - or you need a MAJOR overhaul when you need to switch.
Or, you get your driver *into* the standard kernel, and let the main kernel developers keep it up to date with respect to kernel changes, so that all you have to worry about is fixes specific to your device, many of which will *also* be provided by the kernel community.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 22:34 UTC (Wed) by bronson (subscriber, #4806) [Link]
Then they say, "But... give up control of my code? Do you have the brain worms??"
The Cirrus reply mentioned in the article had this attitude. Companies seem to believe that, unless their engineers are the sole committers, their driver will turn into, I don't know, Paris Hilton or something. They won't let "their" source code be modified by the more skilled and qualified Linux kernel team, even though it produces a fairly significant cost savings for them.
It's bizarre. I hope this attitude is just a hold-over from the proprietary 90s and disappears as the new generation of programmers takes over.
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 3:02 UTC (Thu) by markcox (guest, #29577) [Link]
It's bizarre, how people engaging in a seemingly 'civilized' discussion resort to making negative remarks about celebrities to make their point. Is this lwn or digg.com ?
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 7:21 UTC (Thu) by Zenith (guest, #24899) [Link]
It's bizarre, how people engaging in a seemingly 'civilized' discussion resort to making negative remarks about celebrities to make their point. Is this lwn or digg.com?
I actually found the comment quite funny. Why should we always keep the discussion insanely clean and to the point? Even programmers need to laugh once in a while, and I did :) And really, do you think that Paris Hilton cares much for what programmers have to say about her? ;)
ELC: The embedded Linux nightmare
Posted Apr 21, 2007 21:39 UTC (Sat) by bronson (subscriber, #4806) [Link]
Sorry markcox, I did not intend to offend. Allow me to rephrase...
They're afraid "their driver will turn into, I don't know, a frothing cow or something."
Is that easier on your sensibilities?
ELC: The embedded Linux nightmare
Posted Apr 22, 2007 0:49 UTC (Sun) by markcox (guest, #29577) [Link]
haha. Yes much better thanks :)
ELC: The embedded Linux nightmare
Posted May 2, 2007 7:57 UTC (Wed) by blujay (guest, #39961) [Link]
Does this mean that you equate Paris Hilton to a frothing cow?
For the record, I quite enjoyed the original, along with the brain worms. =)
ELC: The embedded Linux nightmare
Posted May 1, 2007 18:19 UTC (Tue) by ParisHilton (guest, #45014) [Link]
Yes, I find this statement offensive! I've been a dedicated LWN reader for years and I've never been so insulted!
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 8:17 UTC (Thu) by nix (subscriber, #2304) [Link]
The companies' developers obviously don't believe that, or given the overwhelming male-dominance of the software development field, they'd all be *rushing* to hurl all their code at the kernel devs.
;}
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 5:01 UTC (Thu) by khim (subscriber, #9252) [Link]
Or, you get your driver *into* the standard kernel, and let the main kernel developers keep it up to date with respect to kernel changes, so that all you have to worry about is fixes specific to your device, many of which will *also* be provided by the kernel community.
Not in the books, I'm afraid. The main kernel developers will probably not accept a non-working driver for non-working hardware into the main kernel, and for embedded developers the point where the driver works and the hardware is available is the endpoint: it exists, it works, what else would you need?
Any sane PHB will stop development at this point: it's time to produce and sell the gadget, not bother with useless "mainstream integration"! The engineer can be used for some other products...
Yes, a few years down the road (when the time comes to upgrade the base version of the kernel) this approach will prove troublesome, but to understand that you need to look a few years down the road - rare for a PHB.
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 7:06 UTC (Thu) by net_bh (guest, #28735) [Link]
Any sane PHB will stop development at this point: it's time to produce and sell the gadget, not bother with useless "mainstream integration"! The engineer can be used for some other products...
It is hard to believe that embedded developers are doing one-off gadgets _all the time_. In my experience, there is usually a product line, a few products on a common platform, and most drivers can be carried forward across the product line with few changes. And even across SoC platform generations, a surprising amount of code can be reused.
So merging with mainline does help.
And even if you are doing one-off gadgets, mainlining today ensures that 5 years down the line when you want to do an update on the gadget, your code has been maintained for free in the latest mainline kernel.
Yes, a few years down the road (when the time comes to upgrade the base version of the kernel) this approach will prove troublesome, but to understand that you need to look a few years down the road - rare for a PHB.
I have good PHBs then. It is our policy that as soon as the product is released, we start merging the code back into upstream/mainline, etc. So at any time, we deal with a very minimal (mostly non-existent) diff with mainline.
ELC: The embedded Linux nightmare
Posted Apr 19, 2007 21:41 UTC (Thu) by drag (guest, #31333) [Link]
The way it sounds to me is that embedded developers are simply very used to having to rewrite everything all the time.
That's what they expect, that's what they do. So they make sure to do it as quick and dirty as possible and move onto the next project. Minimal time invested, minimal effort wasted.
In contrast, the kernel developers, in order to work most effectively, are trying to enforce a development methodology on these people. Effectively, the Linux developers are telling them that instead of going quick and dirty and getting finished as soon as possible, they should go quick and clean and never ever stop developing.
Culture clash.
ELC: The embedded Linux nightmare
Posted Apr 20, 2007 14:03 UTC (Fri) by Max.Hyre (subscriber, #1054) [Link]
I have good PHBs then.
Oxymoron. It means that your Bs aren't PH.
ELC: The embedded Linux nightmare
Posted Apr 20, 2007 14:54 UTC (Fri) by sepreece (guest, #19270) [Link]
"In my experience, there is usually a product line, a few products on a common platform and most drivers can be carried forward across the product line with few changes."
Yes, but the OS version used for the platform is typically not changed over the life of the product line (or changed only at very long intervals), so there's usually no need to maintain the drivers over that period except for hardware or feature tweaks, which would have to be done separately anyway.
That said, it would make sense to push the drivers to mainstream anyway, for the usual benefit of community review. The problem in doing that is that by the time the first product is ready to ship the kernel version is not current and bringing the drivers up to current would be added work that would be hard to justify just by the possibility of finding bugs that haven't already manifested. [In many cases the drivers can't be posted prior to shipment because they would expose product details that we want to keep secret as long as possible for competitive reasons.]
I think we're going to get better at this, over time, but it's a learning process. Thomas's advice was useful; I just thought his talk was a little one-sided.
"We want more drivers, no matter how 'obscure' [...]"
Posted Apr 19, 2007 16:42 UTC (Thu) by GreyWizard (guest, #1026) [Link]
[...] devices are generally expected to remain the same from the day they are shipped to the day they are discarded.
Did I miss the part where Gleixner advocates forcing users to upgrade the firmware on their mobile phones at gunpoint? Working with and contributing to the community doesn't force anyone to change firmware on devices that have already shipped.
The point is that this is not evil on the vendor's part, [...]
As far as I can see Gleixner attributes the problem to hubris and ignorance, not evil.
When [the vendors] find a bug or make an enhancement, their base is so far from the community's that the community has no interest in what they have to say.
Being so far from the community's sources is an unmistakable sign of a serious software design mistake. There's no way these vendors know more about integrating kernel features than Gleixner and the rest of the kernel community.
But they are often working on things that literally nobody else in the world cares about (like devices for custom hardware) [...]
"We want more drivers, no matter how 'obscure' [...]"
Posted Apr 20, 2007 1:45 UTC (Fri) by sepreece (guest, #19270) [Link]
Gosh, you put a more negative spin on what I said than I intended. I enjoyed Thomas's talk (and the chat we had this afternoon) and agree with many of his points. I plan to take some of his suggestions back to work to see if I can sell them.
I'm not sure I consider "hubris and ignorance" a lot nicer to be called than evil; at least evil implies you're doing it on purpose and with awareness. I don't believe he suggested any such thing, just that we were missing an opportunity and could gain by working differently. We're not ignorant of the choice, we have just chosen differently. We don't think we know more than Thomas, except perhaps about the specifics of our business and our needs.
Being far from the current version is just a business choice, not a design failure. We build platforms that last five years or more with only tweaks in the components, rather than replacements. It's hard to argue that this should be done differently. Version shifts are typically done at the next major rev of the platform, but bugs are found over the whole life of the platform.
"We want more drivers, no matter how 'obscure' [...]"
Posted Apr 20, 2007 15:50 UTC (Fri) by GreyWizard (guest, #1026) [Link]
I don't believe he suggested any such thing, just that we were missing an opportunity and could gain by working differently.
He certainly phrased it more tactfully than I did, but people usually miss opportunities either because they don't know about them (ignorance) or because they think they know better and are mistaken (hubris).
Being far from the current version is just a business choice, not a design failure.
By this reasoning every design decision is actually a business choice and there is no such thing as a design mistake.
We build platforms that last five years or more with only tweaks in the components, rather than replacements. It's hard to argue that this should be done differently.
Perhaps that's why no one is arguing that. Gleixner seems to be arguing that the particular way vendors attempt to reach that goal is flawed, not that the goal itself is a problem.
Version shifts are typically done at the next major rev of the platform, but bugs are found over the whole life of the platform.
No device vendor needs to change 10,000 kernel files just to have bug fixes relevant to their devices when a complete operating system distribution for general purpose computers changes only 8,000. As Gleixner says, the reasons vendors give for use of special, closed, vendor kernels don't hold water.
"We want more drivers, no matter how 'obscure' [...]"
Posted Apr 20, 2007 21:56 UTC (Fri) by filker0 (guest, #31278) [Link]
I worked on a project as a contractor that used a kernel 2.6.10 distribution obtained through Timesys on a PPC440GX. Because of the process at the company the work was done for, and the industry standards at work, we ended up unable to change the kernel version as new kernels came out, because we would have had to start the approval process for the kernel from scratch for each upgrade. The kernel release will never change for the life of the product, though it may be patched and the application software updated. There is only one customer for the product. The hardware is unique to the platform, and any new version of the platform will use different hardware that would not be compatible with the drivers.
Because of the above, it was decided not to attempt to mainstream the drivers that we wrote. Having the community maintain the drivers for us would be nice, but how is anyone going to test them without the custom ASICs and FPGAs that they control?
I believe that it is for reasons like the above that many embedded developers don't try to put their stuff back in the kernel -- they end up using out-of-date kernel versions, they don't update the kernel version for the life of the product, and the hardware is custom and unique to the particular "box".
"We want more drivers, no matter how 'obscure' [...]"
Posted Apr 21, 2007 12:52 UTC (Sat) by farnz (subscriber, #17727) [Link]
And, unfortunately, your employer missed the gain from mainstreaming the drivers. The community obviously can't test the drivers for you, but there have been changes to the kernel that might result in your drivers needing janitorial-grade changes (the change of the argument list for interrupt handlers, as one example - LWN maintains a list of changes). These changes will be made by the community for you if the driver is mainstreamed.
While the changes will only be compile-tested, if your employer decides to make the jump from 2.6.10 to 2.6.20 (for example), they'll find that the drivers at least compile and don't break the kernel. This reduces the porting load; instead of having to find out what's changed, update your drivers, get them building, test them, and debug them, you're down to just testing the drivers and debugging them as needed. Further, you can use git to look at what's been done to your drivers, and thus have a good chance of spotting cases where someone's misunderstood what the hardware does as they clean up the driver.
It's a difficult way of thinking to come to from a proprietary world; what other "upstreams" will maintain code they cannot test in a building and theoretically working fashion?
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 6:16 UTC (Wed) by zipdisk (guest, #8589) [Link]
I think the point Thomas is trying to make is basically that vendors are not embracing the full concept of using Open Source. I've been on both sides of the fence (working for a vendor and being part of the community), and I have to admit that in either case you tend to see the other side as the guilty party.
But in the end, I think the real problem is not who is guilty of what, but how to fix the "bad" relationship. It is true that vendors many times do not release source code back to the community (not only on Linux but on other Open Source operating systems), and it is true that sometimes the community is not interested in what the vendor is doing. It is true that sometimes a vendor finds problems and tries to get help from the community and the community does not help at all, and it is also true that sometimes the community brings a lot of help but the vendor does not give a thing in return.
This topic is more complex than it seems, and for people outside the embedded world it is difficult to understand why this happens.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 8:28 UTC (Wed) by nim-nim (guest, #34454) [Link]
It's all a matter of control.
A private fork means you can do whatever short-term changes your PHB asks of you. It means you don't have to answer to community reviewers in addition to your boss (whom very often you know how to distract). It means you're paid to reinvent the wheel instead of doing more difficult stuff. It means you can slip in pilfered code of dubious origin without detection. For those reasons many developers are actively hostile to working with mainline. They can easily convince their managers with bad arguments if management is not educated about the Linux ecosystem. Especially since management sometimes wants the "big picture" and ignores the little details that made Linux successful.
So far evangelization has focused on developers. This is IMHO a big mistake. Working through mainline is a big win mid- and long-term, but this kind of prospective view is asked of management, not developers.
It's all too easy to frighten management with the short-term drawbacks of working through mainline if there's no authoritative voice telling them it's worth the pain. Even if you manage to enlighten a few developers in an organisation, their peers can easily contain them.
What's needed is more successful big names like Intel, Dell, Red Hat... making presentations on the strategic benefits of working through mainline. That would give management of smaller companies the backbone necessary to resist short-term FUDing.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 8:40 UTC (Wed) by Hanno (guest, #41730) [Link]
Are the slides available?
Slides
Posted Apr 18, 2007 15:19 UTC (Wed) by tglx (subscriber, #31301) [Link]
http://tglx.de/private/tglx/celf2007/celf-2007-keynote.pdf
tglx
Slides
Posted Nov 22, 2012 18:12 UTC (Thu) by mabshoff (guest, #86444) [Link]
http://elinux.org/images/f/fd/Celf-2007-keynote-Gleixner.pdf
I guess tglx's website had been cleaned up sometime in the last 5 years :)
Cheers,
Michael
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 9:29 UTC (Wed) by simlo (guest, #10866) [Link]
Can we get this article for "free"? I would like to send a link to some of my co-workers :-)
Esben
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 9:43 UTC (Wed) by Hanno (guest, #41730) [Link]
Use the "subscriber link" feature to do that.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 10:43 UTC (Wed) by alex (subscriber, #1355) [Link]
It's not always that way. When I last worked on an embedded project I did make an active effort to submit patches into the mainline. I understood very well the advantages of being able to closely track the mainline.
Of course, back in those days I was trying to do it with a linuxsh CVS overlay on top of the mainline kernel, while trying to keep stuff not worth submitting (like BSPs for the board) separate from more general changes (like core processor changes). I suspect nowadays using git would have made my life a lot easier in juggling these things around.
Having said that, this was driven by me as an engineer, not by management. It's a shame that since I left the company (when it was taken over) I've not seen any further activity on the developer lists. Since then the product has made it to market, but they don't make the GPL parts of the source code available. Having said that, I doubt any of their customers actually care about that.
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 16:32 UTC (Wed) by timschmidt (guest, #38269) [Link]
Care to name a product?
ELC: The embedded Linux nightmare
Posted Apr 18, 2007 17:28 UTC (Wed) by alex (subscriber, #1355) [Link]
The Vivid DVR (http://www.baxall.com/vivid/). Basically it's a networked CCTV digital video recorder.
Why does this happen?
Posted Apr 26, 2007 12:29 UTC (Thu) by endecotp (guest, #36428) [Link]
> 7,000-line driver became a much better 1,300-line driver
Let's consider why this happens. These people are not completely stupid. Yet rather than produce a 1300-line driver, they produced a 7000-line driver. It can only be that it wasn't obvious to them how to achieve what they wanted in 1300 lines: they have written 5700 lines of "unnecessary" stuff, probably because it was easier overall for them to do that than to understand how to do it "properly". I can think of two fundamental issues:
1. Kernel documentation needs to be better. (Or, the kernel design needs to be easier to understand so that it "works" with less documentation.)
2. Mailing lists need to be more attractive venues for discussion of this sort of thing. I'm not sure why they are currently not attractive to these developers, but it's clear that they aren't. If the fundamental issue is corporate privacy then there probably isn't much that can be done, but there may be other reasons. This could be a good subject for a research project....
Why does this happen?
Posted Apr 28, 2007 23:48 UTC (Sat) by tglx (subscriber, #31301) [Link]
> Let's consider why this happens. These people are not completely stupid.
Right, they are not stupid, but foolish.
> It can only be that it wasn't obvious to them how to achieve what they wanted in 1300 lines: they have written 5700 lines of "unnecessary" stuff, probably because it was easier overall for them to do that than to understand how to do it "properly"
How do you explain that:
- the old driver was producing so many problems that the company asked for a rewrite
- the new driver had a 20% performance gain on the first try
Sigh. I have seen so much commercial-quality code in the last 10 years that I really wonder why I haven't gotten eye cancer yet.
Seriously, looking back at my own code I can clearly see the improvement imposed on my coding style and my way of thinking about problems by the community review and collaboration process.
It seems that many of these improvements have simply been prohibited by company policies and stubbornness on both the management and the developer side.