
LCE: Linux, hardware vendors, and enterprise distributors


By Jonathan Corbet
September 5, 2007

Enterprise distributions are an important part of the economic success story of Linux. The creation of highly stable, highly supported distributions has brought significant revenue streams to some distributors and enabled the deployment of Linux into many "mission critical" situations. Enterprise distributions encourage the commercial world to take Linux seriously. At LinuxConf Europe, however, your editor has stumbled into a few conversations which characterized enterprise distributions as one of the bigger problems the development community has now. Then a talk by Dirk Hohndel made that point again in a different context.

Dirk's talk was on how to get hardware vendors to support Linux. He knows what he is talking about: as the Linux CTO at Intel, Dirk is charged with, among other things, implementing Intel's commitment to provide free drivers for all of its hardware. His core point is that hardware vendors understand money better than anything else; getting them to support Linux will require showing them that it is in their economic interest to do so. To that end, he praised how Dell has taken care to put together hardware which is entirely supportable with free drivers to ship with Ubuntu pre-loaded. That sort of decision will quickly get the attention of the relevant vendors.

There were some suggestions on what to tell hardware vendors who are thinking about adding open source support for their products. Development in the open is crucial; drivers should be released early and made available for the community to work with. Intel did this with some of its early network drivers; the resulting level of interest and community participation exceeded all expectations. Vendors need to understand that they cannot design software just for their device, that they need to think bigger. This is a hard message for vendors to hear, but, in the long run, they benefit from a better kernel which will be better suited to their needs in the future.

It is important that software support be available immediately when the hardware is made available. If there is no driver for several months after the hardware release, competitors will have had time to get their answering products to market before Linux users can use the original product. That sort of time lag is forever in the hardware world. Vendors also need to continue to maintain their code after it gets into the mainline; there is nobody else who can ensure that it continues to work on all versions of the hardware.

One thing that the community could do to help would be to improve the tone of the discussion on our mailing lists. That tone is often quite hostile; it does not create a friendly environment for engineers working for hardware vendors who want to engage with the community.

There is another place where life gets difficult for hardware vendors, though; this is where the enterprise distributors come in. When Intel releases a driver for a new product, that driver goes into the mainline kernel. But the release cycle implemented by the enterprise distributors will not pick up that driver for as much as two years after it gets into the mainline. So enterprise customers are not able to make use of that hardware for a long time after its release, even though the driver is available.

Intel has competitors which will never release free drivers for their hardware. But they do put out closed-source modules for the enterprise distributions. So their customers are able to use that hardware from the outset.

In other words, Intel is being punished for playing by the rules and releasing their drivers to the community. This is exactly the wrong sort of incentive to create for hardware vendors. If they conclude that they will do better by just shipping binary-only modules, that is the course they will take.

Dirk's complaint echoes other conversations your editor has heard in the last few days. The development community has been very insistent in its message that code should be merged upstream, and that this merge should happen early. In the kernel area, the development cycle has been shortened to the point that changes find their way into a stable release after a maximum of a few months. But the enterprise distributions, by freezing kernels for years at a time, are pushing us back to the old, multi-year development cycle and sending a very different message to vendors.

The discussion of enterprise distributor policies is not new; see this article from last June for a previous installment. But this discussion appears to be reaching a new level of urgency, with some developers calling enterprise distributions one of the biggest problems the community is facing today. There is a fundamental conflict between the fast-moving development community and the sort of stasis that the enterprise distributions try to create. This conflict becomes especially acute when customers want the best of both worlds: no changes combined with fast-moving development and support for current hardware.

There are no easy solutions in sight. The enterprise distributions may be forcing a model from the proprietary software world on Linux, but there are reasons for the creation of that model in the first place. The kernel development community has gotten quite good at integrating vast numbers of changes while still producing a stable result, but any software which has recently seen significant changes will occasionally produce unwelcome surprises when dropped into a production environment. Slowing the rate of development is not an option, and it should be noted that the enterprise distributors are at the top of the list of companies which are setting the pace. Getting around this problem is going to be a challenge - but this community is good at facing challenges.

Index entries for this article
Conference: LinuxConf.eu/2007



LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 5, 2007 17:08 UTC (Wed) by bfields (subscriber, #19510) [Link]

he praised how Dell has taken care to put together hardware which is entirely supportable with free drivers to ship with Ubuntu pre-loaded.

According to Dell's site they've got two desktops and a laptop for sale in the US which ship with Ubuntu. The two desktops only ship with nVidia graphics, and if you didn't already know that that means proprietary drivers will be required for full support, then there's nothing on the website to tell you so.

I'm typing this on the laptop (a 1420n), which shipped with two proprietary drivers, one for the Intel wireless, one for the modem. I assume the need for the proprietary wireless driver will go away with a kernel upgrade at some point, and I don't ever expect to use the modem, so I'm not complaining--it's a fine laptop, and about as free-software compatible as what I could get anywhere else.

I can find desktops on Dell's website with Intel graphics. So I'm not sure what the criteria were for choosing these two desktops to install Ubuntu on (no doubt they had some good reason), but "entirely supportable with free drivers" can't have been at the top of the list.

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 5, 2007 17:42 UTC (Wed) by charlieb (guest, #23340) [Link]

> no doubt they had some good reason

Really? What gives you such confidence?

nVidia OK for 2D

Posted Sep 5, 2007 19:56 UTC (Wed) by ncm (guest, #165) [Link]

I ordered my Dell 620 laptop with an nVidia card, instead of the default Intel (which is in there too, but turned off), and operate it with the Free nv driver. I use the machine for work, so I never run 3D code. The nVidia card uses more power, but doesn't compete for bus bandwidth with the CPU. I had a choice of the ATI card, which today I would choose since ATI has announced support for a complete free driver.

nVidia OK for 2D

Posted Sep 5, 2007 23:59 UTC (Wed) by drag (guest, #31333) [Link]

I recently bought a Dell/Ubuntu 1420n laptop.

It's fantastic. Multimedia buttons, Sleep, Hibernation, 3D acceleration, 2D acceleration, SD card slot, Wireless, etc etc.

All of that works out of the box. OUT OF THE BOX. I bought it, opened it up, turned it on, and started using it. Total setup time was about 10 minutes.

After upgrading the 3D drivers to those provided by Gutsy I am using Fusion. (well actually now I am running Debian and it's just as kick-ass)

It actually uses less CPU time to move windows around in Fusion than it does on 2D-only metacity.

Sorry, Intel's open source drivers far outstrip Nvidia's open source drivers.

nVidia OK for 2D

Posted Sep 6, 2007 2:43 UTC (Thu) by ncm (guest, #165) [Link]

Yes, the Intel drivers are nice enough, but the hardware device they are driving steals memory bus cycles from the CPU to refresh the screen, and thereby slows down all CPU operations, all the time. An nVidia card with dedicated memory, however much you and I may hate its drivers, doesn't slow down the main CPU just by being on. As far as I know you can't buy a machine with Intel graphics that doesn't steal bus cycles. I'd like to be proven wrong.

So, (1) there may be rock-hard practical reasons to choose other than Intel graphics, and (2) having chosen nVidia, you're not obliged to use the proprietary nVidia driver. The really Free "nv" driver works great for most practical uses, including all of mine.

(One exception: if I'm running two X servers, on two VTs, suspend/resume gets very confused, but I don't know whether to blame the nv driver.)

nVidia OK for 2D

Posted Sep 6, 2007 10:52 UTC (Thu) by jond (subscriber, #37669) [Link]

I'd be interested to see some benchmarks of e.g. kernel compiles comparing the performance when you use the intel or the nvidia graphics.

nVidia OK for 2D

Posted Sep 6, 2007 15:57 UTC (Thu) by bfields (subscriber, #19510) [Link]

I know nothing about graphics cards, and I'm entirely willing to believe as you say that there were good reasons for choosing the ones they chose. But I'm still left wondering why he said that:
Dell has taken care to put together hardware which is entirely supportable with free drivers to ship with Ubuntu pre-loaded.
I find it hard to believe that "hardware which is *entirely* supportable with free drivers" was such a high priority, when you can find hardware (even among the hardware that Dell already sells, with other OS's!) that has obviously more complete Linux driver support.

nVidia OK for 2D

Posted Sep 10, 2007 2:54 UTC (Mon) by mdomsch (guest, #5920) [Link]

In fact, yes, we do carefully choose what hardware to sell in systems sold with Linux. Free drivers make it easier to develop, test, debug, and fix than non-free drivers. We intentionally made sure that Intel video chips, with their completely free drivers, are available in all the systems sold, though one can choose to buy the nVidia cards if you wish. Likewise for wireless - we include the Intel wireless solutions in the notebooks because of their excellent support of open source drivers. The only closed-source driver provided is for the modem in the notebooks, which is included even though relatively few people use modems any more.

See http://linux.dell.com/wiki/index.php/Ubuntu_7.04 for technical details on Dell's offerings.

Thanks,
Matt
Dell Linux Technology Strategist, Office of the CTO

nVidia OK for 2D

Posted Sep 10, 2007 16:39 UTC (Mon) by bfields (subscriber, #19510) [Link]

We intentionally made sure that Intel video chips, with their completely free drivers, are available in all the systems sold, though one can choose to buy the nVidia cards if you wish.

That's not what the ordering system says: I go to dell.com, choose "home and home office", then "open source pc's", and "shop for ubuntu" (from the "Helpful Links" sidebar on the left), and I get this page. The two desktops listed are "Inspiron Desktop 530 N" and "XPS 410N". The 530N gives a choice of "128MB NVIDIA GeForce 8300GS" or "256MB NVIDIA GeForce 8600GT-DDR3". The 410N only offers the former.

If that's a mistake, I'm happy to hear it! Let me know how to order one with the Intel hardware....

nVidia OK for 2D

Posted Sep 18, 2007 2:52 UTC (Tue) by mdomsch (guest, #5920) [Link]

I need to post a correction. I was mistaken about the XPS 410n; it does only come with nVidia graphics, not Intel as I believed.

As for the Inspiron 530n, Ubuntu 7.04 Feisty doesn't have drivers for the Intel 82G33/G31 Express graphics controller that is available in that system with Windows, so for the current product offering with Feisty we had to restrict (meaning not sell) that controller when sold with Ubuntu; Ubuntu 7.10 Gutsy is expected to have those drivers. We'll revisit the list of supportable hardware if/when we announce plans to deliver Gutsy.

I apologize for the confusion my earlier comment caused.
-Matt

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 6, 2007 0:22 UTC (Thu) by drag (guest, #31333) [Link]

> I'm typing this on the laptop (a 1420n), which shipped with two proprietary drivers, one for the Intel wireless, one for the modem. I assume the need for the proprietary wireless driver will go away with a kernel upgrade at some point, and I don't ever expect to use the modem, so I'm not complaining--it's a fine laptop, and about as free-software compatible as what I could get anywhere else.

I have a 1420n also. After a while of playing around with Ubuntu I switched back to using Debian Sid. It required a lot more setup, but I just feel more comfortable using Debian.

(I lost my desire to run multiple Linux distros a couple years ago. Not saying Debian is better than Ubuntu.. Ubuntu's default setup is FANTASTIC for new Linux users. None better. I was very impressed)

If you upgrade your kernel to one that supports the mac80211 protocol stack (I am using the wireless-dev branch) then you can take advantage of the updated iwl3945 drivers that do not require the regulatory daemon. (the kernel stuff is still open source, just the daemon is closed)

http://intellinuxwireless.org/?p=mac80211&n=Info

I didn't have any luck with the mac80211 package. The wireless-dev branch works OK, but I think I may have some problems with encrypted networks (haven't tried any yet).

But this is OK, because the touchpad is a very new ALPS version, not Synaptics like I thought.

It requires a one-line patch to the Linux kernel to get it to work with all the bells and whistles.

http://lkml.org/lkml/2007/8/4/184

How is Intel punished?

Posted Sep 5, 2007 17:20 UTC (Wed) by epithumia (subscriber, #23370) [Link]

I'm having trouble understanding how Intel is punished because their competitors have closed-source drivers available. Nothing stops Intel (or the enterprise distribution vendors) from producing compiled versions of existing open-source modules for the enterprise distributions; that would seem to put Intel and their competitors on an equal footing from the standpoint of driver availability, and Intel is still ahead because their source is open. So where's the loss from their standpoint? Surely they're not considering the simple act of opening their drivers as a loss they have to recoup somehow.

How is Intel punished?

Posted Sep 5, 2007 17:54 UTC (Wed) by xav (guest, #18536) [Link]

Because Intel plays by the rules, which means that when they develop their driver they help build the entire framework (be it the wireless stack or DRM drivers for accelerated 3D). So the changes are not easy to backport to an ancient kernel, compared to someone who just shoehorns a proprietary blob into whatever API exists.

How is Intel punished?

Posted Sep 5, 2007 22:18 UTC (Wed) by khim (subscriber, #9252) [Link]

Developing a driver for the new kernel and developing a binary blob for an ancient kernel in some "enterprise distribution" take more-or-less equal effort: in the first case you need to work with the community (and that requires time), in the second you need to fight the bugs and shortcomings of the old kernel. The proper driver for the new kernel is not easily adaptable to the old kernel - so in effect Intel is punished for supporting the open-source development model. I see no easy choice except one: treat the whole kernel as the HAL layer and upgrade it over time - even in enterprise distributions. Of course that requires testing - but testing is needed anyway, because the "old kernels" in enterprise distributions contain tons of backports...
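
The glue this implies is nothing exotic; an out-of-tree driver that has to span both a frozen enterprise kernel and a current one typically hides the differences behind version ifdefs. A minimal sketch of that pattern follows, with foo_device and the foo_register*() calls invented purely for illustration (only the LINUX_VERSION_CODE and KERNEL_VERSION macros are real). The point is that such a shim only works when an API merely changed shape; a driver built on a wholly new framework like mac80211 has nothing old to shim against.

    /* Sketch only: foo_device and foo_register*() stand in for some
     * subsystem interface; they are not real kernel APIs. */
    #include <linux/version.h>
    #include <linux/module.h>
    #include <linux/init.h>

    struct foo_device { int id; };                  /* hypothetical device handle   */
    extern int foo_register(struct foo_device *);   /* hypothetical "old" interface */

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 22)
    /* Hypothetical "new" interface, present only in recent kernels. */
    extern int foo_register_ex(struct foo_device *dev, unsigned int flags);
    #else
    /* Frozen enterprise kernel: fake the newer call on top of the old one. */
    static inline int foo_register_ex(struct foo_device *dev, unsigned int flags)
    {
            (void)flags;            /* the new capability is quietly dropped */
            return foo_register(dev);
    }
    #endif

    static struct foo_device my_dev = { .id = 1 };

    static int __init mydrv_init(void)
    {
            /* One source tree covers old and new kernels - but only because
             * the interface merely grew an argument. */
            return foo_register_ex(&my_dev, 0);
    }
    module_init(mydrv_init);
    MODULE_LICENSE("GPL");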

Vendor-distributor cooperation

Posted Sep 11, 2007 19:36 UTC (Tue) by hazelsct (guest, #3659) [Link]

The obvious solution is to work together with the enterprise distributor. For example, provide a driver which works with the latest kernel, and split the cost of a backport with the distributor(s). For Ubuntu Dapper, the kernel is 2.6.16, and a long series of 2.6.16.n releases have had various fixes and backported drivers.

This in turn would create an incentive for distributors to coordinate their enterprise kernel releases, in order to reduce everybody's backporting costs. This idea isn't new; Ubuntu developers have been requesting it for some time...

Of course, massive API changes such as mac80211 wreak havoc with such a process, but they don't happen with every single kernel release.

Versioned APIs in the kernel?

Posted Sep 5, 2007 20:19 UTC (Wed) by mjthayer (guest, #39183) [Link]

I still wonder - if the kernel had versioned APIs, with a mechanism for providing several versions in parallel in one kernel, couldn't enterprise distributions apply the energy they now put into backporting to creating and maintaining such APIs, and regression testing them to make sure old modules continued to work? Then they could update to more recent kernels during the lifetime of a distribution, although probably never the very newest, and actually use more or less vanilla kernels rather than the things they have now. It needn't affect other users if the additional APIs were a compile-time option.

One of these months, when I finally have some free time again, I will take a look at implementing that, although I'm not very optimistic about it being accepted into the kernel...
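
To make the idea a bit more concrete, here is a rough sketch of what one such parallel, compile-time-selected API version might look like on the kernel side - essentially the mirror image of the driver-side ifdef shim sketched in an earlier comment, but living in the kernel and maintained by whoever enables it. CONFIG_FOO_API_V1, foo_device and the foo_register*() calls are all invented for illustration; nothing here is an existing kernel facility.

    /* Sketch only: a current interface plus an optional, separately
     * maintained legacy wrapper that a distributor could enable. */
    struct foo_device { int id; };

    /* Current (v2) interface - the one mainline development cares about. */
    int foo_register_v2(struct foo_device *dev, unsigned int flags);

    #ifdef CONFIG_FOO_API_V1
    /*
     * Legacy (v1) interface, compiled in only when the (hypothetical)
     * config option is set; modules built against the old signature keep
     * working, and the enterprise distributor owns the regression testing
     * of this wrapper rather than of a pile of backports.
     */
    static inline int foo_register(struct foo_device *dev)
    {
            return foo_register_v2(dev, 0);
    }
    #endif

The hard part, of course, is not writing such wrappers but keeping them correct as the underlying implementation moves, which is where the regression testing mentioned above comes in.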

Versioned APIs in the kernel?

Posted Sep 5, 2007 22:22 UTC (Wed) by khim (subscriber, #9252) [Link]

This model does not work: both Windows and Solaris are trying very hard to make it work - yet service packs break drivers there routinely. That's because kernel developers just cannot check whether a driver will be broken by this or that change unless they either have the hardware (and nobody has all possible hardware) or at least have the driver sources (and if they do have the driver sources, then why not just fix the driver?).

Versioned APIs in the kernel?

Posted Sep 6, 2007 6:43 UTC (Thu) by mjthayer (guest, #39183) [Link]

But would that work less well than what Red Hat is doing now - massively backporting new stuff to old kernels, breaking many things in the process and trying to fix them? In this case, the people who produce the closed-source drivers which they want to keep compatible can do a quick test with new kernel versions and report back to Red Hat or SUSE saying what needs to be fixed. It actually makes their life easier, as they only have to test against one tree, not every enterprise kernel. And once all regressions which interest the enterprise distributor have been fixed in a given kernel version, or they have created compatibility APIs to solve them (and they are only interested in binary modules which have been certified against their distribution), they can add that kernel, which by now has probably had several micro-releases, to their distribution.

Versioned APIs in the kernel?

Posted Sep 6, 2007 8:14 UTC (Thu) by khim (subscriber, #9252) [Link]

It actually makes their life easier, as they only have to test against one tree, not every enterprise kernel.

Only the people who produce binary drivers benefit: the Red Hat folks will still need to backport features, mainstream developers will be forced to keep the "compatibility layer" unbroken, etc.

P.S. Do you really think Red Hat keeps kernels bug-for-bug compatible over the life of an enterprise distribution? Then you are mistaken: it's not uncommon to require a new version of binary drivers after a RHEL kernel upgrade. They only keep the old version and backport features because all the other components are handled in this way too...

Versioned APIs in the kernel?

Posted Sep 9, 2007 17:45 UTC (Sun) by skybrian (guest, #365) [Link]

The whole point of having a stable API that conforms to a standard is to support interoperability with other software when you *don't* have source for it. You shouldn't need to test that specific apps work. A regression test suite that shows that you conform to the standard should be enough. If an app breaks because it's not following the standard, fix the app.

This is how application APIs and network protocols work. Why can't it work for device drivers?

Of course, due to the wide variety of devices and the introduction of new kinds of devices, no single standard can cover everything and no standard will last more than a few years without being replaced. But it still seems feasible to define a driver API for each type of device and support it for multiple versions of the kernel, until it eventually becomes so old that it needs to be replaced. In a typical kernel release, some of the APIs will change, but most should stay the same.

Versioned APIs in the kernel?

Posted Sep 10, 2007 6:49 UTC (Mon) by mjthayer (guest, #39183) [Link]

Seems reasonably feasible to me :) However, it seems to be something which the kernel developers clearly do not want to have, and since it is their kernel, they make the decisions - short of someone forking the kernel.

My idea was more a mechanism through which people could - if they liked - provide compatibility APIs when the main ones changed. Something which they would then be responsible for themselves, albeit inside the kernel, and which would be an opt-in feature at compile-time. Regression testing is definitely necessary in this case, as the kernel APIs are not so clearly defined that you can tell without testing whether something has changed or not. And the suite is whatever you wish to support.

No, I do not like the common FOSS idea that "the source code is the specification" either, at least not for anything approaching public interfaces, as it makes it much more difficult to write clean code, and much more code ends up being of the "works for me" kind. But as I said, it is not my decision, and I recognise that the work the kernel maintainers do gives them (and the people who pay them :) ) the right to lay down the rules.

Versioned APIs in the kernel?

Posted Sep 22, 2007 7:09 UTC (Sat) by HalfMoon (guest, #3211) [Link]

A regression test suite that shows that you conform to the standard should be enough. If an app breaks because it's not following the standard, fix the app. ... This is how application API's and network protocols work. Why can't it work for device drivers?

You seem to assume that kernel developers are *gratuitously* breaking driver interfaces. Not true!

In fact, the deep assumption you're making -- the unreasonable one -- is that a rock solid definition of the "standard" is possible, and reasonable. Neither is true.

Have you ever tried to come up with such a definition? I've worked on several. Let me assure you, it's harder than actually implementing the code that purports to meet such specs. Several times harder. Doing a good job means considering lots of implementation strategies, and ensuring that many of them are possible. And it also involves making tradeoffs, and noticing that certain things "must" be left unspecified unless you want to rule out important implementation strategies (or spend time implementing several of them and documenting the results).

And writing a test suite ... talk about hard and un-glamorous. If it's good enough to be worth using (i.e. strong coverage of the whole interface, with both positive and negative tests, shaking out timing bugs and races, etc.) then it must have been written by a *GOOD* developer. Even just fair-to-middling developers aren't much help there.

There really aren't many people who can do that kind of work, AND write. Linux may be very lucky if it's got two engineers who can do that in a given area. Taking one of them off making hardware work better, and putting them onto less rewarding spec and testsuite work ... sounds like a lose all around. Software companies can sometimes do that, if they've got a surplus of good engineers. Linux does not have such luxuries.

And especially with drivers ... you're almost certain to find that the interfaces need to change because of stuff hardware designers are doing, or because after you finally understand the problem well enough to have solid code, you find that your initial interfaces have serious breakage which prevents sane support for the next chunk of hardware. (Oh, and other parts of the kernel now let you do some things better...)

THAT is why it can't work for device drivers. The problem really is not that simple. And developers really are not that available.

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 5, 2007 21:04 UTC (Wed) by ibukanov (subscriber, #3942) [Link]

I wonder if Intel can sue the vendors that distribute binary-only modules with their hardware. Given the size of Intel's contributions to the kernel, the company may hold enough copyright in the Linux sources to argue GPL violations by those vendors.

In the enterprise market, even the threat of such a lawsuit can be enough to stop sales of the affected hardware.

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 5, 2007 21:31 UTC (Wed) by smitty_one_each (subscriber, #28989) [Link]

The actual problem seems to be the moving-target nature of kernel APIs, and the burden placed upon a hardware vendor to target not only kernel versions, but the various incompatible enterprise distros.

Feeding the sharks (trying to come up with a legal remedy) would benefit no one but the true enemy, the proprietary OS vendors.

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 5, 2007 21:58 UTC (Wed) by ibukanov (subscriber, #3942) [Link]

I interpreted the article as saying that the problem was vendors that shipped binary-only modules, assuming they could get away with hiding their driver sources from competitors. The fact that this is only realistically possible with stable enterprise kernels just shows the niche where GPL violators flourish.

But maybe you're right, and the real problem the kernel developers are facing is that vendors prefer to target the old enterprise kernels rather than the latest stuff from kernel.org, and that this works for them. Even with free drivers the problem would be exactly the same, that is, new hardware only working with a specific old version of Linux.

Support for tainted kernel

Posted Sep 5, 2007 22:48 UTC (Wed) by ibukanov (subscriber, #3942) [Link]

Part of the problem comes from the fact that distributors of enterprise kernels are willing to support customers who install binary-only modules. If customers knew that loading those blobs meant cancelling the support contract for the kernel, they might think twice before using such hardware, right?

Do Novell/Red Hat provide such support?

Support for tainted kernel

Posted Sep 6, 2007 1:25 UTC (Thu) by AJWM (guest, #15888) [Link]

If I recall correctly from the last time I had to deal with this, Red Hat provides limited support for tainted kernels, restricted to modules from just a few vendors with which they presumably have some kind of partnership.

The incident that comes to mind involved EMC PowerPath modules on an otherwise untainted RHEL kernel (a few non-stock drivers, but they were GPLd). (The modules weren't the cause of the problem; it turned out to be a known kernel bug that already had a fix, but it was hard to reproduce, being load-dependent, and 32GB core dumps are a pain to ship around.)

Support for tainted kernel

Posted Sep 6, 2007 16:56 UTC (Thu) by BenHutchings (subscriber, #37955) [Link]

With Novell SLES all third-party modules are unsupported. It won't even auto-load GPL'd third-party modules unless you change a configuration file.

Support for tainted kernel

Posted Sep 6, 2007 19:13 UTC (Thu) by ibukanov (subscriber, #3942) [Link]

Then I wonder who installs those proprietary binary-only modules on enterprise kernels, and why this is a problem at all. Surely it can't be Novell/Red Hat customers.

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 6, 2007 6:43 UTC (Thu) by grahammm (guest, #773) [Link]

Why do the enterprise distributions have to take so long for the new drivers etc to be introduced?

Much earlier in my career I worked as a systems programmer for an ICL mainframe user. The operating system releases came on a six-monthly cycle with only two 'back' versions supported, so we were forced to upgrade at least every 18 months (though in practice we normally applied each release, after extensive testing, within two or three months). So why can the enterprise Linux distributions not work to a faster release schedule?

RHEL Updates - Respins with updated packages including new and updated drivers

Posted Sep 6, 2007 15:40 UTC (Thu) by dowdle (subscriber, #659) [Link]

I don't get this. Red Hat provides periodic Update releases every 3-4 months (aka respins). I've read in release notes for the updates countless times how various bugs were fixed, drivers were updated to newer versions, and additional drivers were added... sometimes backported when needed.

Without seeing a case study and what happened... I don't really have any proof that there is a problem.

I'm sure Red Hat doesn't backport every driver added to newer kernels... but if it is something their customers want, they do... or at least that is my understanding.

With regards to Red Hat customers using closed / binary only drivers... everything I've read shows that Red Hat would prefer not to support those at all... as they highly encourage using open source only software.

RHEL Updates - Respins with updated packages including new and updated drivers

Posted Sep 10, 2007 2:47 UTC (Mon) by mdomsch (guest, #5920) [Link]

RHEL scheduled updates are occurring closer to every 6 months, not 3-4.
SLES scheduled updates are occurring closer to every year.

Both provide mechanisms, "KMODs", to distribute backported drivers built on the older kernel trees. <shameless plug> DKMS can be used to generate KMODs. </shameless plug>

LCE: Linux, hardware vendors, and enterprise distributors

Posted Sep 6, 2007 17:39 UTC (Thu) by bockman (guest, #3650) [Link]

Linux is currently used mostly on servers, where having drivers for the latest hardware is seldom an issue. This is why the commercial distributors can get away with the policy of freezing the kernel for two or three years - which obviously saves them many troubles.

The major commercial desktop-oriented distribution (Ubuntu) has a release cycle of six months; while not optimal for getting new hardware support, six months seems to me good enough for its current user base (which for the most part is composed of people who know how to compile their own kernel).

So ... is this really a big issue? It would be if Linux had a large base of consumer PC users, but for now ...

Ciao
----
FB


Copyright © 2007, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds