
A constantly usable testing distribution for Debian


September 22, 2010

This article was contributed by Raphaël Hertzog

Debian's "testing" distribution is where Debian developers prepare the next stable distribution. While this is still its main purpose, many users have adopted this version of Debian because it offers them a good trade-off between stability and freshness. But there are downsides to using the testing distribution, so the "Constantly Usable Testing" (CUT) project aims to reduce or eliminate those downsides.

About Debian unstable & testing

Debian "unstable" is the distribution where developers upload new versions of their packages. But some packages are frequently not installable from unstable due to changes in other packages or library transitions that have not yet been completed.

Debian testing, by contrast, is managed by a tool that ensures the consistency of the whole distribution: it picks updates from unstable only if the package has been tested enough (usually 10 days), is free of new release-critical bugs, is available on all supported architectures, and doesn't break any other package already present in testing. The release team controls this tool and provides "hints" to help it find a set of packages that can flow from unstable to testing.
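To make those criteria concrete, here is a minimal sketch, in Python, of the kind of checks involved. It is an illustration only: the real migration tool (known as "britney") is far more involved, and the Candidate structure, its field names, and the helper logic below are assumptions made for this example, not Debian's actual code. The urgency-to-age table uses the commonly cited defaults (10, 5, and 2 days for low, medium, and high urgency).

    from dataclasses import dataclass, field

    # Minimum days a package must spend in unstable, per upload urgency
    # (commonly cited defaults; the release team can override them with hints).
    MIN_AGE_DAYS = {"low": 10, "medium": 5, "high": 2}

    @dataclass
    class Candidate:                      # hypothetical model of a package in unstable
        name: str
        urgency: str
        days_in_unstable: int
        new_rc_bugs: int                  # RC bugs present in unstable but not in testing
        built_on: set = field(default_factory=set)
        breaks_other_packages: bool = False

    def may_migrate(pkg: Candidate, testing_archs: set) -> bool:
        """Return True if the package looks eligible to flow from unstable to testing."""
        if pkg.days_in_unstable < MIN_AGE_DAYS.get(pkg.urgency, 10):
            return False                  # not tested long enough for its urgency
        if pkg.new_rc_bugs > 0:
            return False                  # would introduce new release-critical bugs
        if not testing_archs <= pkg.built_on:
            return False                  # missing builds on a supported architecture
        if pkg.breaks_other_packages:
            return False                  # would make other testing packages uninstallable
        return True

    print(may_migrate(Candidate("hello", "low", 12, 0, {"i386", "amd64"}),
                      {"i386", "amd64"}))   # -> True

The real tool additionally follows hints from the release team and handles whole groups of interdependent packages, which this sketch ignores.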

Those rules also ensure that the packages flowing into testing are reasonably free of show-stopper bugs (like a system that doesn't boot, or X that doesn't work at all). This makes testing very attractive to users who like to regularly get new upstream versions of their software without dealing with the biggest problems associated with them. Yet several Debian developers advise people not to use testing. Why is that?

Known problems with testing

Disappearing software: The release team uses testing to prepare the next stable release and, from time to time, removes packages from it. That is done either to ensure that other packages can migrate from unstable to testing, or because a package has long-standing release-critical bugs with no progress toward a resolution. The team will also remove packages on request if the maintainers believe that the current version of the software cannot be supported (security-wise) for two years or more. The security team also regularly issues such requests.

Long delays for security and important fixes: Despite the 10-day delay in unstable, there are always some annoying bugs (and security bugs are no exception) that are only discovered once the package has already migrated to testing. The maintainer might be quick to upload a fixed package to unstable, and might even raise the urgency to allow the package to migrate sooner, but if the package gets entangled in a large ongoing transition, it will not migrate until the transition is completed. Sometimes it can take weeks for that to happen.

The delay can be avoided by uploading directly to testing (through testing-proposed-updates), but that mechanism is almost never used except during a freeze, when targeted bug fixes are the norm.

Not always installable: With testing evolving daily, updates sometimes break the last installation images available (in particular netboot images that get everything from the network). The debian-installer (d-i) packages are usually quickly fixed but they don't move to testing automatically because the new combination of d-i packages has not necessarily been validated yet. Colin Watson sums up the problem:

Getting new installer code into testing takes too long, and problems remain unfixed in testing for too long. [...] The problem with d-i development at the moment is more that we're very slow at producing new d-i *releases*. [...] Your choices right now are to work with stable (too old), testing (would be nice except for the way sometimes it breaks and then it tends to take a week to fix anything), unstable (breaks all the time).

CUT's history

CUT has its roots in an old proposal by Joey Hess that introduced the idea that the stable release is not Debian's sole product and that testing could become, with some work, a suitable choice for end users. Nobody took on that work, and there has been no visible progress in the last three years.

But recently Joey brought up CUT again on the debian-devel mailing list, and Stefano Zacchiroli (the Debian project leader) challenged him to set up a BoF on CUT for DebConf10. It turned out to be one of the most heavily attended BoFs (video recording is here), so there is clearly a lot of interest in the topic. There's now a dedicated wiki and an Alioth project with a mailing list.

The ideas behind CUT

Among all the ideas, two main approaches have been discussed. The first is to regularly snapshot testing at points where it is known to work reasonably well (those snapshots would be named "cut"). The second is to build an improved testing distribution tailored to the needs of users who want a working distribution with daily updates; its name would be "rolling".

Regular snapshots of testing

There's general agreement that regular snapshots of testing are required: it's the only way to ensure that the generated installation media will continue to work until the next snapshot. If tests of the snapshot do not reveal any major problems, then it becomes the latest "cut". For clarity, the official codename would be date-based: e.g. "cut-2010-09" would be the cut taken during September 2010.

While the frequency has not been fixed yet, the goal is clearly to be on the aggressive side: at the very least every 6 months, but every month has been suggested as well. In order to reach a decision, many aspects have to be balanced.

One of them (and possibly the most important) is security support. Given that the security team is already overworked, it's difficult to put more work on their shoulders by declaring that cuts will be supported like any stable release. Having no official security support sounds bad, but it's not necessarily as problematic as one might imagine. Testing's security record is generally better than stable's (see the security tracker) because fixes flow in naturally with new upstream versions. Stable still gets fixes for very important security issues earlier than testing does, but on the whole there are fewer known security-related problems in testing than in stable.

Since it's only a question of time until the fixed version comes naturally from upstream, more frequent cut releases mean that users get security fixes sooner. But Stefan Fritsch, who used to be involved in the Debian testing security team, has also experienced the downside for anyone who tries to contribute security updates:

The updates to testing-security usually stay useful only for a few weeks, until a fixed version migrates from unstable. In stable, the updates stay around for a few years, which gives a higher motivation to spend time on preparing them.

So if it's difficult to form a dedicated security team, the work of providing security updates must be done by the package maintainers. They are usually quite quick to upload fixed packages to unstable, but tend not to monitor whether those packages migrate to testing. They can't be blamed for that, because testing was created to prepare the next stable release and there is thus no urgency to get the fix in as long as it makes it before the release.

CUT can help in this regard precisely because it changes this assumption: there will be users of the testing packages and they deserve to get security fixes much like the stable users.

Another aspect to consider when picking a release frequency is the amount of work associated with any official release: testing upgrades from the previous version, writing release notes, and preparing installation images. It seems difficult to do this every month. At that frequency it would also be impossible to have a new major kernel release for each cut (since they tend to come out only every 2 to 3 months), and the new hardware support that a kernel release brings is worthwhile to many users.

In summary, regular snapshots address the "not always installable" problem and may change maintainers' perception of testing so that they hopefully care more about security updates in that distribution (and in cuts). But they do not solve the problem of disappearing packages. Something else is needed to fix that problem.

A new "rolling" distribution?

Lucas Nussbaum pointed out that regular snapshots of Debian are not really a new concept:

How would this differentiate from other distributions doing 6-month release cycles, and in particular Ubuntu, which can already be seen as Debian snapshots (+ added value)?

In Lucas's eyes, CUT becomes interesting if it can provide a rolling distribution (like testing) with a "constant flux of new upstream releases". For him, that would be "something quite unique in the Free Software world". The snapshots would be used as a starting point for the initial installation, but the installed system would point to the rolling distribution, and users would then upgrade as often as they want. In this scenario, security support for the snapshots is not so important; what matters is the state of the rolling distribution.

If testing were used as the rolling distribution, the problem of disappearing packages would not be fixed. But that could be solved with a new rolling distribution that would work like testing but with adapted rules; the cuts would then be snapshots of rolling instead of testing. The basic proposal is to make a copy of testing and to re-add the packages that have been removed because they are not suited for a long-term release even though they are perfectly acceptable for a constantly updated release (the most recent example being Chromium).
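A toy sketch may help picture that seeding step. The code below, in Python, simply starts from testing's package list and re-adds, from unstable, the packages that were dropped only for release-suitability reasons. The package names, versions, and the idea of keeping an explicit "removed for release reasons" list are assumptions made purely for illustration; this is not an actual archive tool.

    # Hypothetical package lists: name -> version.
    testing = {"hello": "2.4-1", "apt": "0.8.0"}
    unstable = {"hello": "2.6-1", "apt": "0.8.5", "chromium-browser": "6.0.472-1"}

    def seed_rolling(testing, unstable, removed_for_release_reasons):
        """Start from testing, then re-add packages that were removed from testing
        only because they cannot be supported for a full stable cycle, taking
        their current version from unstable."""
        rolling = dict(testing)
        for pkg in removed_for_release_reasons:
            if pkg not in rolling and pkg in unstable:
                rolling[pkg] = unstable[pkg]
        return rolling

    print(seed_rolling(testing, unstable, {"chromium-browser"}))
    # {'hello': '2.4-1', 'apt': '0.8.0', 'chromium-browser': '6.0.472-1'}

After that initial seeding, the rolling suite would keep receiving updates through the same testing-style migration rules described earlier.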

Then it's possible to go one step further: during a freeze, testing is no longer automatically updated, which makes it inappropriate to feed the rolling distribution. That's why rolling would be reconfigured to grab updates from unstable (but using the same rules as testing).

Given the frequent releases, it's likely that only a subset of architectures would be officially supported. This is not a real problem because the users who want bleeding-edge software tend to be desktop users, mainly on i386/amd64 (and maybe armel for tablets and similar mobile products). This choice, if made, opens the door to even more possibilities: if rolling is configured exactly like testing but with only a subset of the architectures, it's likely that some packages would migrate to rolling before testing, where non-mainstream architectures may be lagging behind in auto-building (or have toolchain problems).

While being ahead of testing can be positive for users, it's also problematic on several levels. First, managing rolling becomes much more complicated because the transition management work done by the release team can't be reused as-is. Second, it introduces competition between the two distributions, which can make it more difficult to get a stable release out, for example if maintainers stop caring about the migration to testing because the migration to rolling has already completed.

The rolling distribution is certainly a good idea but the rules governing it must be designed to avoid any conflict with the process of releasing a stable distribution. Lastly, the mere existence of rolling would finally fix the marketing problem plaguing testing: the name "rolling" does not suggest that the software is not yet ready for prime time.

Conclusion

Whether CUT will be implemented remains to be seen, but it's off to a good start: ftpmaster Joerg Jaspert said that the new archive server can cope with a new distribution, and there's now a proposal shaping up. It may get going quickly, as there is already an implementation plan for the snapshot side of the project. The rolling distribution can always be introduced later, once it is ready. Both approaches complement each other and provide something useful to different kinds of users.

The global proposal is certainly appealing: it would address concerns about the obsolescence of Debian's stable release by making intermediate releases. Anyone needing something more recent for hardware support can start by installing a cut and follow the subsequent releases until the next stable version. And users who always want the latest version of all software can use rolling after having installed a cut.

From a user's point of view, there are similarities with Ubuntu's mix of normal and long-term releases. But on the development side the process would be quite different, and the constraints imposed by having a constantly usable distribution are stronger. With CUT, any wide-scale change must be designed so that it can happen progressively and transparently for users.





A constantly usable testing distribution for Debian

Posted Sep 22, 2010 19:33 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Personally, I'd prefer quicker Debian releases. Or better updates (kernel backports, X backports) for existing releases.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 20:00 UTC (Wed) by cjwatson (subscriber, #7322) [Link]

There's a good chance that CUT could serve a dual purpose of making it easier to prepare new stable releases. As many projects have found, if you have more-or-less releasable checkpoints every so often, then it's easier to prepare a better-than-usual one for your gold release. Contrariwise, if you get out of the habit, then it's harder to claw your way back to stability.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:27 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

I don't really think so. CUT is not going to be radically different than the current testing.

Debian Stable rapidly becomes woefully obsolete - it's usually about 3-4 years old. New hardware is not supported, new versions of infrastructure software (Python, Ruby, Java, etc.) are not supported and so on.

It's nice to run it on servers, but I'm not able to use it even on locked-down business desktops!

My favorite idea is splitting Debian into "Debian Core" and "Debian Universe". "Debian Core" will have a fairly rapid release cycle (yearly, perhaps) and will include only core infrastructure software (kernel, X, Python, gcc). And "Debian Universe" will contain everything else.

Kinda like Ubuntu's model of Multiverse or Arch's AUR.

Exaggeration

Posted Sep 22, 2010 21:50 UTC (Wed) by dbruce (guest, #57948) [Link]

"Debian Stable rapidly becomes woefully obsolete - it's usually about 3-4 years old."

http://en.wikipedia.org/wiki/Debian#Release_history

No, it has never been more than 3 years old. The only time it was ever over two years old was between July 2004 and June 2005 (due to the infamously-delayed Sarge release). Until 2002, Debian released a stable distro every year. Since Sarge, it has been just under two years between releases.

So in recent years, Stable has always been between zero and two years old, with the mean age being about a year.

DSB

Exaggeration

Posted Sep 22, 2010 22:09 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

"No, it has never been more than 3 years old."

You're forgetting the time spent in the pre-release freeze. For example, Lenny has Linux 2.6.26, which was released on 13 Jul 2008. So it's already 2 years old and will be about 3 years old when Squeeze is released.

Python 2.5.2 (in Lenny) was released on Feb 18 2008 - already more than 2.5 years old.

Exaggeration

Posted Sep 23, 2010 12:42 UTC (Thu) by dbruce (guest, #57948) [Link]

Good points - the stabilization process seems to add an awful lot of time.

DSB

Exaggeration

Posted Oct 5, 2010 13:19 UTC (Tue) by dererk (guest, #67491) [Link]

You are mixing a lot of real facts, but using them to suit your own purpose.

That is, following your reasoning, Red Hat Enterprise Linux offers a 4-year-old distribution (because they distribute the 2.6.18 kernel, which was released on 14 Oct 2006), or, in the same way, it's 6 years old, because it includes Python 2.4, released in 2004...

It's stable software; in software engineering that would basically mean it has been proven to work in most testing scenarios. Unfortunately, for some cases *that time* is too much.

I really think CUT would be a solution for them. Once again, thanks JoeyHess for the great tools you invent and code (along with etckeeper, debconf itself, and so on)!

Exaggeration

Posted Sep 22, 2010 22:20 UTC (Wed) by foom (subscriber, #14868) [Link]

Well, it depends on how you count. For example, Debian Squeeze is not yet released. But it's going to be way outdated at release time, if you take some specific high-profile examples:
- Python 2.6 (not 2.7, released Jul 3)
- Linux 2.6.32 (not 2.6.33-2.6.35, 2.6.33 released Feb 24)
- GCC 4.4 (not 4.5, released Apr 15)
- Firefox 3.5 (not 3.6, released Jan 21)
- Thunderbird 3.0 (not 3.1, released Jun 24)

So, by your measure, Squeeze is not yet 0 years old, but if you measure by the Firefox version included, it's already 8 months out of date and it hasn't even been released yet.

I don't have a major problem with that; I use Debian on all my machines -- stable (lenny) on most of them. And basically the only software I've upgraded on those is emacs23 and linux 2.6.32.

But it does seem somewhat of a shame that it takes so long to stabilize things and get a release ready after the freeze starts that much of the software is 6+ months out of date on the day of release. Maybe CUT will help with that.

Exaggeration

Posted Sep 23, 2010 15:04 UTC (Thu) by juliank (guest, #45896) [Link]

> Well, it depends on how you count. For example, Debian Squeeze is not yet
> released. But it's going to be way outdated at release time, if you take
> some specific high-profile examples:
> - Python 2.6 (not 2.7, released Jul 3)
Same for Ubuntu 10.04 and Ubuntu 10.10.

> - Linux 2.6.32 (not 2.6.33-2.6.35, 2.6.33 released Feb 24)
Well, 2.6.32 will be maintained longer than 2.6.33, 2.6.34, or 2.6.35; and makes much more sense for a Debian release.

> - GCC 4.4 (not 4.5, released Apr 15)
Same for Ubuntu 10.04 and Ubuntu 10.10; moving to a new GCC version is usually a bit complicated.

> - Firefox 3.5 (not 3.6, released Jan 21)
> - Thunderbird 3.0 (not 3.1, released Jun 24)
Mozilla stuff is generally a problem, as far as I know.

Exaggeration

Posted Sep 26, 2010 14:15 UTC (Sun) by pgquiles (guest, #70318) [Link]

>> - Python 2.6 (not 2.7, released Jul 3)
>Same for Ubuntu 10.04 and Ubuntu 10.10.

Ubuntu 10.10 already has Python 2.7

http://packages.ubuntu.com/search?keywords=python2.7

>> - GCC 4.4 (not 4.5, released Apr 15)
>Same for Ubuntu 10.04 and Ubuntu 10.10; moving to a new GCC version is >usually a bit complicated.

Ubuntu 10.10 already has gcc 4.5

http://packages.ubuntu.com/search?keywords=gcc-4.5

If Ubuntu can develop something quite stable with 6-month release cycles and 2-month stabilization cycles, why can't Debian try it? (openSUSE has a 9-month release cycle and it also works well for them.)

Exaggeration

Posted Sep 26, 2010 14:34 UTC (Sun) by juliank (guest, #45896) [Link]

> Ubuntu 10.10 already has Python 2.7
But it's not the default and not supported, so practically useless.

> Ubuntu 10.10 already has gcc 4.5
It's not the default, so it does not matter.

Debian has those packages as well, in experimental. In Ubuntu, there is no such thing as experimental, so it needs to be in maverick in order to be in Ubuntu.

Exaggeration

Posted Sep 30, 2010 17:49 UTC (Thu) by pboddie (subscriber, #50784) [Link]

But it's going to be way outdated at release time, if you take some specific high-profile examples:
- Python 2.6 (not 2.7, released Jul 3)

I suppose using something like Python 2.5 (as I do on the semi-supported Kubuntu 8.04 release) occasionally results in brushing up against code written needlessly against Python 2.6-or-later features, but quite a lot of that can be fixed quite quickly, especially if that code is limited to people doing stupid things with setuptools instead of just providing sane distutils stuff in their setup scripts.

Really, Python 2.6 is the launchpad release for people jumping to 3.x, with 2.7 being the successor in that regard, plus extra gravy.

Your other examples are somewhat better, although there's almost always a case to be made for holding back on the newer stuff, especially if adopting such stuff means several laps of the track for those having to integrate and test it with everything else.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 22:22 UTC (Wed) by foom (subscriber, #14868) [Link]

> New hardware is not supported

Actually that's not true. The stable kernel updates *do* include new hardware support, where it's possible to backport in a reasonable way. For example, the onboard ethernet card in my desktop (running lenny) wasn't supported in the original kernel, but is now.

For etch (the one before lenny), they even released a new upstream kernel partway through the stable cycle which could be optionally installed.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 8:03 UTC (Thu) by micka (subscriber, #38720) [Link]

Two months ago, I tried to install Squeeze on a new laptop. Neither the ethernet nor the wifi was supported, and that can make a netinstall very hard to do.
Both were supported in 2.6.33 but had not yet been added to 2.6.32.
Both were supported in 2.6.33 but were not yet added to 2.6.32.

Anyway, I always update to unstable right after the end of the testing install (read: minutes after). Testing is outdated as soon as it freezes (sometimes even before).

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 15:00 UTC (Thu) by foom (subscriber, #14868) [Link]

> Two months ago, I tried to install Squeeze on a new laptop. Neither the ethernet nor the wifi was supported, and that can make a netinstall very hard to do.
> Both were supported in 2.6.33 but had not yet been added to 2.6.32.

If you submit a bugreport, it might get added.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 7:00 UTC (Thu) by cjwatson (subscriber, #7322) [Link]

I didn't say that you would be able to use CUT everywhere you can use stable; that was not my point. My point was that the *process* of preparing regular CUTs would help us with releasing stable in a more timely fashion.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 19:37 UTC (Wed) by me@jasonclinton.com (guest, #52701) [Link]

The snapshots+rolling proposal makes me so happy I could squeal. Kudos to everyone involved.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 7:34 UTC (Thu) by gwittenburg (guest, #5080) [Link]

+1 on that one! :)

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 19:37 UTC (Wed) by nicooo (guest, #69134) [Link]

How does this compare to Mint and aptosid?

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 3:59 UTC (Thu) by a9db0 (subscriber, #2181) [Link]

aptosid is focused on being derived from Sid, which is the unstable branch of Debian. It's far more bleeding edge than testing (current kernel: 2.6.35). It's very nice for those of us who like to hang out on the slightly-hairier edge.

I currently use it on my desktop and laptop. I started on the laptop because I needed a very up-to-date kernel for hardware support, and I wanted to try an XFCE desktop. I liked it so much I moved my desktop to it.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 20:18 UTC (Wed) by dlang (guest, #313) [Link]

I like the rolling proposal, but I see no reason why it _must_ continue to get automated updates from unstable when testing is frozen.

I think it would be perfectly acceptable to let the rolling version freeze along with testing when a new release is being prepared.

if these freezes are long enough to cause real grief for people using rolling, then it probably means that the freeze was declared too early.

I don't think that you need upgrade instructions from one snapshot to another, the upgrade process is just the normal apt-get upgrade that you would do between snapshots. The value in updating the snapshots on a regular basis is that it will greatly reduce the number of packages that must be downloaded after the install to bring the system up to date.

I also think it's reasonable to state that packages in testing/rolling must be able to upgrade themselves from any previous version that was in testing/rolling in the last X months (where X is a relatively small number 3-6 for example) as well as from the last -stable release rather than requiring that packages be able to upgrade themselves from _any_ prior version. If someone isn't doing an apt-get upgrade at least every 6 months, they probably don't really want the rolling distro anyway.

I'm not saying that it's good to have packages that can't upgrade from some prior versions, just that it shouldn't be a show-stopper requirement to do so.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 20:28 UTC (Wed) by rfunk (subscriber, #4054) [Link]

Sounds like a good idea in theory, but then I'm not responsible for dealing with its complications.

I wonder how this would compare with sidux in practice.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:05 UTC (Wed) by clugstj (subscriber, #4020) [Link]

Does anyone run Debian "stable"?

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:24 UTC (Wed) by mpr22 (subscriber, #60784) [Link]

I only stopped doing so earlier this year. I can't remember what was annoying me enough to cause me to do so, however.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:41 UTC (Wed) by paravoid (subscriber, #32869) [Link]

Are you really asking that? Several people (and companies) do, on hundreds of thousands of machines.

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:46 UTC (Wed) by dlang (guest, #313) [Link]

yes they do, especially on servers.

they don't always stick to only the versions of the packages supplied by Debian though.

I run debian on a few hundred systems, but there are a half dozen or so packages that I consider critical and run more up-to-date versions of (the kernel being one of them)

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 22:31 UTC (Wed) by marcH (subscriber, #57642) [Link]

> Does anyone run Debian "stable"?

If not, who would run Redhat?

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 12:37 UTC (Thu) by jengelh (subscriber, #33263) [Link]

Over my dead body! :-D (Seriously, RHEL has already beaten Debian stable in terms of outdatedness.)

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 22:54 UTC (Wed) by copsewood (subscriber, #199) [Link]

I run Debian stable on a cheap virtual machine server and have done so for several years, achieving around 99.9% uptime (perhaps 3-4 hours downtime per year, and nearly all of this downtime is planned in advance). Not bad for a £15/month server.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 9:45 UTC (Thu) by algernon (guest, #11573) [Link]

I run stable on all my servers - for most of the time anyway. I usually start to switch my servers one by one to testing halfway through freezes, though.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 13:34 UTC (Thu) by Seegras (guest, #20463) [Link]

On servers. A lot of servers. Everything else (well yes, everything including RedHat, SuSE, *BSD, Solaris and of course #@!dows, except maybe Ubuntu LTS) is just a nuisance. You want them to work, and you want them to have all the necessary security-patches, but you don't want to fuss over them every day.

Otherwise, on my workstation and notebook, I use "unstable".

A constantly usable testing distribution for Debian

Posted Sep 24, 2010 18:04 UTC (Fri) by giraffedata (guest, #1954) [Link]

A lot of people also want their desktop and notebook to work without fussing over them every day, so run Debian stable there too. I do. Once, I found I wanted some features that were too new to be in Stable, but I decided to wait a year rather than risk a maintenance headache.

Rackspace rents virtual machines running any of about a dozen Linux OSes. Stable is the only Debian option it offers.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 21:21 UTC (Thu) by oak (guest, #2786) [Link]

Yes, on my desktop. At home and at work.

On my home laptop I run Debian old-stable[1], as the newer SW in Debian stable was too bloated, memory-wise, for it.

[1] Note: its PCMCIA network card broke before Lenny became stable, so I don't worry about remote exploits and as I don't have any valuable data on it, I've pasted the password on its cover. I use it fairly rarely, mainly to do some C/Python coding & TeX writing when not at home. If somebody steals it, they are going to be responsible for recycling it. :-)

A constantly usable testing distribution for Debian

Posted Oct 6, 2010 0:55 UTC (Wed) by arpadapo (guest, #70478) [Link]

I do, for instance.

I have two computers, one is a desktop with Lenny (stable) and the other is a laptop with Squeeze (testing). However, there are several virtual machines on both, so I can actually use something newer whenever I want. Or when I have to care about security more than usual. (Good example is web browsing sometimes from Sid in VirtualBox, using the latest version of Firefox plus the necessary addons.)

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 21:43 UTC (Wed) by ms (subscriber, #41272) [Link]

Weird - I've been using debian unstable for about 9 years now on all my desktops and some servers. I very rarely have any problems.

Now admittedly, my requirements are light - ratpoison, mutt, xterm, chrome (was firefox) and emacs are pretty much all I actually require, and I've only recently taken to running gnome in the background.

Given I have little experience of where these constantly breaking packages come from, can people enlighten me? What does break so frequently? - is it just the latest shiny stuff or have I just been very very lucky?

A constantly usable testing distribution for Debian

Posted Sep 22, 2010 22:31 UTC (Wed) by SiB (subscriber, #4048) [Link]

Once or twice a year I had to fix something on a text console after an upgrade. Only once ever I had to init=/bin/bash to recover from some breakage. I try to avoid an upgrade if I know I need the computer during the next few days. And I never upgrade all computers at once. So, Debian sid is pretty stable, from my point of view.

Agreed, unstable gives me very few problems

Posted Sep 30, 2010 7:05 UTC (Thu) by cypherpunks (guest, #1288) [Link]

And I've been using it on production servers since 2000. The biggest pain I remember is CUPS. It's still an opaque Windows-inspired piece of monolithic crap, and there have been month-long periods when new releases didn't work for mysterious undiagnosable reasons.

But other than that, I upgrade a few times a week and have only had a few glitches.

Bleeding-edge is "experimental". I've used packages from there occasionally.

Problems with clusters of related packages

Posted Sep 23, 2010 0:57 UTC (Thu) by JoeBuck (subscriber, #2330) [Link]

Back when I tried running Debian testing, I had major difficulties whenever there was a Gnome transition (I think KDE had the same issue), in that packages would trickle in individually, resulting in a mostly unusable desktop because Gnome wouldn't really run with an arbitrary mix of older and newer packages with a few left out, regardless of what the dependencies say. It's been a while so maybe the Debian folks have figured out how to make this work better, but I somehow doubt it. Probably the only way to really get it right is to identify groups of packages that have to all move into testing together, even if the raw .deb dependencies say that there is more flexibility.

Problems with clusters of related packages

Posted Sep 23, 2010 13:10 UTC (Thu) by Np237 (guest, #69585) [Link]

Please don't hesitate to report such issues. It often happens that we miss some of the inter-package dependencies in unstable, and they only get noticed when the package enters testing.

This is where testing migration beats us hard, since the packages with fixed dependencies will not migrate until their dependencies do, so testing users remain screwed.

I feel this is one of the major challenges that needs to be dealt with by CUT.

A constantly usable testing distribution for Debian

Posted Sep 23, 2010 16:24 UTC (Thu) by gidoca (subscriber, #62438) [Link]

Rolling releases aren't quite that unique. Gentoo and Arch have had them for a long time, and I wouldn't want to use a distro on a desktop that doesn't have rolling releases. I really hope Debian decides in favor of them.

A constantly usable testing distribution for Debian

Posted Sep 27, 2010 14:46 UTC (Mon) by misiu_mp (guest, #41936) [Link]

They do, don't they? I thought Arch had this kind of release model, but I wasn't sure.

I definitely like the ability to upgrade software to the next great version, instead of being stuck with an old and outdated one, however secure and stable it might be. This is especially valid for typical user-level applications where features are important, such as OO.org, Firefox, Eclipse, GIMP, games, multimedia, version control clients, etc. These are usually the leaves of the dependency tree and can easily be upgraded. If they break, they don't affect other packages and can simply be replaced with new versions.
It is usually less important to be up to date with the core packages such as the kernel, GNOME, glib, D-Bus and such. For those it is more important to have stable versions. As long as it is possible to use the latest features of my user programs for work or whatever, a stable core is a good core.

That sounds like a good idea for a distribution - cutting edge user programs (with several versions to choose from) and a stable, hardened system core.

A constantly usable testing distribution for Debian

Posted Sep 24, 2010 15:09 UTC (Fri) by danieldk (guest, #27876) [Link]

I would love this, because it would be easier to recommend Debian to less technical users. I never really recommend that friends or family use Debian Stable (since they expect newer applications), and I wouldn't dare to recommend testing or unstable to them. So Ubuntu is usually the best compromise. But having a constantly usable testing distribution would put Debian back as an option.

Constantly Updating Testing

Posted Sep 30, 2010 7:41 UTC (Thu) by pkr (guest, #50467) [Link]

This doesn't sound very attractive to me. I'm running Debian/stable on one of my desktops and switched to Debian/testing just recently on another because I wanted a newer version of PyQt. But switching to testing was no fun. The next time I want a newer version of some software than the one installed, I will just install it in /usr/local and remove the installed, outdated package. I will do that for any other software until only essential packages are left.

It looks like Debian doesn't know what to do with releases: they have volatile, backports, and numerous unofficial projects (like debian-desktop.org) trying to solve the problem they have with releases. This proposal sounds like a good solution, but only if they would also give up trying to get a real release ready and leave that to others like Ubuntu.

A constantly usable testing distribution for Debian

Posted Sep 30, 2010 18:19 UTC (Thu) by ummmwhat (guest, #54087) [Link]

Finally!
I'm not sure how it took 17 years to realize this obvious need (perhaps even the most basic one!).

I think it's simply silly that Windows users can just download new software, while Linux users doing distribution updates (which is what they are supposed to do) have to wait up to 6 months for new software.

The solution is simple: try to trust upstreams.
If upstream releases a new stable version, then assume it's a _stable_ version and thus _immediately_ include it as an update to the stable distribution.

If any upstream is caught releasing "stable" releases that are not suitable for being updates to stable distributions, then _complain_ to upstream loudly, _help_ them get better QA policies, and delay their updates as is done now until the process issues are fixed.

For security updates, it's also simple: just use the update provided by the upstream.
If upstream refuses to release a stable version with the security update, complain, help and delay updates as above.

In other words, just stop attempting to duplicate stabilization work in every distro by forking all packages, and instead report the problems upstream and help them make better releases.

A constantly usable testing distribution for Debian

Posted Oct 5, 2010 13:31 UTC (Tue) by BackSeat (guest, #1886) [Link]

If upstream releases a new stable version, then assume it's a _stable_ version and thus _immediately_ include it as an update to the stable distribution.

The reason we run Debian Stable on a large number of servers is precisely because Debian does not make such assumptions.

That sort of thing may be OK for a desktop distribution, but when running customer critical applications on a server in a data centre, stability and security are far more important than "the latest version".

Major advantage for bug reporting

Posted Sep 30, 2010 21:26 UTC (Thu) by Richard_J_Neill (guest, #23093) [Link]

This would be a fantastic solution to the bug-reporting dilemma, because it shortens the cycle.

Consider the following case, of a technically skilled user running CUT.

1. User finds bug/issue/missing feature/suggestion in a particular app.
2. User files bug report.
#Because this is CUT, not stable, the bug report is valid and useful
#to the developers, without them having to utter the usual,
#understandable "but are you running the latest code".
3. Bug gets fixed in upstream.
#User gets the bugfix in a reasonable timeframe; this means he benefits
#from better software, and it's highly motivating for him to continue
#being involved.

Contrast 2 other, more typical cases.

(a) Stable distros. Most showstopper bugs are ironed out. But there are always some that remain. Frequently, these are fixed upstream, but never backported. And if the user files a bug report, the devs often find it not-useful.
#User is never able to run "perfect" software; he lives with one set of
#bugs for 6 months, then upgrades distro version, and gets another set.

(b) Unstable distros. These are unusable as a daily system for the vast majority of users, because there is frequent critical breakage.
#The majority of the talented Linux "eyeballs" are thus dissuaded from
#working where they can do the most good.

Delta .debs

Posted Sep 30, 2010 21:34 UTC (Thu) by Richard_J_Neill (guest, #23093) [Link]

One other desirable condition for this to work (imho) is to support deltas (i.e. patches) rather than full .debs.

For example, a ten-line code change in one of the core libraries can snowball into updates of 50 packages and a 200 MB download of .deb files, even though the actual end result could be established with rsync in a few MB.

This is expensive and time-consuming for mirrors and users alike.

Suggestion: the user currently has version example-2.5.3 installed, and the latest release is example-2.5.4.

Current process: download example-2.5.4.deb.

Better suggestion:
- User keeps the previous 2.5.3.deb (disk space is cheap).
- Mirror has example-2.5.4.deb AND example-2.5.3_2.5.4.patch
- User then downloads the much smaller .patch file, and re-generates the latest .deb locally.
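For illustration, here is a rough Python sketch of what the client side of that last step could look like. It assumes a hypothetical mirror layout in which example_2.5.3_2.5.4.patch is a bsdiff-style patch between the two .debs and the mirror also publishes a checksum of the new .deb; the bspatch invocation (bspatch OLD NEW PATCH) is the standard one, but all the file names and the checksum source are made up for this example.

    import hashlib
    import subprocess

    def rebuild_deb(old_deb, patch_file, new_deb, expected_sha256):
        # Regenerate the new .deb locally from the old .deb plus the small patch.
        subprocess.run(["bspatch", old_deb, new_deb, patch_file], check=True)
        # Verify the rebuilt file against the checksum published by the mirror
        # (assumed), so the result is byte-identical to the real package.
        with open(new_deb, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected_sha256:
            raise ValueError("rebuilt .deb does not match the published checksum")

    # Hypothetical usage:
    # rebuild_deb("example_2.5.3.deb", "example_2.5.3_2.5.4.patch",
    #             "example_2.5.4.deb", "<sha256 published by the mirror>")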

Delta .debs

Posted Oct 3, 2010 10:10 UTC (Sun) by riteshsarraf (subscriber, #11138) [Link]

Debian already has provision for that.
http://debdelta.debian.net/

A constantly usable testing distribution for Debian

Posted Oct 11, 2010 4:03 UTC (Mon) by crimsondebian (guest, #70556) [Link]

It would be worthwhile to take a look at Parsix Linux (www.parsix.org). It is based on testing, but they keep their own copy of the repository on their servers, freeze it, test it, and finally make a release. I think it is a good combination too: making a cycle-release distro based on a rolling-release one.


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds