
Twobuntu talk

I was honored to present my ideas about Ubuntu & Debian at Debconf 7, with a number of Ubuntu and Debian developers present, including Mark (first time I saw him in person) and DPL Sam Hocevar. There was no slide projector, so I had to read my slides, even though everyone says you shouldn’t do that… 🙂

I was very impressed with the quality of the people at Debconf. I met lots of smart geeks, and even several anthropologists who study Debian as a social organization. You do not ship an operating system by accident, just as you don’t build an airplane by accident.

Here are my slides.
Here is a link to all of the Debconf 7 videos.
Here is a link to my talk, and the subsequent discussion.

A couple of thoughts from the discussion after the slides:

One of the big pieces of confusion is that Mark believes that publishing patches in multiple, granular formats is helpful to Debian. Lars Risan, who had a beer with Mark after my talk, said that Mark strongly disagreed with my analogy of throwing code over the wall, and that Ubuntu is doing everything technically possible to make its changes available to Debian. However, a major point in my talk is that throwing patches over the wall is not helpful, because the person on the other side needs to get up to speed on the patch. You can’t just hand someone hundreds or thousands of lines of code: that person will have to take time to learn what is going on (to integrate the changes, fix any problems, etc.), and this takes a lot of time, perhaps as much time as if the Ubuntu patch didn’t exist.

This is why it took Debian many months to integrate modular X even though it had Ubuntu’s patches. Because it takes so much time, Ubuntu’s patches are, in practical terms, not helpful to Debian. The format, frequency, etc. of the patches don’t matter; the cost is the time it takes to learn something. If two people in different teams are each learning the same thing, they are not “standing on the shoulders of giants” but re-inventing and re-learning from scratch, just like in the old, dark proprietary software days. I believe that, even though I spent some minutes on this topic, Mark did not understand this concept. Perhaps he would disagree with the scope of the problem, etc., but he would need to understand it first to disagree with it.

One of the points in my talk is that Ubuntu is exploiting a loophole in the GPL. You can of course fork code and do with it what you want, but the spirit of the GPL is to cooperate, to work in the same codebase, and to fork only under extreme circumstances. Two examples I cited are the Linux kernel and Wikipedia. Jonathan Riddell has responded that Ubuntu is not exploiting a loophole in the GPL because he disagrees that there is only one Wikipedia and one Linux kernel. However, he doesn’t explain what he means. Perhaps he thinks there are multiple Linux kernels because each distro ships its own. Along these lines, Mark at one point responded to my point that there is only one kernel by asking: “Which one is that, the Red Hat kernel?” Only if Ubuntu had at least considered using the Red Hat kernel would his answer have been a substantive rebuttal. The larger point is that the various distro kernels differ from the mainline kernel mostly because they contain backports of fixes and other minor tweaks. These changes are very small compared to the chasm of the separate buglist, greatly divergent code, etc. which exist between Ubuntu and Debian. Maybe I should have said that there is “one Linux kernel team and one Wikipedia team.” However, I see these statements as largely synonymous.

One of the points I made is that Ubuntu did not show up to Debconf with a list of workitems for Debian: Ubuntu takes whatever it can get from Debian, but any Debian limitations it just fixes on its own. Mark responded by saying that the idea that Ubuntu should demand a list of workitems from Debian is a misunderstanding of the way free software works. However, I believe he is nitpicking my choice of the word “workitems.” My point is that much of the design and feature work that happens in Ubuntu goes on in a separate community and world. If the cross-organizational collaboration were healthy, Ubuntu would come to Debconf with a list of things like: “One of Ubuntu’s biggest problems is (x). How can we work together on this problem?” This doesn’t happen because Ubuntu is attacking problems on its own, without the deep and rich consultation that would happen if the communities were together. Another downside of this situation is that the center of gravity in a number of areas is moving away from Debian.

Mark believes that Ubuntu is good for Debian, because so many people use apt-get. This is a bad way of measuring “good.” What about quantifiable metrics? Here is one: I would measure something as good for Debian if it increases the number of DDs. Anthony Towns told me here that the number of DDs has not dramatically increased in the last 3 years, and the number of new maintainers joining every year is even slightly decreasing. Therefore, this is one metric which demonstrates that Ubuntu has not been good for Debian. Mark says that he thinks it would be great if Debian had 10,000 DDs, but if the Debian userbase is not growing, how is it going to get there?!

Mark said that: “Ubuntu could have derived from Red Hat, SuSE, Gentoo…” However, Mark did not do this; instead he forked Debian and hired a number of its best developers. Furthermore, I disagree with the premise that he needed to derive from any codebase at all. A major theme of my talk is that what makes Debian so great is that it supports many hardware platforms and contains many software packages, built to work together. Why could his improvements not be done directly in Debian, just as HP and many other companies and interests have done? Finally, if Mark thinks he could have had so much success deriving from any other distro, then Debian is not the special thing he claims it to be.

A number of Debianites agreed with me on many aspects of these issues, but have given up trying to convince Mark and company. Several told me that they were worried about the center of gravity shifting away from Debian, for example. I believe a reasonable compromise to minimize the damage to the community is to let users use Ubuntu, but to encourage all developers to join Debian. I met someone working on MOTU games, and his group realized on its own that doing their work directly in Debian was a better approach.

A final thought: if Debian took all of Ubuntu’s patches, which Mark would like Debian to do, and shipped on the same day, how would a user decide which distro to install?

What do you think? Post your comments below.


10 Comments

  1. “Anthony Towns told me here that the number of DDs has not dramatically increased in the last 3 years, and the number of new maintainers joining every year is flat and even slightly decreasing.”

    That is a bit troublesome: you cannot judge whether Ubuntu had any effect here at all.
    As someone who does not use Debian or Ubuntu (Fedora/Suse user here) I would like to share the blurred image users out there might have: the only news I heard about Debian recently was about troubles and discussions regarding internal topics. Problems in the security group, strong disagreements about sponsored positions, huge delays in the release date, problems finding enough people for the internal structures, Firefox/Thunderbird naming problems that, strangely, no other big distribution faced, etc.

    I’m sure Debian did a lot more, but that never reached the surface of the IT mass media.

    So it might be that Ubuntu is drawing away developers – but I wouldn’t be too sure about that. It could also be that Debian just has a huge PR problem at the moment and that the Ubuntu effect even works against this problem. Maybe it would be worse without Ubuntu.
    Sure, you can’t prove my version or your version – but please be aware that there might be a second view.

    And, btw.: you can be glad about how much Ubuntu does. Maybe it doesn’t fit your personal views at all, but other projects like WebKit do a *lot* less.

    And? Yes, KHTML pointed out that they didn’t like that – but they also made it clear that it is according to the licence, and therefore ok. It might not be ok in terms of moral standards, but that’s another problem. In terms of the licence it is clear – and that is not a loophole!
    (Btw., the Debian developers are the only people who use this term in this regard, and again, this sheds a pretty strange light on the Debian community if you view it from the outside.)

  2. Thank you for your thoughtful and interesting comments!

    It is true that I don’t know why Debian isn’t adding developers, but I have to believe that if Debian had added 10 million or so new users in the last 3 years, they’d have added a lot of new developers as well.

    Ubuntu has added that last .1%, and changed the release frequency. This could have been done without creating a new distribution. Shipping multiple release cycles out of a single repository is hard, but it can be done.
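    As a rough illustration of what multiple ship cycles out of one repository could look like, here is a hypothetical sketch using APT’s standard suite-pinning mechanism (two files shown together; etch and lenny are Debian’s current stable and testing suites, but the priority numbers are just example values):

    ```
    # /etc/apt/sources.list: track two suites from the same archive
    deb http://ftp.debian.org/debian etch main
    deb http://ftp.debian.org/debian lenny main

    # /etc/apt/preferences: default everything to etch, make lenny opt-in
    Package: *
    Pin: release n=etch
    Pin-Priority: 700

    Package: *
    Pin: release n=lenny
    Pin-Priority: 650
    ```

    With priorities like these, a plain apt-get upgrade stays on etch, while apt-get -t lenny install can pull a newer version of an individual package, so one archive serves two stability levels at once.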

    In my presentation, I had put the term “loophole” in quotes. I fully understand and agree that the right to fork is an important right under the GPL, but don’t you also agree that it should be used only under very limited circumstances?

    I did not discuss whether there were social challenges; it is an idea worth discussing, but Mark has never alleged this, and from my week at Debconf meeting lots of different people, I found the health of the Debian community to be fine. It could get better; how much, I do not know. One of the points I made in my talk was to exhort people to spend more time coding than sending e-mails.

  3. If the right to fork is to be used “only under very limited circumstances”, why does Debian “maintain” forks of 1000+ source packages over the years? I am sure 99% of the packages in Debian are forks, since they contain at least a new “debian” subdirectory.

    If you guys think that the upstream packages do not cooperate nicely with each other, and hence require a bulk of DDs to “fork” the upstream sources into a nice distribution, why can’t Ubuntu think that Debian does not produce a decent desktop for the public, and hence fork Debian to make one (“Ubuntu”)?

    Ubuntu does suck in one way or another in terms of its stance towards Debian, but that doesn’t mean the way they fork isn’t right. As an example, gNewSense does encourage people to fork from them, so at least the GNU guys are perfectly happy with the way Ubuntu works. I wonder whether it is a loophole in the GPL, or just a loophole in the DFSG that fails to prevent people from doing what Debian doesn’t like them to do.

  4. Alan, you make some good points. However, the fork you discuss is a trivial fork. The upstream doesn’t want a Debian directory and other Debian “package policy” work in their code. Debian is forced to carry this diff.

    Debian wants almost all of Ubuntu’s fixes, and that is a big difference. Ubuntu publishes its patches on the Internet, but it doesn’t fix bugs in them that get entered into the Debian buglist, or necessarily consult with Debian when making the fix.

    Forking is allowed, but it has costs, and it splits the community. It is social engineering, and I think the effect on the community has been negative. As another example, I think Ubuntu is buggy because it is missing Debian’s resources.

  5. Debian does want some Ubuntu fixes, but not all. These fixes are mostly not software features, but some difficult-to-measure qualities which partly account for the success of Ubuntu. If Debian had been willing to fix them, Mark wouldn’t have needed to fork. Perhaps some of them no longer exist, but they did exist at the time Ubuntu 4.10 was released.

    * Debian doesn’t want to make time-constrained (rush-for-release) decisions. Any patch produced by such decisions needs rethinking before it can be merged.

    * Debian doesn’t have the same default choices as Ubuntu, reflecting their different focus: exim4 vs postfix, vim-tiny vs nano, and a big difference in what to install by default.

    * Debian doesn’t want interim releases between stable (Long Term Support) releases. Ubuntu recognizes that different users need different levels of stability, but not the testing/unstable kind of instability.

    * Debian doesn’t want to lower its bar to accept DDs who do not want to be brainwashed with the DFSG. Ubuntu’s “Masters of the Universe” is more attractive to join.

    * Debian doesn’t want to trade off freeness for drivers (hardware enablement) to attract more users. Ubuntu recognizes that “a driver is free” is less important than “a piece of software is free”: even if your video card driver is free today, it may not be in your next desktop/laptop. On the other hand, not serving non-free drivers loses a really big market. Such a difficult decision is hence made.

    I don’t think I have enumerated them all; the above items are just the ones that flashed through my mind as I wrote this. Had Mark Shuttleworth had the power to change at least half of them, he might have chosen to give his $10M to Debian instead. But we all know Debian prefers democracy to money or publicity.

    As I said, there are obviously areas where Ubuntu can do better. They can publish better diffs, and work more under the hood. They can better manage bug reports, and they are experimenting hard with that at Launchpad.

    Yet there are fundamental difficulties preventing the projects from cooperating. If someone exported *all* the Launchpad blueprints for Ubuntu Gutsy to Debconf7, 90% of them would fail to reach a decision before Ubuntu 7.10 is released, while Ubuntu itself manages to complete 50% of them, in every release. Not a nice number, but it is better than nothing.

    Yet don’t think Ubuntu is the root of all evil. They have involved Debian in a number of efforts: the package and soname conventions during the GCC 4.0 C++ ABI transition, the dpkg package description translation, the recent dpkg-triggers implementation, and so on. I know they can do more, but only provided that Debian is willing to align with Ubuntu’s need for prompt conclusions.

    If the center of gravity begins to move, shouldn’t it be the Debian guys who think twice about what the market needs? Do we need a release with 0 RC (critical, serious, grave) bugs (but countless important bugs), or a release which is predictably timely, with a feeling of stability, a well-polished interface, and non-free easy to enable if I wish?

    BTW, you said Ubuntu is buggy and raised 6 problems. Frankly, I see none of them. On the other hand, I could perhaps build my own list of “My 6 bugs for Ubuntu” or “My 6 bugs for Debian”. I hope you get what I mean.

  6. Alan, again lots of good points.

    My responses:
    I think Debian wants the vast majority of Ubuntu’s fixes. The ones it hasn’t taken yet are usually waiting because someone hasn’t had the time to work on them.
    BTW, many of the differences you list are tiny, and could be done via a metapackage or a very small derivative, much smaller than what is going on now. Also, I don’t think that Ubuntu added even one user through its choice of postfix over exim. This was just an arbitrary change they made because they could, not because it particularly matters.
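    To make the metapackage point concrete, a hypothetical sketch using Debian’s equivs tool (the package name and dependency list below are invented for illustration, not a real package):

    ```
    # equivs control file; build with equivs-build, then install the .deb
    Section: metapackages
    Package: alternate-defaults
    Version: 1.0
    Depends: postfix, nano
    Description: alternate default selections on a stock Debian system
     A dependency-only package that pulls in a different mail server
     and editor, with no forked source packages at all.
    ```

    Installing the resulting .deb changes the defaults in question while staying entirely inside Debian’s archive.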

    I don’t believe that Debian is incapable of adding features quickly as you suggest. I went to lots of talks at Debconf where people were working on important changes and even writing code during the conference. To the extent that it is true, it is because Ubuntu has full-time employees. If you added 20 full-time employees to Debian, the pace of progress would noticeably increase.

    Ubuntu does make it easy to install drivers, but it is still missing mp3, dvd, etc. And again, these differences only make for a very small derivative, not what Ubuntu is attempting.

    You don’t repro any of the bugs? The point in that list is just to explain that if Ubuntu is going to lead us to world domination, it needs a number of important bugs to be fixed.

    Several people, including me, brought up time-based releases at Debconf 7. I hope that people give it some thought.

  7. Let’s see if the infant “merge-o-matic” helps make it possible for Debian to accept more changes. Debian has 12 arches, so patches from Ubuntu are always only the “start of the story”. Ubuntu has made smart choices to escape such problems. Hence merging in these changes is always challenging, as long as Debian is still a Universal OS and Ubuntu is still a subset of it.

    Obviously I won’t question whether DDs can code (maybe even better than Ubuntu devs)! What they usually miss is imagining what a normal user expects. Upstart is one example; Usplash is another. You may discover more by looking at how they work through recent problems, e.g. this is what I am currently reading. A mainstream desktop OS should nearly always “guess what you mean until you tell me otherwise”. Sarge’s installer did it right. Ubiquity is still superior. Yet people like to take it to the extreme. We expect there to be ideas better than those from Ubuntu devs. Hence Ubuntu was merging in stuff from Automatix and Linux Mint. Now Automatix and Linux Mint are still busy producing more innovations. Communities simply work like this.

    Every piece of software has its deployment-specific bugs. There are countless Windows machines with problems here and there, but that doesn’t prevent Windows from “world domination”. Easy-to-load drivers and playing mp3 and dvd are just part of the story. Does Debian have an ubuntuforums equivalent? Does Debian have both bleeding-edge and stable *releases*? Countless tiny issues add together to decide whether you go mainstream or not. Until we see business research on why Ubuntu is popular, it is up to your own eyes to count what is missing from everyone else.

    Yes, a lot of issues will go away when a time-based release is in place, and Ian Murdock agrees. Everyone will be happy if Debian can learn from what Ubuntu has done well. Then we can ask whether Ubuntu can merge back in. Until then, the current situation doesn’t look all that bad.

  8. Merge-o-matic doesn’t help a person understand a patch. That point is the most important idea in my talk.

    In my talk I also explain why supporting more arches is not a serious cost. Did you see it?

    I don’t think Debian is incapable of making a distro easy to use. Look at synaptic. It’s simply that Debian hadn’t gotten to it yet. But Mark could have done this work directly in Debian. Automatix is not a distro, so I don’t have problems with them doing things. Add-on packages for distros are perfectly fine.

    I propose that Ubuntu is popular because it took Debian and added the last .1%. But in doing this it split the community, sapped excitement from Debian, and created something much less efficient than if it had done the work in Debian directly. You say the situation isn’t bad, but you have to calculate how big this inefficiency is before you can make such a judgment. Most of the devs in Ubuntu have devs in Debian doing the exact same thing on a daily basis: maintaining X, the kernel, Gnome, KDE, the Mozilla packages, OpenOffice, etc. If you add all of this up, you see that Ubuntu is mostly creating duplicate work. This is also why Debian isn’t improving because of Ubuntu, but playing catch-up.

    You think Debian can learn from Ubuntu, and that is true, but an organization can improve without looking outside itself to do that. Why can’t Debian just get better by itself? It is much more efficient to improve something than to create a competitor to teach it a lesson.
