
Mark Shuttleworth at LinuxTag

By Jonathan Corbet
June 14, 2010
Your editor had the pleasure of giving a keynote talk at the 2010 LinuxTag, immediately prior to Mark Shuttleworth's keynote - a position described by more than one person as being Mark's warmup act. That role must have been successfully carried out; Mark's talk was, indeed, well received from the start. Topics ranged from the familiar (cadence) to issues like quality, with a look at upcoming Ubuntu design features as well.

Mark described his (and Ubuntu's) job as taking the great work done by the development community and getting it out there where people can use it. There has been a lot of progress on the development front, resulting in a great deal of top-quality software. But that's not where the job stops; getting that software to users, Mark says, is "a whole new level of awesome." Achieving this new level is his objective.

"Cadence" - the regularity and frequency of releases - has been one of Mark's talking points for a while. The conclusion that he has come to is that releases are important, they have value in and of themselves. It is, he says, more important to get a release out than to have any specific feature included in the release.

Why is that? Releases draw attention to the project and generate enthusiasm among users. Releases also can help to keep the entire community busy. Developers can run with something like a 100% duty cycle; there's always stuff to hack on. But other community members - packagers, documentation writers, artists, translators, marketing people, etc. - really need a release to focus their energies around. Regular, frequent releases thus keep the whole community engaged in the project. They are also something which free software is uniquely capable of doing: proprietary software releases are much more feature-driven and cannot be done with the same kind of regular cadence.

For a while now, Mark has been pushing the idea of a coordinated cadence across multiple projects. Quite a few projects, he says, are headed toward something like six-month release cycles; why not try to coordinate them so that distributors can all focus on the same specific releases? That kind of coordination could help projects focus their work knowing that a certain release will be picked up and widely distributed, and it should help distributors to minimize duplicated effort and get the best of what the development community has to offer. In terms of progress, Mark says that Ubuntu is getting closer to proper release cycle coordination with Debian, but no further details were offered.

Quality is another theme that Mark's talk covered; he urges the community to start thinking differently about the quality of its code. When the focus is on "hero developers," he says, quality tends to take a back seat. He would like to see a stronger focus on everyday quality in development projects, starting with broader use of automated test suites which, he says, are "made of awesome." The core rule for these projects is that the development trunk should pass the test suite after every commit.

This kind of discipline may not sit well with all developers. But there is a key advantage to a "pristine trunk" which always reaches a minimal quality bar: it encourages users to run and test development code. Ubuntu has been increasing its building of bleeding-edge packages for a number of projects; the result has been a "ten to one-hundred times increase" in the number of people testing those programs. A good set of regression tests also makes it more likely that a project will take patches from unknown developers; passing all the tests gives a level of assurance that the patch cannot break things too badly.
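
That core rule (a trunk which passes its tests after every commit) lends itself to mechanical enforcement. A minimal sketch of such a gate, in the form of a git pre-commit hook, might look like this; the use of "make check" as the test entry point is an assumption on your editor's part, not anything Mark prescribed:

$ cat .git/hooks/pre-commit
#!/bin/sh
# Refuse the commit unless the full test suite passes.
# Assumes the project's tests can be run with "make check".
make check || {
    echo "test suite failed; commit refused" >&2
    exit 1
}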

Mark also touched on automatic crash reporting as a highly useful tool for distributors and developers. The crash reporting tool can gather all of the relevant information and ship it off to the people who are best equipped to interpret it and, hopefully, fix the problem. Code review was also favorably mentioned, with the tools provided by Launchpad getting special attention. Such tools, he says, broaden participation in the development process and help to create a wider conversation about what's acceptable in a code patch. That, in turn, helps new developers to get started with a project.

Finally, Mark is pushing to see more attention paid to design in free software. Proper design makes the software more appealing to "ordinary users," and thus will increase their number. It can also increase pride among developers. But doing design right is a challenging task; it's not something that can always be done by developers. Mark talked a bit about the design elements which have gone into the Ubuntu Netbook Edition, including the infamously moved window icons, the creation of notifications which don't take up screen space, "category indicators" which can indicate status (and provide controls) for a number of related applications, etc. The result of all this work is a lot of new code which, he hopes, the GNOME community will be willing to accept into its mainline.

The next release is Maverick Meerkat, which Mark suggested is the "Don't panic" release. Why? Well, it seems that they've moved the release date forward slightly to October 10, which has the effect of balancing out the year's two development cycles a bit. But the binary 10/10/10 release date has appeal to hacker types, especially when one realizes that 0b101010 = 42.
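
Doubters with a bash prompt handy can check the arithmetic directly ("2#" being bash's binary-literal prefix):

$ echo $((2#101010))
42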

Looking toward the future, Mark put up the famous chasm diagram by Geoffrey Moore. This diagram is a bell-shaped curve showing product adoption over time; there is, however, a gap between the "early adopters" and the majority of users. Getting across that gap ("chasm") can require changes in how a project is developed and marketed. Mark says that Linux as a whole still needs to cross that chasm, though specific components (the kernel in particular) have already done it.

What does Linux have to do to get to the other side? Preinstallation was at the top of Mark's list; users need to get devices which have Linux already installed on them. He also says that "obvious things should just work," where things like codecs are considered to be "obvious." We need to provide these features, but we cannot stop caring about freedom in the process. As a result of its efforts, Mark says, Ubuntu will be shipping preinstalled on five million machines this year.

Some of those machines, most likely, will be running Ubuntu Light; this is a stripped-down distribution meant to run as an "instant on" alternative on Windows machines. Ubuntu Light can get a user into a web browser within seven seconds of the power being turned on - useful when checking a web page is all that the user wants to do. To get there, Ubuntu Light has very few applications and no real file management. But it is a place for users to start, to discover a bit of Linux, and, hopefully, develop an appetite to delve into it more deeply later on.

Another area of interest is ARM processors. To Mark's surprise, the ARM-oriented sessions at the Ubuntu Developer Summit were packed; there is a lot of interest in this architecture. The Linaro initiative was mentioned as an effort to make Linux work better on ARM-based systems. There is also, evidently, a growing level of interest in ARM-based servers; that should be interesting to watch.

There was also a brief mention of cloud-oriented endeavors. "Ensemble," a way of packaging and deploying cloud-based servers, was mentioned. Also, evidently, Ubuntu is working on a project using LXC containers to run containerized systems on Amazon EC2 guests. This second level of virtualization, evidently, is useful for people wanting to run a number of services while buying only a single virtual host system.
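
For the curious, here is a rough sketch of what such nested containers might look like with the lxc command-line tools; the container name and template are illustrative guesses, as the talk offered no specifics:

$ sudo lxc-create -n web -t ubuntu   # build a container from the ubuntu template
$ sudo lxc-start -n web -d           # start it in the background
$ sudo lxc-ls                        # list the containers on this host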

After the end of the talk, a member of the audience asked Mark about when (if ever) Canonical might open-source the Ubuntu One server software. Mark answered that he doesn't "have an answer on how to do it and make it all work." One gets the sense that this release will not happen anytime soon. Mark did try to point out that freedom has been designed into Ubuntu One, even if the code is not free. In particular, Ubuntu One doesn't just make sure that users can get their data out; all data is stored locally as well from the outset. So users should not worry that they can be trapped in the service.

At that point, the standing-room-only session came to a close.

Index entries for this article
Conference: LinuxTag/2010



Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 5:15 UTC (Tue) by skvidal (guest, #3094) [Link]

"Mark did try to point out that freedom has been designed into Ubuntu One, even if the code is not free."

Seriously?

I would hope that, at a Linux conference, you could not talk about 'freedom' without actually meaning 'free software' and get away with it.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 6:52 UTC (Tue) by nix (subscriber, #2304) [Link]

I see what Mark's driving at though. For something like this, keeping its users' data free is surely significant, and Ubuntu One does this. (Of course, these days, so do its biggest competitors, all very non-free in the software sense.)

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 5:41 UTC (Tue) by alison (subscriber, #63752) [Link]

Jonathan, what about a report or link to what *you* spoke about?

Jon's speech

Posted Jun 15, 2010 13:51 UTC (Tue) by pr1268 (subscriber, #24648) [Link]

Seconded.

Jon's speech

Posted Jun 15, 2010 14:10 UTC (Tue) by smurf (subscriber, #17840) [Link]

Thirded.

Jon's speech

Posted Jun 15, 2010 14:33 UTC (Tue) by sebas (guest, #51660) [Link]

Forced. ;)

What I spoke about

Posted Jun 15, 2010 14:29 UTC (Tue) by corbet (editor, #1) [Link]

What I spoke about will be familiar to LWN readers already; it was a version of my standard "state of the kernel" talk. I've put up the slides [PDF], but I'm not sure how helpful they are without me droning alongside them.

What I spoke about

Posted Jun 15, 2010 16:41 UTC (Tue) by jspaleta (subscriber, #50639) [Link]

Relative to the last report that you did.... has Canonical moved up in the corporate contributor ranking?

-jef

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 7:19 UTC (Tue) by salimma (subscriber, #34460) [Link]

"Mark also touched on automatic crash reporting as a highly useful tool for distributors and developers"

Yes and no. In my experience, in some cases we just trade bug reports with incomplete technical information for bug reports with no engagement from the reporter.

Automated crash reporting

Posted Jun 15, 2010 23:02 UTC (Tue) by midg3t (guest, #30998) [Link]

I think it is going too far to expect any involvement from the user with automated crash reports. That is, after all, the whole point of automating them. The value is in collating crash reports in order to prioritize bug fixing efforts, as well as being aware of where the application is crashing so that the entire class of bugs (crashes) can be fixed. Of course Linux users are often technical enough that they will be willing to provide extra information (such as the environment and actions that produce the crash) but that should definitely be an optional step.

kerneloops.org is a good example of doing this properly.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 8:15 UTC (Tue) by nhippi (guest, #34640) [Link]

> He would like to see a stronger focus on everyday quality in development projects, starting with broader use of automated test suites

This is certainly one area where Ubuntu could give back to the community. Assign your employees to write test suites for all the software you use which doesn't already have one. Simple, productive, everyone benefits, and you finally have something to show when people criticize you for not contributing code.

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 16:57 UTC (Thu) by vonbrand (guest, #4458) [Link]

It is anything but "simple" to write a proper test suite for something, plus it is boring detail work. It is really no wonder test suites are few and far between. The more extensive ones I've seen (e.g. for GCC) are mostly collected and cleaned-up problem cases.

Mark Shuttleworth at LinuxTag

Posted Jun 20, 2010 20:17 UTC (Sun) by nix (subscriber, #2304) [Link]

It's actually quite easy to write white-box testsuites for parts of systems -- *if you do it when you write the code*. If you do it then, you *know* the edge cases you have to test in your coverage testcase, and you *know* the boundaries you want to wander up to in any fuzz tests -- so it's fairly easy to write them.

What's a total sod is trying to do the same when faced with a bunch of code you wrote ages ago and no longer remember anything about, or that you didn't write at all.

Mark Shuttleworth at LinuxTag

Posted Jun 20, 2010 18:06 UTC (Sun) by jschrod (subscriber, #1646) [Link]

You can be sure that people like jspaleta would still criticize them, because they would find some area where they would not contribute _enough_. Just look at any generic report about something and his jumping in with pseudo-questions that are meant to slant Ubuntu, and especially Canonical, in some way. And sadly, there are enough folks like him in our community.

Joachim (not an Ubuntu user, FWIW)

Mark Shuttleworth at LinuxTag

Posted Jun 22, 2010 13:26 UTC (Tue) by nye (guest, #51576) [Link]

>And sadly, there are enough folks like him in our community.

Sadly, I don't believe there are.

Jef's unwillingness to drink the Kool-Aid, combined with an unusual level of honesty and accuracy, is highly appreciated.

If that attitude were more prevalent, I feel Free Software would seem more appealing to a wider range of users, because the community would demonstrate a lesser degree of obviously ludicrous self-delusion.

Mark Shuttleworth at LinuxTag

Posted Jun 25, 2010 18:30 UTC (Fri) by deepfire (guest, #26138) [Link]

> Jef's unwillingness to drink the Kool-Aid, combined with an unusual level of honesty and accuracy, is highly appreciated.

Absolutely.

It's always heartening to see people straying off the local-maximum chase (that is, universal pleasantness) and into some soul searching and inconvenient question asking.

Periods of peaceful and undisturbed focus are important, but so are periods of wider perspective and introspection.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 8:53 UTC (Tue) by jhs_s (guest, #67402) [Link]

"The result of all this work is a lot of new code which, he hopes, the GNOME community will be willing to accept into its mainline"

That is interesting when you quote the announcement mail for new GNOME modules (http://mail.gnome.org/archives/devel-announce-list/2010-J...):

"+ libappindicator (external dependency)
- it doesn't integrate with gnome-shell
- probably depends on GtkApplication, and would need integration in
GTK+ itself.
- we wished there was some constructive discussion around it, pushed
by the libappindicator developers; but it didn't happen.
- there's nothing in GNOME needing it.
=> rejected for the reasons stated above"

Basically Ubuntu is just hoping that things get accepted without working closely enough with the GNOME community to make that happen. That's really unfortunate, as everybody would love to work together with them more closely. That would mean that they would have to release things early, and not just send an announcement when things are set in stone.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 11:55 UTC (Tue) by AlexHudson (guest, #41828) [Link]

Given how different the stock Ubuntu is to GNOME as designed, one can only assume that they're going to keep forking further from upstream.

I guess they see that as their differentiator though, tbh - i.e., to them, it's a feature.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 14:05 UTC (Tue) by ccurtis (guest, #49713) [Link]

I don't see anything contradictory in Ubuntu's position. Honestly, DbC (Design by Committee) is justifiably ripe fodder for an endless number of cartoon parodies. I think Ubuntu wants to be "the Apple of Linux", and I don't mean that in the sense that they're overly mimicking the OSX interface.

Sometimes one has to take the bull by the horns and head off into new directions. Hopefully the endeavor is successful, others agree, and the changes get integrated back into the project's baseline - because maintaining a fork is expensive. But engaging in political battles on mailing lists to convince a community to change is often tilting at windmills (let's see how many clichés I can fit in here...)

At the same time, there's often a hypocrisy in these communities. When a change is suggested, half the developers shout "show me the code"; when the code is written, another half complain about the style or how it's the wrong way to do it; and when the code is released independently because an upstream merge is just too difficult, yet another half complain about the project being forked. However, the fork allows the code to prove itself in the real world and not simply in theory, and what more proof is needed?

One thing is for sure - Ubuntu is doing their own thing. There is a vision that they are pursuing, and they're not asking for permission first. They are making mistakes, but are also very successful. The Linux ecosystem is growing, and in some respects Ubuntu is leading the way. This statement isn't meant to diminish any other distros forging their own paths (all of which are, in their particular ways, helping to grow the ecosystem), but Ubuntu is giving Linux some Apple-like consumer "magic".

And they're not doing it to the exclusion of others -- again, there are (as always) areas for improvement, but they're helping the Debian project (by paying some DDs if nothing else), and the talk of cadence is the equivalent of shouting, "Hey guys, follow me!". Of course, if you're running off a cliff you can do that yourself, thanks, but so far all the evidence is to the contrary.

I have to stop here because I feel like I'm writing an advertisement for Ubuntu, when really I just find it difficult to fathom why there is so much disdain for a distro doing open source the way that open source advocates sell it.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 14:24 UTC (Tue) by AlexHudson (guest, #41828) [Link]

There wasn't actually any criticism in my comment if you read it: I'm just noting that they have already diverged pretty far from upstream GNOME in key ways and look set to continue doing that. If they want to do that, it's entirely up to them.

I think it is relatively optimistic, though, to think that you can develop stuff in this manner and expect acceptance by e.g. GNOME, particularly when you're touching relatively core pieces of the desktop UI.

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 3:32 UTC (Wed) by ccurtis (guest, #49713) [Link]

Please don't think I'm trying to single you out - there seems to be a contingent of semi-vocal people who appear to enjoy finding fault with Ubuntu/Shuttleworth.

However, implicit in your statement, "I guess they see that as their differentiator though, tbh - i.e., to them, it's a feature." is the contrary position "[...] to me, it's a fault." I'm not arguing the point - the endeavor may very well turn out to be folly. I don't use GNOME (I tried, I really did), so it really has little impact on me; I suspect most of what Ubuntu does has just as little impact on its critics.

What is more interesting to me is the broader picture. We see this now as it relates to Ubuntu/GNOME, but it's really the same story as Google's wakelocks and the kernel, with somewhat different details. These sorts of issues need to be clearly resolved early, in a way amenable to everyone (forks are okay as long as you discuss your approach first, or whatever), else much needless strife will ensue, as I see no reason for issues like these to abate.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 15:46 UTC (Tue) by rvfh (guest, #31018) [Link]

> At the same time, there's often a hypocrisy in these communities. When a change is suggested, half the developers shout "show me the code"; when the code is written, another half complain about the style or how it's the wrong way to do it; and when the code is released independently because an upstream merge is just too difficult, yet another half complain about the project being forked. However, the fork allows the code to prove itself in the real world and not simply in theory, and what more proof is needed?

You have three halves here... could you let us know the approximate size of each of them :-D (another cliché, esp. for old French-speaking people who know Raimu)

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 3:38 UTC (Wed) by ccurtis (guest, #49713) [Link]

The three halves were intentional -- a little levity to (hopefully) show I'm not frothing at the mouth over here.

However, to play pedant, I did represent each half at a different point in time, so it all works out. ;-)

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 6:05 UTC (Wed) by niner (subscriber, #26151) [Link]

Also you never said that these halves were distinct sets. A developer can easily say "show me the code" _and_ later complain about the fork.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 16:28 UTC (Tue) by mjthayer (guest, #39183) [Link]

> At the same time, there's often a hypocrisy in these communities. When a change is suggested, half the developers shout "show me the code"; when the code is written, another half complain about the style or how it's the wrong way to do it

Perhaps there is a sweet spot in-between? For instance suggesting the change, being clear that you are willing to author it and discussing with those involved (before you start and as you go) how to do it in a way they would find acceptable? Of course, perhaps I would see things differently if I were currently trying to get a major change into a major project.

Note that this isn't really aimed at Ubuntu, just to say that I can understand when project owners are a bit coy about who and what code they "let in".

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 11:14 UTC (Thu) by modernjazz (guest, #4185) [Link]

> For instance suggesting the change, being clear that you are willing to author it and discussing with those involved (before you start and as you go) how to do it in a way they would find acceptable?

That works really well if the person is viewed as a core member of the developer community, because the other developers will take the proposal seriously and engage with it.

It often doesn't work well for someone who isn't already in the inner core. "Radical" proposals just won't command the focused attention of the developer community. On one hand, the person making the proposal might be a crackpot, and so it would be a waste of time to discuss it; on the other hand, the person might actually be quite talented and motivated, but no one in the community yet realizes that and so they don't put the time into it that (with hindsight) they should.

Personally, I don't think there is a solution to this problem; there isn't enough time for developers to treat everything that comes up purely on its merits (trust is an important timesaver), nor is there always enough time for any would-be contributor to go through a slow process of building trust (and some contributors, like Mark himself, may contribute in ways that don't quickly garner respect from C coders).

So in my view the process will always be a bit messy. Just like evolution. Perhaps the biggest improvement would be an increased tolerance/politeness/respect for that messiness on the part of the wider community.

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 9:16 UTC (Wed) by marcH (subscriber, #57642) [Link]

> "The result of all this work is a lot of new code which, he hopes, the GNOME community will be willing to accept into its mainline"

This must be wrong since Ubuntu has never contributed any free software! Only reaped existing free software for profit. I know this from comments posted again and again on this site.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 16:38 UTC (Tue) by jspaleta (subscriber, #50639) [Link]

5 million preinstalled Ubuntu machines? That's a big claim. I assume there's nothing publicly known outside of this keynote that can be used to substantiate that...as per usual for any public claim made by a Canonical exec.

Hmm. Remind me again... at its peak how many pre-installed machines in a year did Xandros claim at the beginning of the netbook market? Overall are Ubuntu preinstall deployments growing linux marketshare or is Ubuntu just effectively cannibalising the same marketshare that Xandros created originally?

And... moreover, if Shuttleworth is lumping in the new Ubuntu-branded instant-on solution that was announced recently, which competes directly with Splashtop (a competing Linux solution)... what are Splashtop's overall pre-installed numbers? Splashtop _owns_ this space. Is Ubuntu winning OEM contracts away from DeviceVM's Splashtop, or are they growing the space with new OEMs that have yet to get on the Splashtop train?

-jef

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 20:48 UTC (Tue) by zoopster (guest, #57562) [Link]

I missed the 5 million pre-installed machines comment in the article. Where did you get that from? Or is it just unsubstantiated rhetoric? And what in the world would that have to do with Xandros? Not sure I get the connection. Enlighten us oh wise one.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 21:26 UTC (Tue) by mmcgrath (guest, #44906) [Link]

> I missed the 5 million pre-installed machines comment in the article.

FTA (the one right above your comment)

> As a result of its efforts, Mark says, Ubuntu will be shipping preinstalled on five million machines this year.

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 2:03 UTC (Thu) by daniels (subscriber, #16193) [Link]

I guess he could enumerate potentially unannounced commercial deals, or he could ... not. Seriously, this is the (equally pointless) equivalent of going through Wikipedia articles and whacking '[citation needed]' twice per sentence because you don't like the author, or the topic, or whatever.

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 12:31 UTC (Thu) by stevem (subscriber, #1512) [Link]

Seconded. Thanks to Jon and the LWN team for the comment filtering, it's making LWN much more pleasant now!

Mark Shuttleworth at LinuxTag

Posted Jun 18, 2010 7:36 UTC (Fri) by k8to (guest, #15413) [Link]

While I agree, be careful not to talk about it too much, or you negate some of its benefits.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 20:43 UTC (Tue) by dale.sykora (guest, #57981) [Link]

It would be nice if automated testing covered web proxy usage. Most problems I see with Ubuntu seem related to running behind a corporate firewall with proxy settings.

Mark Shuttleworth at LinuxTag

Posted Jun 15, 2010 22:05 UTC (Tue) by marcH (subscriber, #57642) [Link]

What kind of software do you have in mind?

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 13:34 UTC (Wed) by dale.sykora (guest, #57981) [Link]

I do not have any particular test-automation software in mind, but I assume Ubuntu has some automated scripts to test basic stuff. I'm suggesting they run some of the automated tests on a machine that uses a proxy to see if anything is broken. I have noticed UbuntuOne cloud storage doesn't work behind a proxy. Also, some package installs/updates fail (flash, for instance).

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 19:57 UTC (Wed) by marcH (subscriber, #57642) [Link]

Sorry my question was not clear: What kind of software typically does not work behind a proxy? Examples?

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 4:47 UTC (Thu) by dougsk (guest, #25954) [Link]

Circa CentOS 4.(2|3?) I ran into issues with yum. I pushed a script to export the HTTP_PROXY variable at boot as I couldn't get the boxes to honor WPAD; mind you, that was the go-to-hell plan, as I had a two-hour maintenance window. Could have been me.

Further down the road I ran into a host of issues with some other "corner" applications and platforms as well [Looked good in lab, meh]. In order to mitigate the pain, I put transparent proxies in place and have never looked back.

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 7:03 UTC (Thu) by walles (guest, #954) [Link]

Two pieces of software that come to mind are:
Apport: https://bugs.launchpad.net/bugs/94130
Bzr: https://bugs.launchpad.net/bugs/586341

Since both of them are Canonical driven projects, they would be two good candidates for receiving (more) behind-a-proxy testing.

Cheers /Johan

Mark Shuttleworth at LinuxTag

Posted Jun 18, 2010 7:41 UTC (Fri) by k8to (guest, #15413) [Link]

apt-get fails sometimes, but that's more a matter of the Debian practice of round-robin access distribution, combined with unsynchronized and nonatomic updates of the mirrors in the pool, and apt-get's failure to handle this.

However, it does whine rather perniciously if HTTP_PROXY is set, as if this were a sin (without suggesting any corrective action).
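
For what it's worth, apt can also be pointed at a proxy explicitly rather than through the environment; something like this (untested; the proxy address is made up and the fragment name is arbitrary) avoids relying on HTTP_PROXY at all:

$ echo 'Acquire::http::Proxy "http://proxy.example.com:3128/";' \
    | sudo tee /etc/apt/apt.conf.d/01proxy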

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 9:01 UTC (Wed) by dgm (subscriber, #49227) [Link]

I use Ubuntu daily in a corporate environment, behind a proxy, and there's only one gripe I would love to get rid of: having to change proxy settings manually each time I get in and out of the firewalled zone. I think this is an issue with NetworkManager, basically. Anything else just works for me.

One of these days I will check whether there's a bug about this in Launchpad and vote for it.

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 11:57 UTC (Wed) by mgedmin (subscriber, #34497) [Link]

I ended up writing a small shell script to enable/disable my proxy config with gconftool:

$ cat /home/mg/bin/proxy-on
#!/bin/sh
gconftool -s /system/proxy/mode -t string manual

$ cat /home/mg/bin/proxy-off
#!/bin/sh
gconftool -s /system/proxy/mode -t string none

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 10:57 UTC (Thu) by zzxtty (guest, #45175) [Link]

Eek, isn't this what aliases are for?!

Mark Shuttleworth at LinuxTag

Posted Jun 17, 2010 13:42 UTC (Thu) by marcH (subscriber, #57642) [Link]

You mean functions?

Mark Shuttleworth at LinuxTag

Posted Jun 18, 2010 13:33 UTC (Fri) by zzxtty (guest, #45175) [Link]

I'm a tcsh man, not bash, but a quick look at the bash manpages suggests aliases are the correct term in this case. I would use an alias for a single command; functions appear to deal with multiple commands or more complex logic.

These should do the trick (untested):
alias proxy-on='gconftool -s /system/proxy/mode -t string manual'
alias proxy-off='gconftool -s /system/proxy/mode -t string none'

A 'script'* would have to spawn a new shell to process the command, rather inefficient.

*I can't quite bring myself to call a one liner a script!

Mark Shuttleworth at LinuxTag

Posted Jun 19, 2010 16:31 UTC (Sat) by bronson (subscriber, #4806) [Link]

> A 'script'* would have to spawn a new shell to process the command, rather inefficient.

True. In those situations where you want to switch your proxy 300 times per second, this is very important! ;)

Trading some performance for modularity and maintainability is usually a pretty good idea.

Mark Shuttleworth at LinuxTag

Posted Jun 21, 2010 6:50 UTC (Mon) by zzxtty (guest, #45175) [Link]

"True. In those situations where you want to switch your proxy 300 times per second, this is very important! ;)"

Well you never know... No, from a performance point of view you are quite right. However, I have been on machines that were constantly running out of process IDs, and every little helps (e.g. echo *). Admittedly, I don't see that switching your proxy on and off would be a particularly high priority in such a situation!

"Trading some performance for modularity and maintainability is usually a pretty good idea."

You've lost me slightly there; I would have thought having one .cshrc (or .bashrc?) would be more maintainable than a bin directory full of separate files. I guess we all have our preferred ways of working!

Mark Shuttleworth at LinuxTag

Posted Jun 25, 2010 10:54 UTC (Fri) by robbe (guest, #16131) [Link]

The PID overrun can also be thwarted by using "exec gconftool..." in the OP's scripts.

My maintainability concern with putting aliases in .bashrc is that only a single shell sees them.

Coming back to performance, the alias solution does incur the (small) per-shell cost of reading, parsing, and keeping the alias in memory, when in most shells you will never need it.

IIRC zsh has one-per-file functions that can reside in a directory somewhere. This is probably the optimal solution here: as maintainable as a shell script, loaded on demand, and no new shell forked.
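
Something along these lines (untested; the directory and function names follow the earlier examples) is how that is usually wired up:

$ cat ~/.zshrc
# Look for one-function-per-file definitions in ~/.zfunc.
fpath=(~/.zfunc $fpath)
autoload -Uz proxy-on proxy-off

$ cat ~/.zfunc/proxy-on
# The file's contents become the body of the proxy-on function.
gconftool -s /system/proxy/mode -t string manual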

Mark Shuttleworth at LinuxTag

Posted Jun 28, 2010 20:01 UTC (Mon) by bjartur (guest, #67801) [Link]

Similar to a source then?

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 13:05 UTC (Wed) by jiu (guest, #57673) [Link]

I have the same situation with corporate Windows. I'm not too fussed about it because there's so much more to complain about.

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 14:23 UTC (Wed) by dlang (guest, #313) [Link]

You can create a proxy.pac file that will detect your IP address and decide to use a proxy or go direct depending on what your IP is (or anything else that it can detect in JavaScript).

proxy auto-config

Posted Jun 25, 2010 10:47 UTC (Fri) by robbe (guest, #16131) [Link]

Yeah, this works for browsers. Other programs need HTTP, too.

I'd rather not have apt-get's http method evaluate JavaScript code.

Proxy auto-reconfiguration by network

Posted Jun 17, 2010 1:31 UTC (Thu) by ewen (subscriber, #4772) [Link]

As dlang pointed out, you can create a local proxy.pac file, which is a short JavaScript program that sets the proxy to use at request time. It can, amongst other things, check the current IP address you have against a list of subnets and set the proxy appropriately for that subnet. I wrote up how I do this a while ago:

http://ewen.mcneill.gen.nz/blog/entry/2010-02-12-web-prox...

and aside from extending it to support a few more networks it's worked flawlessly for months. (The example given in that page is to auto-use some specific proxies to reach some other networks behind NAT firewalls, but it should be obvious how to just make it depend on source IP so that it auto-changes when you're in a given firewalled network.)

Ewen

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 17:11 UTC (Wed) by jcm (subscriber, #18262) [Link]

I'm sure the talk was really interesting, but did anyone run a "cadence" count this time around? I tend to zone out after the second or third use of "cadence", no matter how hard I want to listen.

Mark Shuttleworth at LinuxTag

Posted Jun 16, 2010 17:41 UTC (Wed) by jspaleta (subscriber, #50639) [Link]

More importantly.. what was the cadence of the word cadence. Is Shuttleworth so fully committed to the ideal that he's sticking to cadence ...release schedules...inside his own talks!? That would be a subtle, masterful achievement in soapbox grandstanding. Not as good as giving a talk in heroic couplet... but still pretty good.

-jef

Shuttleworth's misplaced example of cadence

Posted Jun 17, 2010 16:02 UTC (Thu) by msnitzer (subscriber, #57232) [Link]

I was at this keynote and noticed that Shuttleworth pointed out that "all major upcoming distros are using the 2.6.32 kernel". Shuttleworth really wants that to be the case, but the reality is that not all distro kernels are created equal, even if they started with the same baseline kernel.

But saying as much would defeat Canonical's ability to woo would-be customers with something like: "See, through normalization and cadence we're just as good as the enterprise distros."

Shuttleworth's "cadence" appears to be purely about getting the community to work for Canonical without having to pay the upstream experts. A utopian ideal that helps Canonical compete with distribution vendors who do employ many of upstream's developers. Who can argue against utopia without looking mad!?

Shuttleworth's misplaced example of cadence

Posted Jun 17, 2010 16:24 UTC (Thu) by jspaleta (subscriber, #50639) [Link]

That brings up an interesting point. Among the distributions that are shipping a kernel based on 2.6.32 mainline, how big a delta are they from each other and from mainline? Are the differences between vendor kernels which are ostensibly the same version as large as the differences between sequential mainline versions? And for the long-lived releases... how large do those deltas grow after a year or two?

-jef

Shuttleworth's misplaced example of cadence

Posted Jun 17, 2010 18:13 UTC (Thu) by dlang (guest, #313) [Link]

My understanding is that the distros are down to a 'couple hundred' patches against the baseline kernel. That is an order of magnitude or two fewer changes than between kernel versions (and generally much smaller changes).

The question about long-lived kernels is interesting, but I suspect the delta is still smaller than the one between mainline kernel releases.

Also remember that the patches are all available, so the distros can and do see what each other are doing (and there is cross-pollination between the kernel teams in some cases), so I suspect the differences are smaller than a simple sum of each distro's patch counts would suggest.


Copyright © 2010, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds