Sigh.

Story: Why Linux has failed on the desktop: kernel developer Con Kolivas (Total Replies: 73)
dinotrac

Jul 24, 2007
7:56 PM EDT
Subject says it all.
gus3

Jul 24, 2007
10:04 PM EDT
The article doesn't have that title now, but he still makes some valid points vis-a-vis the latest "Linux conquering the desktop" conversation. I actually found myself reading his words out loud, and what I heard startled me.
dinotrac

Jul 25, 2007
3:01 AM EDT
It's not shocking -- work gets done where people do work, and if the people doing work are getting paid by folks looking for serious servers, that will tend to dominate.

There's something to what he says, too. I'm always a little surprised when I have occasion to sit in front of a Windows box. There is so much I dislike, but there is a certain "snappiness" that I seem unable to get from a reasonably configured Linux box. It's not big, but it is there.
jdixon

Jul 25, 2007
3:08 AM EDT
> ...but there is a certain "snappiness" that I seem unable to get from a reasonably configured Linux box.

I've never noticed that myself. In fact, I've usually found the opposite to be true. But then, I do use Slackware, which is noted for its speed.
dinotrac

Jul 25, 2007
3:19 AM EDT
>I've never noticed that myself. In fact, I've usually found the opposite to be true. But then, I do use Slackware, which is noted for its speed.

And I mostly use Suse, which is not. I have noticed that some distros do better than others: Ubuntu seems a little snappier, and PCLinuxOS seemed a little snappier too.

Plus, there are things you can do to help. Just adjusting the "swappiness" setting can make a very noticeable difference, as can using a preemptible kernel, etc.

Still, my sense is that Windows -- given the memory it needs, of course, though that doesn't seem different from a modern Linux box with KDE or GNOME -- is just "snappier". Not even faster so much as feeling that way.
pogson

Jul 25, 2007
4:49 AM EDT
I have not seen the effects mentioned. Hesitation in a bloated software system can be caused by (seek time x number of seeks needed / number of heads) being too large. I use RAID 1 with as many drives as I can plug in/afford. I use AMD64 with lots of dual-channel RAM. I use dual core for multi-user systems like Linux terminal servers. 32-bit systems often have I/O bottlenecks, too, if they use drives on the PCI bus. Check out the block diagram for the motherboard before buying/building any system. I build my own to make sure I get what I want.

In spite of Moore's Law and its variants, seek time has not improved much, while the complexity of software systems has increased rapidly, soaking up CPU time. Systems that do not use RAID 1 will inevitably have a problem as the number of processes needing seeks rises. If you need to seek 25 times to load a programme, 8 milliseconds per seek can be appreciable. If you need to seek 100 times, 8 ms can be intolerable. With four to six drives in RAID 1, 100 seeks is still tolerable. Large RAM reduces the number of seeks needed by caching files that have been read before. This effect is huge on Linux terminal servers.
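The arithmetic above can be sketched quickly (the 8 ms seek time and drive counts are the post's example figures, and the even split of reads across RAID 1 mirrors is a simplification):

```shell
# Rough load-time cost of seeks: seeks * seek_time / drives
# (RAID 1 can spread reads across its mirrors).
seek_ms=8
for seeks in 25 100; do
  for drives in 1 4; do
    echo "$seeks seeks, $drives drive(s): ~$(( seeks * seek_ms / drives )) ms"
  done
done
```

So 100 seeks on one drive costs roughly 800 ms of pure seek time, but only about 200 ms spread across four mirrors, which matches the post's "still tolerable" claim.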
dinotrac

Jul 25, 2007
4:57 AM EDT
>Hesitation in a bloated software system

In fairness, it should be reported that I have not so much as touched a Vista machine.

If I recall correctly, Jim Coates, a longtime Windows rooter, went so far as to recommend VLC for Windows to Vista users because the Windows Media Player is so horsed up. For some reason, I recall complaints about stutters on Vista, but I could be thinking of something else.
jezuch

Jul 25, 2007
7:01 AM EDT
Quoting:is just "snappier". Not even faster so much as feeling that way.


Actually I'm not surprised. Windows is good at cheating. If Billy G. told the programmers that "it must feel snappier", then the programmers put in some code that makes it feel snappier, if only that. I suppose the Linux kernel devs would consider such code "ghastly hacks" and dismiss it with an "apage!" ;)
devnet

Jul 25, 2007
8:12 AM EDT
jezuch,

Absolutely right on that. Many programs prefetch themselves, running ever so quietly in Windows in resident memory so they can start up that much faster. My mother-in-law's 'doze box had 54 processes running on WinXP Home. That's right, 54 processes. And she just checks her mail and visits eBay.

All those installed apps fight for RAM, and it's stupid.
gus3

Jul 25, 2007
8:32 AM EDT
Quoting:My mother-in-law's doze box had 54 processes running on WinXP Home.
It's probably something similar on a Fedora desktop system running GNOME.

As for "snappiness," remember that on a Window$ box that speed comes from over-integration. If something goes haywire in a desktop theme, will that result in a BSOD? I'll take process isolation and the slowdown from context switching in exchange for a more crash-proof box, thanks. Switching process spaces is still a very costly operation on x86 processors, but it's there for a reason, and 'Doze doesn't use it enough.
pogson

Jul 25, 2007
9:02 AM EDT
Quoting:Quoted: My mother-in-law's doze box had 54 processes running on WinXP Home.

It's probably something similar on a Fedora desktop system running GNOME.


A Linux terminal server may have 20-50 users on at once. That makes it busier than a desktop machine but the system is still usable as long as you have a gigabit/s port for the X traffic, lots of RAM to cache files and lots of drives in RAID 1. We are talking about over 2000 processes. I did a test and found my AMD64 X2 3800 could do many thousands of context switches per second and 60000 interrupts per second with reasonable usability. Hardware is grand. Linux uses it well. I doubt M$ can do that. I have seen many systems become sluggish with about 100 processes in that other OS. That's what kills it with malware. That's why one Linux server will do the work of three M$ servers.
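A rough way to reproduce that kind of measurement on any Linux box (the one-second sampling window is an assumption for illustration, not the poster's actual method):

```shell
# The kernel's running context-switch counter is the "ctxt" line of
# /proc/stat; sampling it twice, one second apart, gives a rate.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/second: $((c2 - c1))"
# vmstat 1 reports the same figure in its "cs" column (and interrupts in "in").
```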
dinotrac

Jul 25, 2007
9:16 AM EDT
Re: "snappiness"

It's, in part, a canny decision made by Microsoft years ago. The original NT kept the GUI outside of the kernel. Good OS design, bad snappiness.

Microsoft compromised what was actually a pretty decent OS by moving that stuff into kernel space, making it more responsive and less stable.

OTOH - I believe that it reflects an understanding of things that users like. We like a responsive and "snappy" desktop. That other stuff -- well, that's just geek talk and all computers have that problem, right?

Given the speed of current hardware, given the use of GPUs, etc, it seems that Linux ought to be able to have a "feel" at least as responsive as Windows. I think Con is probably right that kernel developers haven't made that a priority. They may even be right not to do so. Nevertheless, it bites.



gus3

Jul 25, 2007
9:16 AM EDT
Quoting:A Linux terminal server may have 20-50 users on at once.... I doubt M$ can do that.


Bingo. IIRC, Windows is designed as a single-user, multi-processing system. Sure, XP introduced the "switch user" option, but it can't have more than one interactive user at a time.

But how many Linux systems are ever used by more than one person at a time? I tried and tried to get my last boss to let me set up a centralized terminal server system on Linux, the way God intended. No dice. OK, how about just an in-house DNS? Forget it. In that 5-person office, we had about 3 computers for each person. One for code management and document storage, one for backup, one desktop per person, and the rest were configuration testing. And that's the way it stayed.
pogson

Jul 25, 2007
9:29 AM EDT
gus3 wrote:
Quoting:In that 5-person office, we had about 3 computers for each person.


There is something to be said for redundancy, but you have higher maintenance costs with a higher parts count...

Folks who are used to that other OS just cannot get their minds around Linux. The first time I watched a Linux terminal server load up with 20 students in my lab, I was panicky: my RAM was all used up! Yet the students felt the system was very snappy with 20 of them sharing an Athlon 2500. They had gone from '98 on 400 MHz to a share of a 2500 (with the option of having it all in bursts) and were far better off. So I can claim that 125 MHz of Linux is better than 400 MHz of '98.
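That "RAM all used up" sight is usually the page cache at work: the kernel keeps recently read files in otherwise idle memory and hands it back on demand. A quick way to see the split:

```shell
# "Used" RAM on a busy Linux box is mostly reclaimable page cache;
# /proc/meminfo separates truly free memory from cache.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
# free(1) summarizes the same numbers ("-/+ buffers/cache" on older tools).
```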
NoDough

Jul 25, 2007
11:17 AM EDT
Quoting:That's why one Linux server will do the work of three M$ servers.
That's a lie! I've been working with Linux servers for years and haven't seen a BSOD yet! ;)
Sander_Marechal

Jul 25, 2007
1:50 PM EDT
Quoting:Given the speed of current hardware, given the use of GPUs, etc, it seems that Linux ought to be able to have a "feel" at least as responsive as Windows.


Play a bit with Beryl. Say what you will about composite desktops, but with Beryl it does feel "snappier" to me, even though I know it's actually the opposite. I'm not running it now due to some bugs the last time I tested, but that feel is one of the few things I miss about Beryl (the other being the expose function).
Bob_Robertson

Jul 25, 2007
2:37 PM EDT
What I found most interesting about the article is how this smart, motivated individual had his enthusiasm squashed by "politics".

There was no surprise, of course. Maybe he wasn't expecting politics to exist. But politics always exists in group decision making, so here it is again.

It may very well be just that he wrote convincingly, since he is the one who wrote the article, but I came out of it preferring the deterministic scheduler to the guessing scheduler myself. The minute extra work of actually using "nice" to balance the workload seems a small price to pay for being able to decide exactly what I want my desktop system to do best.

Which would be, without hesitation, to place an emphasis on interactive processes and let the background be background. It's not like the time between keystrokes/mouse-clicks isn't an eternity in megahertz terms no matter how fast I try to work, but I do want it to frog _instantly_ when I say jump.

His idea did penetrate; it's a pity he seems to feel bad that his wheel was reinvented rather than his being given the credit he thought he was due. Sometimes you're the Newton, sometimes you're the Leibniz.

dinotrac

Jul 25, 2007
4:06 PM EDT
>Which would be, without hesitation, to place an emphasis on interactive processes and let the background be background.

Hear, hear.

One of the things that I really missed about the mainframe world -- I was a performance and capacity geek -- was the fine-grained control I had over different classes of work in terms of priority, time-slice profiles, preemptability, etc.

The Unix way is easier, I guess...
gus3

Jul 25, 2007
11:14 PM EDT
Quoting:There is something to be said for redundancy, but you have higher maintenance costs with a higher parts count...


I know, but the extra machines were handling configuration testing. Each person didn't actually have three machines to use. It was just that we had five people, and fifteen systems running. We also had swappable hard drives for the PC's, so re-configuration was very easy. When one test was done, shut down, swap drives, power up, and test another configuration. Well over 50 configurations, which took about 3 days of steady testing (incl. some during off-hours) to get a "yay" or "nay" status.
hkwint

Jul 26, 2007
5:10 AM EDT
Quoting:A Linux terminal server may have 20-50 users on at once.... I doubt M$ can do that.

Bingo


Bingo indeed; here's a nice story. When I installed Azureus for the first time on my AMD 3000+ with 1 GB of RAM, running a very minimal Gentoo installation with a laboriously customized kernel that included nothing I don't need, and only Windowmaker (!), no KDE or Gnome, it crashed my desktop time after time. Even starting a new bash shell wouldn't work, because I wasn't able to fork. In 'ps ax' I found that Azureus always starts 30-50+ Java threads. Together with system threads, this brought the total number of threads to 100+.

It took me three months before I found out (by coincidence) that it was the ulimit conf file, limiting the number of processes a 'normal user' may have to below 100. On the other hand, 20 to 50 users may be logged in at once. So that means my PC can run 100 procs * 50 users = 5000 processes with ease. However, there are never more than two users logged in at the same time on my PC, one of them always CLI-only. Then why on earth can't I (by default, out of the box) run Azureus on Linux without crashing the desktop -- even a WindowMaker desktop, which makes XFCE look bloated -- and all that on a €800+ desktop less than a year old? How come my old XP box, a 1700+ with 256 megs of RAM, handled the same just fine in Windows? That's only because Linux isn't aimed at desktop use, and default configuration files for Linux are plain silly for desktop users. Of course, after raising the ulimit number of processes to 1000, the problem was all gone, but it did make me wonder why this had to be so hard.

How should my grandma ever know about ulimits? (OK, this may also be a Gentoo-only problem; I don't know how other distros handle this.)
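For anyone hitting the same wall, the limit can be inspected and raised roughly like this (a sketch: the limits.conf location and the example username are assumptions and vary by distro; 1000 is the value used in the story above):

```shell
# Show the per-user process/thread cap that caused the fork failures:
ulimit -u
# Count how many processes and threads you currently own
# (each Java thread counts against the nproc limit):
ps -eLf | awk -v u="$(id -un)" '$1 == u' | wc -l
# To raise it permanently, add a line like this to /etc/security/limits.conf
# (username and value are illustrative):
#   myuser  soft  nproc  1000
```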

Quoting:The minute extra work of actually using "nice" to balance the workload seems such a small price to pay for being able to decide what exactly I want my desktop system to do best.

Which would be, without hesitation, to place an emphasis on interactive processes and let the background be background.

Well, that sounds to me like a task for the distro developers. If your distro is desktop-oriented, like Sabayon or Ubuntu, they should take care of ulimits and nice settings like those explained above, include the CK patches, enable preemption, etc. I wonder if Con would have stopped if Ubuntu & desktop co. had included his patches by default. They should have, if you ask me. I also wonder if Linus would have ditched the CK patches the way he did if more distros used them.
pogson

Jul 26, 2007
6:13 AM EDT
Many distros are designed for a single-user workstation. As soon as you load up processes, you can bump into unanticipated consequences. I hit one last year when I installed EdUbuntu on my Linux terminal servers in a school: the print monitor icon thingy was always busy. Not really a problem, but a nuisance to efficiency. I looked into it, and the defaults hard-coded into the applet were to check the print queue every second! With more than 100 sessions going, this thing was checking the print queue 100 times every second, and we had 14 printers! So something intended to give a fast reaction to events in the print queue on a single-user system was malware on mine. I could not remove it because it was a dependency of GNOME, so I symlinked it to death. I could have gone into the source and changed the interval to 300 seconds or so, but I was impatient.

These things can be matters of scale. Bloatware usually works reasonably well on the system of the creator with lots of everything but jams the doors on lesser systems.
jezuch

Jul 26, 2007
4:23 PM EDT
Quoting:Bloatware usually works reasonably well on the system of the creator with lots of everything but jams the doors on lesser systems.


That's a very common problem, as I see it. Programmers tend to use several multi-core computers with two 21'' LCD monitors each, and they just don't see that something is slow ("Works for me..."). That's one of the reasons I'm not so enthusiastic about replacing my trusty Athlon XP with something "better" ;) Eclipse is somewhat sluggish, yeah, but that's GTK's fault ;)
Steven_Rosenber

Jul 27, 2007
9:48 AM EDT
Developers using newer, better hardware than their users -- definitely a problem. It can go the other way, too: one of the reasons Puppy has such good dialup support is that Barry K. uses dialup from his home in Australia.

My current experience with Windows is on XP and 2000, and I get the feeling that the GUI tends to feel snappier because it IS snappier. If a big group of coders focused on tricking out Fluxbox instead of KDE, we'd have a desktop that would match Win XP for speed and "features" (in quotes because there are so many things that Windows doesn't offer -- multiple workspaces, multiple logins, better process control, many different desktop environments, a robust, up-to-date command-line interface). Xfce is a lot closer to that ideal.

However, the idea of a "desktop" kernel and a "server" kernel seems to me to be a good one. The tasks are very different, the apps are different, why shouldn't the kernels support the two environments as best they can?
Bob_Robertson

Jul 27, 2007
10:54 AM EDT
"The tasks are very different, the apps are different, why shouldn't the kernels support the two environments as best they can?"

Which is an excellent reason to have different schedulers, maybe through a module or compile option. "Swappiness" is already a settable variable, which is certainly part of the "usability" formula.

Abe

Jul 27, 2007
11:02 AM EDT
Having two Linux kernels, one for the desktop and another for the server, is a very bad idea. What should happen is to make the kernel as parameter-driven as possible. That would allow distros, and power users, to tune it to their liking depending on the environment they run.

I am sure there are already many such parameters. The issue is lack of knowledge about how to optimize them for one mode or the other. Vector Linux is faster than many others. Could it be that they modified the kernel in a major way? I doubt it. I bet they found better tuning values than the others are using.

vainrveenr

Jul 27, 2007
11:41 AM EDT
Quoting:That would allow distros, and power users, to tune it to their liking depending on the environment they run.
You hit the spot: in my experience recently installing VectorLinux on a lower-end x86 box, this seems to be borne out in reality. It sure seems like the VL folks optimized and tuned values quite well for the desktop -- just look at the relatively low number of seemingly extraneous startup processes in VL-STD compared to other distros. Now maybe other less-commercial desktop-oriented distros can pick up on this, as well as the called-for increase in desktop-oriented kernel devs!
jezuch

Jul 27, 2007
2:13 PM EDT
Quoting:Which is an excellent reason to have different schedulers, maybe through a module or compile option.


As Con mentions in the interview, some people thought that too a few years ago and created plugsched, which allowed choosing a scheduler at boot (and maybe even swapping at runtime). But the bosses dismissed it on grounds of fear of fragmentation, or something. It is still being maintained, but its future looks bleak.
Bob_Robertson

Jul 27, 2007
2:26 PM EDT
"It is still being maintained, but its future looks bleak."

Unless, like a burr under a saddle, CK causes some forward progress with his general complaining.

He did mention that the scheduler was being rewritten along the lines of the "fair" scheduler, just that they didn't follow his structure or give him credit.

tracyanne

Jul 27, 2007
2:27 PM EDT
Quoting:My current experience with Windows is on XP and 2000, and I get the feeling that the GUI tends to feel snappier because it IS snappier.


This is the point I tried to make in the other thread http://lxer.com/module/forums/t/25742/ but the thread seems to have degenerated into a discussion of high-level languages and abstraction. The Windows GUI uses at least as much abstraction as KDE or GNOME, and just as much high-level-language code, and it's still snappier than any Linux GUI. The only explanation is optimisation: the kernel is optimised for the desktop, and the threading model suits desktop use.
Bob_Robertson

Jul 27, 2007
2:30 PM EDT
"The only reason that can explain this is optimisation, the Kernel is optimised for the Desktop, the threading model suits desktop use."

It also runs at a system service priority level, rather than as a standard user program like Linux-based GUIs do.
tracyanne

Jul 27, 2007
5:47 PM EDT
Quoting:It also runs at a system service priority level


Didn't think of that, but yes that would definitely make a difference.
Abe

Jul 27, 2007
6:40 PM EDT
Quoting:It also runs at a system service priority level, rather than as a standard user program like Linux-based GUIs do. ... Didn't think of that, but yes that would definitely make a difference.
Hold on for a second, guys. I think you are giving MS too much credit without looking at the whole picture.

Yes, Windows threading is very optimized and efficient, but you need to look at what both desktops can and cannot do.

X Windows is a true network protocol; you can't say the same about the Windows desktop, can you?

Try connecting two sessions to a Windows desktop: you can't. If you are logged on at the console and try to connect remotely, what do you think happens? That's right, one of them has to be logged off.

How many virtual desktops can you have on Windows? A single one, no more. Linux can go up to at least 64. This might not be important at the console, but it is important when you have many remote desktops connected.

How about handling a multi-user environment? How good is Windows compared to Linux?

Consider how the desktop environment in Windows is merged with the OS itself. That makes it more efficient but more prone to crashes. The desktop on Linux is totally isolated and uses APIs to communicate. You can run Linux without a desktop environment; try running Windows without a DE -- you almost never can.

Even with all this optimization and feature stripping on the Windows side, the Linux desktop is still faster considering overall performance.

Another thing: look how long the Windows desktop environment has been in development, and how long the Linux desktop has been around. Linux sure can use more improvements. Every time there is a new release of either KDE or GNOME, the DE gets faster, with more features and enhancements.

Have you noticed, when you click the Start button on Windows, how long you have to wait for the menu to show up? On Linux, it is almost instantaneous. Try right-clicking to create a new file or directory on Windows: it hesitates like heck. On Linux it is almost instantaneous.

There are many other situations where Windows is slower than Linux. Consider loading a CD on Windows: you can hardly do anything until it is loaded. The desktop virtually locks up for a moment.

All of this isolation adds overhead on Linux and makes it a tiny bit slower.

In my opinion, I will take a little hesitation on Linux any time and every time before I even think about running Windows for the sake of a little fake snappiness.

dinotrac

Jul 27, 2007
6:56 PM EDT
>In my opinion, I will take a little hesitation on Linux any time and every time before I even think about running Windows for the sake of a little fake snappiness.

Neither here nor there. I also run Linux. Doesn't mean I wouldn't enjoy some of that not-so-fake snappiness.
Abe

Jul 27, 2007
7:05 PM EDT
One more thing: take a look at the link below. Linux is capable of running on 1024 processors and is now being extended to 2048. I want to see how many processors Windows can practically handle. Maybe you don't care about that, and all you need is to run Linux as fast as it can be on the desktop. Well, the right approach is to tune it up. I think most of us have no idea how to do that, but I am sure developers and power users can. It will come for sure; it is just a matter of priorities at this time.

Keep in mind that when MS first tested Vista, it couldn't run in any practical fashion, to the point of being called useless. It was embarrassing to MS. Many of their technical testers refused to release it and sent it back to be stripped of many of the features and capabilities with which they were planning on shocking the world. Well, they ended up shocking themselves.

http://www.linux-watch.com/news/NS7317694195.html

Linux still needs improvements, and they are happening all the time as it evolves. Like I said, there are priorities, and the priority now is still on the server side. As desktop adoption picks up, I am sure many enhancements will take place in time.

Abe

Jul 27, 2007
7:26 PM EDT
This thread reminded me of my days with VMS. VMS had so many system parameters that could be modified to get the best performance out of it. Digital engineers even had an automated tool (called GETPARAMS, SAVPARAMS, or something like that). We used to run it for days, and it would come back with recommended new system parameters that enhanced performance quite a bit. Those were the days of true excellence in IT operations. Those SYSPARAMS were sort of like the parameters we see under /proc.
tracyanne

Jul 27, 2007
7:43 PM EDT
Quoting:One more thing: take a look at the link below. Linux is capable of running on 1024 processors and is now being extended to 2048.


So what? My current laptop has two processors -- dual core -- and my Linux desktop has none of the snappiness of the WinXP desktop I have to use at work, or indeed of the XP virtual machine running on top of my Linux desktop.

Quoting:I want to see how many processors Windows can practically handle?


I don't give a rat's. I'm not interested in what Windows can and can't do, except where Windows does it better than Linux, and that's where I believe Linux needs to improve. And it seems to me that the reason Linux fails in those areas is decisions made by the kernel developers to release a kernel that's optimised for multitudes of CPUs on a server.

If it's possible to optimise the kernel for small devices, there is no reason the same can't be done for the desktop.
Abe

Jul 27, 2007
7:53 PM EDT
Quoting:Maybe you don't care about that, and all you need is to run Linux as fast as it can be on the desktop. Well, the right approach is to tune it up. I think most of us have no idea how to do that, but I am sure developers and power users can. It will come for sure; it is just a matter of priorities at this time.


Tracyanne, did you miss the part above addressing "So what ... I don't give a rats"?

I know my post was a little lengthy, but please try to read it all!

tracyanne

Jul 27, 2007
8:01 PM EDT
Quoting:I know my post was a little lengthy, but please try to read it all!


I did. Then I responded with my opinion.
hkwint

Jul 28, 2007
3:41 AM EDT
So basically, if I understand correctly, one of the questions many people here are wondering about is whether there are Linux kernel parameters to optimize for the desktop at this time. That question seems interesting enough to spend some time trying to answer.
dinotrac

Jul 28, 2007
4:10 AM EDT
Let me contribute the one (and nearly only) useful kernel parameter I know about (courtesy of this site's former Grand Poobah, Tom Adelstein):

echo 10 > /proc/sys/vm/swappiness

That one will make things like OOo load several times faster by reducing the system's eagerness to swap out pages. Very handy on a single-user desktop with a decent amount of RAM.

There is also the old non-kernel trick of nicing up X's priority.

I have found it useful in the stuff I do to seriously bump up /proc/sys/kernel/shmmax. It seems to default to 32M; I have tended to bump that to 128M or more. It really seems to make a difference for Postgres (hey, that can be a desktop thing if you run apps that use it!) and video apps.

And, lest I forget, even though I can't tell you what you should set it to: in some cases, especially with lots of long serial I/Os, the PCI latency timer on your network card or disk controller might not be ideal. There are those who claim latencies of 248 (akin to your video card) are better for video-intense systems than the 32 or so that IDE buses tend to be set to by default. (Use lspci to view these and setpci to set them.)
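A consolidated sketch of the tweaks above. The values are the poster's suggestions, not universal defaults; the setpci device address is a placeholder, and the writes need root, so they are shown commented out:

```shell
# Read the current values first -- reads need no privileges:
cat /proc/sys/vm/swappiness        # default is typically 60
cat /proc/sys/kernel/shmmax        # often 32M (33554432) out of the box
# Then apply, as root:
#   echo 10 > /proc/sys/vm/swappiness
#   echo 134217728 > /proc/sys/kernel/shmmax   # 128 * 1024 * 1024
#   renice -10 -p "$(pidof X)"                 # the old "nice up X" trick
#   setpci -s 00:1f.1 latency_timer=f8         # 0xf8 = 248; address is an example
```

Checking the current values before changing anything gives you something to roll back to if a tweak makes things worse.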

hkwint

Jul 28, 2007
4:42 AM EDT
OK, after a bit of reading, here's what I found that could help with respect to 'snappiness':

(I found most of these things in a T. Adelstein article and its comments: http://www.linuxjournal.com/article/8308?from=0&comments_per... )

*readahead settings, http://www.4p8.com/eric.brasseur/linux_optimization.html

*hdparm settings (IDE/PATA only, not SATA)

*prelink your stuff, all binaries, http://wren.gentoo.org/doc/en/prelink-howto.xml?style=printa... (This is probably already done in binary distros, but I'm not sure)

*some Firefox config settings in about:config

*memory settings in OOo

*Of course, use CK-sources, or maybe even better, no-sources ( http://test3.gentoo-wiki.com/HOWTO_no-sources )
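Sketches of the commands behind that list, printed via a dry-run helper rather than executed, since the real ones need root and real hardware (device names and values are illustrative, not recommendations):

```shell
run() { echo "+ $*"; }   # dry-run: print each command; remove to apply for real
run blockdev --setra 1024 /dev/hda   # readahead, in 512-byte sectors
run hdparm -d1 -c1 -u1 /dev/hda      # DMA on, 32-bit I/O, unmask IRQs (IDE/PATA)
run prelink -amR                     # prelink all executables and libraries
```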
dinotrac

Jul 28, 2007
5:13 AM EDT
Tom's article contains a suggestion to reduce the number of consoles from six by editing /etc/inittab.

If you use Ubuntu, you will not find /etc/inittab no matter how hard you try!

I don't know the proper Ubuntu way to do it, but you will find that the console start up is controlled by /etc/event.d/tty1-tty6.

Begin mini-rant:

Who cares if there are umpteen distributions? Not me. OTOH...umpteen ways to do supposedly standard things drives me nuts.

End mini-rant.
Abe

Jul 28, 2007
5:41 AM EDT
Quoting:OK, after a bit of reading, here's what I found that could help with respect to 'snappiness':
The links you cite are very informative; on the other hand, Tracyanne wants all of that done by the kernel developers.

While I agree it should be done for the end user, I believe it should be done outside the kernel. The developers have a lot on their plate and bigger fish to fry; their priority is enhancing the kernel to furnish more important features and capabilities.

What the kernel developers need to do, if they haven't done so already, is furnish sufficient information for optimizing Linux. In other words, document what those /proc parameters are for and how they can be used to tune the kernel.

It is up to the distros to use the furnished information to make their desktops the best they can be. I would expect Red Hat, Suse, Ubuntu, etc. to take the lead on that. So far Red Hat has shown high interest in the server but none in the Linux desktop. Suse doesn't seem to have done much in the area of desktop performance, as shown by theirs being slower than others. Gentoo, Slackware, and VectorLinux chose different approaches, and so far VectorLinux seems to have done the best job.

It is not the kernel developers' responsibility to tailor Linux to work better for the desktop, and we should not expect them to spend their valuable time on that. If some of them want to, that is great; no one is stopping them, and they would be doing us a great favor.

Con Kolivas is absolutely right to bring the issue up, and Tracyanne and others are absolutely right in pushing and supporting the idea. But what I disagree with is demanding that the developers modify the kernel to suit the desktop. It is not the right approach, in my opinion.



dinotrac

Jul 28, 2007
5:50 AM EDT
>But what I disagree with is demanding that the developers modify the kernel to suit the desktop. It is not the right approach, in my opinion.

Nobody is demanding anything.

If, however, Linux is to be a serious desktop option for more than dedicated souls like you and me, kernel developers cannot ignore single-user considerations. "One size fits all" works only when that size is infinitely configurable, and that introduces its own problems.
Abe

Jul 28, 2007
6:18 AM EDT
Quoting:"One size fits all" works only when that size is infinitely configurable, and that introduces its own problems.


I agree; on the other hand, we need to give parameter tuning a try first to see how far it helps without having two different kernel versions. Supporting two different kernel editions is not as easy or practical as a single version by the same group.

Take the case of MS, for instance. They concentrated on the desktop and in the process created a server version that is basically very weak. We can see that in the limited scalability of the server edition, especially in how many applications it can reliably run and how many processors Windows can practically support. It is very costly, even for MS, to maintain two different editions.

MS made it even harder for themselves by bundling everything they thought could help make a snappy desktop into the OS, only to reap the consequences in the server edition.

dinotrac

Jul 28, 2007
6:20 AM EDT
>two different kernel versions

Two different kernel versions?

Ummm....

Taking into account the compile-time options and loadable modules, we currently already have literally thousands of kernel versions.
azerthoth

Jul 28, 2007
6:31 AM EDT
Two other things can help if one builds their own kernels. During config, check these settings.

-Preemption Model --Voluntary Kernel Preemption (Desktop)

-Timer frequency --1000 Hz

Also setting your memory up correctly

-High Memory Support

--off, if you have less than 1 GB of RAM

--1GB Low Memory Support, if you have 1 GB of RAM

--4GB, if you have more than 1 GB of RAM

Along with the tweaks that hkwint posted previously and changing swappiness as dinotrac suggested you should see a marked improvement in desktop operation.
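For anyone wanting to try the swappiness tweak mentioned above, here is a minimal sketch. The value 10 is purely illustrative (not a recommendation from this thread), and the persistent-config file path varies by distro.

```shell
# Hedged sketch of the swappiness tweak; value 10 is only an illustration.

# Show the current setting (the kernel default is 60 on most systems):
cat /proc/sys/vm/swappiness

# Lower it for the running session (needs root):
sysctl -w vm.swappiness=10

# Make it persist across reboots (file location can vary by distro):
echo "vm.swappiness=10" >> /etc/sysctl.conf
```

Lower values make the kernel less eager to swap out application pages, which is usually what a desktop user wants.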
Abe

Jul 28, 2007
7:06 AM EDT
Quoting:Two different kernel versions?

Ummm....

Taking into account the compile time options and the loadable module, we currently already have literally thousands of kernel versions.
Come on now Dino, you know what I mean: the version number of a Linux kernel released by Linus. I don't think he makes different editions of the same release, does he?

Abe

Jul 28, 2007
7:16 AM EDT
Quoting:Along with the tweaks that hkwint posted previously and changing swappiness as dinotrac suggested you should see a marked improvement in desktop operation.


I am sure they should. On the other hand, the question remains whether the Distros are utilizing such tuning in their builds. And if they are, are they doing a good enough job of testing and evaluating for the best possible results? Having multiple Distros is a good thing, since we will have a baseline to compare to.

The other question is whether we expect end users to be able to do that for themselves. Actually we shouldn't, at least not desktop end users.

azerthoth

Jul 28, 2007
7:49 AM EDT
While not arguing that desktop-aimed distros should be making these tuning adjustments: if I follow your line of reasoning, no desktop user should ever need to do kernel-level tweaks to their system to get the most out of it. Unfortunately, that's just not possible for the whole range of what people term desktop use. What I use mine for may not be the same as what you use yours for. Sure, there are some tweaks that could come with the generic kernels, although I have yet to find one that has the 1 kHz timer set. The reason for that should be evident if one looks at the hardware that the generic kernels are designed for. They need to be compatible not only with new hardware and processors but with the old as well.

This means a kernel that will function across a broad range. In doing that, something has to give, and that falls firmly on overall performance. If you want razor's-edge performance, you can only get so far before you have to get into kernel-level tweaks. Microsoft at least got that part right, even if they do tend to underestimate: you know that there is a floor under which you can expect the OS not to perform at all. Linux, on the other hand, takes pride in being able to run, and is supported, on nearly anything that was ever made. The distributions also want to cover the greatest possible range of hardware; this comes at the price of top performance on any specific system.
dinotrac

Jul 28, 2007
9:16 AM EDT
>Come on now Dino, you know what I mean.

No I don't. Nobody has suggested different version numbers for Linux releases. Last I looked, the same version number is able to accommodate an incredible number of variations, including significant things like:

loadable module support

different I/O schedulers

preemption model

NUMA

timer frequency

System V IPC

Format of executable files

file systems

Disclaimer -- some of these may just be options of the kernel I use, though, I guess, that may be the point: It can be done and it is done.
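As a quick way to see some of these variations for yourself, here is a hedged sketch of checking how a running kernel was configured. Both file locations are assumptions that depend on the distro and kernel build options.

```shell
# Hedged sketch: two common places to check a kernel's build configuration.

# If the kernel exposes its own config (CONFIG_IKCONFIG_PROC enabled):
zcat /proc/config.gz | grep -E 'CONFIG_PREEMPT|CONFIG_HZ'

# Debian/Ubuntu-style installs usually keep a copy under /boot:
grep -E 'CONFIG_PREEMPT|CONFIG_HZ' /boot/config-$(uname -r)
```

Comparing this output across two distros makes it easy to see that the "same" kernel version can ship with quite different preemption and timer settings.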
Abe

Jul 28, 2007
10:42 AM EDT
Quoting:...this comes at the price of top performance on any specific system.
In principle, I agree.

Like Dino noted
Quoting:"One size fits all" works only when that size is infinitely configurable, and that introduces its own problems.


Linux runs on many different kinds of hardware because it was designed from the ground up with that in mind. The reason Linux is able to handle that is that it is open and highly modularized. Whatever hardware is being used, you include the necessary modules; for what is not used, you just don't include its code. For specialized devices and systems, you just add what is necessary. Vendors do that all the time, e.g. Tivo, routers, phones etc...

Let's focus on what is currently most important, which is Intel-based servers and desktops. Both are pretty much the same in terms of hardware components; they basically differ in how they allocate resources to the various applications. They use the same types of disks, memory, Ethernet cards and what have you.

In this respect, the kernels don't differ much other than in their resource allocation. That can be handled fairly well by tuning system parameters. Keep in mind that the optimal values of these parameters also depend on the type of applications you run on a computer.

Hardware optimization is mainly done at the driver level and that shouldn't differ from the server to the desktop since they are basically the same.

So you see, there is a limit on what you can do with the kernel to optimize it for a desktop or a server, but there is a lot more that can be done at the application level.

Bob_Robertson

Jul 28, 2007
10:48 AM EDT
Abe, thank you for listing all those great reasons why Linux is a better platform for the user. I agree with you, which is why I don't use Windows.

I'm not "giving Microsoft credit" at all. In order to overcome the miserable state of their code base, they _have_ to run the GUI at a system service priority and permissions, which is one primary cause of the abominable security experience.

A Linux-based system is better in every measurable way except one: perception. It is perceived as being slower because a few little things sometimes take a perceptible time to do.

Not that Windows doesn't do _exactly_ the same thing and worse! Defending Linux as a desktop environment is like trying to defend a free-market when all people do is attack by saying what bad thing(s) might happen, while ignoring that even under the tightest police-state regulation those same bad things _still_ happen.

Argh.

Ok, I'm done.

BTW, I haven't used multiple virtual desktops since OLVWM in 1994. I find that I prefer to have my working applications minimized into the bar at the bottom of the screen, as "perfected" by Win95 and embodied in KDE, rather than have the multiple desktops.

However, I have one entire desktop environment (Alt-F7) for my local machine, and another (Alt-F8) for the server. I can tell the difference in display times, of course, but such absolutely native functionality on a remote system for MS-Win is hardly a native function of the Windows GUI. In Xwindows, it is.

Like the guy said in _RoboCop_, "I LIKE IT!"
Abe

Jul 28, 2007
10:59 AM EDT
Quoting:loadable module support

different I/O schedulers preemption model NUMA Timer frequency ...


Those are adaptations of the single kernel that is released by Linus. What you, Red Hat, Suse, or anyone else releases are all adaptations of the core.

If any of those change the fundamentals of the Linux kernel, it is up to the developers to include them in the official kernel or not.

That is not what we are talking about.

Sorry, I can't make it any clearer.

Abe

Jul 28, 2007
12:08 PM EDT
Quoting:I'm not "giving Microsoft credit" at all


I know, but others sounded, at least to me, like they were, and it might have been unintentional.

You brought up a very important point about the use of services that MS started with NT. NT wasn't a bad kernel; on the contrary, it had a really good kernel. After all, it was the creation of the guy who was the main architect of VMS. Services on a VMS server, which used to be called detached background processes, were also the workhorse of VMS. They handled application interfaces and user requests via memory mailboxes (now we know them as threads).

Combined with efficient threading, the NT kernel was a pretty fast and capable system as a server. Services are very fast and clean processes with minimal resource requirements. I don't know much about the internals of Linux, but if it had such easy and clean processes and better threading, it would blow Windows out of the water on both the server and the desktop side.

krisum

Jul 28, 2007
1:55 PM EDT
From my experience (running on a K6-2 300Mhz 192M machine), the best is elive (http://www.elivecd.org) with low requirements running enlightenment (which I like much better than xfce) and a quite decent elive control panel to configure most of the things. The look is very good and best part is that it is Debian.

Only issue could be that the release version is not free -- requires some donation. So you could try the last development version live CD and if you like it then get the release version.
krisum

Jul 28, 2007
2:02 PM EDT
Sorry, wrong thread!!
tracyanne

Jul 28, 2007
2:03 PM EDT
Quoting: I know, but others sounded, at least to, like they are and might have been unintentionally.


Intentionally in my case. If you can't accept that the opposition does something better, then you will never fix what needs to be fixed. Also because it brought people who know about things I don't know out of the woodwork.
dinotrac

Jul 28, 2007
3:34 PM EDT
>Intentionally in my case.

And mine too.
Abe

Jul 28, 2007
4:54 PM EDT
Quoting:Intentionally in my case.
In that case, I don't agree with your assessment that Windows is snappier.

I use XP at work and Linux at home. In my opinion and in general, I find XP to be slower than Linux, and at times it is annoying as heck. I think you are unjustly exaggerating just to drive your point. There is really no need for that, and there is no need to blame it specifically on the kernel.

Quoting:If you can't accept that the opposition does something better, then you will never fix the what needs to be fixed.


Tracyanne: Did you read my post about how well services and threading work on Windows NT? I do call them as I see them, even when the opposition is better.

Besides, the kernel developers are constantly working on improving it. The fast cycle of releasing patches, and the new scheduler in particular, are obvious proof. I believe there is no need for "Windows does this and that better than Linux", especially when it is not substantiated and purely subjective.

jdixon

Jul 28, 2007
6:15 PM EDT
> In my opinion and in general, I find XP to be slower than Linux

That's my experience also. I've found that an initial install of Windows 2000 on the same hardware seems almost as fast as Linux, but it starts slowing down almost immediately, and within a few months it's much slower. I find XP to be slightly slower than 2000. I haven't tried Vista, but from what I've heard, it would be far slower than XP.

I think some of this is distro specific. Slackware based distros (which is what I use) just seem to be faster than most of the others out there, so I've never noticed the slowness the others are reporting.
tuxchick

Jul 28, 2007
6:32 PM EDT
It takes a lot of CPU cycles to feed the botnets.
gus3

Jul 28, 2007
6:56 PM EDT
LOL @ tuxchick.

You have a way with words.
Steven_Rosenber

Jul 28, 2007
7:47 PM EDT
After all these discussions, I did load Vector Linux. It is pretty snappy, though no more so than Debian or Slackware. LILO didn't pick up my Xubuntu partition, so I'll have to deal with that next week. Also, I couldn't seem to mount my USB flash drive. Again, I didn't have a lot of time, and the online man pages for Vector that held the answers to my questions were down. I'm probably going to throw ZenWalk on there, or even Slackware again -- there's a lot to like in Slackware for older hardware, especially running Xfce or Fluxbox.

I took a peek at Edelstein's article on optimizing Linux for the desktop. How many 2-year-old articles have that kind of staying power? He mentions a GNOME feature that preloads part of OpenOffice. If you are a heavy user of OO, that's great. Otherwise it's a waste of system resources. OO does preload to some extent in Windows, but it is user-configurable, if I am correct.

The thread is so long now, I don't know if anybody mentioned turning off non-essential services that start up at boot.

And as far as things like MS Office and IE being "snappy" in Windows, a lot of that is due to key components for those applications being included in the kernel. It's more evident for IE than for MS Office, but the OS does seem to favor those apps. Also, the GNOME loader for
tracyanne

Jul 29, 2007
12:25 AM EDT
Quoting:Did you read my post about how good services and threading work on Windows NT? I do call them as I see them even if the opposition was better.


Yes, I did. The "you" in my statement would better read as "one", as in "If one can't accept..."; it wasn't an attack on anything you, or anyone else, had said.

The simple fact is that most of the time the user experience is superior on the Windows desktop. Yes, I can, and often do, push the Windows desktop to the point where it chokes and the experience deteriorates markedly, while no matter how hard I push the Linux desktop, it continues to give the same less-than-perfect experience that never gets any worse.
dinotrac

Jul 29, 2007
3:41 AM EDT
> it continues to give the same less then perfect experience that never gets any worse.

You have a better experience than I do. I find that to be mostly true, but memory starvation can really s-l-o-w my systems down.

Still, mostly, they hold up TONS better to abuse than Windows.

Maybe the appropriate automotive comparison is a pickup truck and something like a Toyota Corolla/Honda Civic, etc.

The little car looks nice when you're sitting in it, feels good and works well. It feels nice and zippy. Steering is light and controls responsive. Driving in traffic is a breeze. Drop half a ton of crap on it and, well, it still looks pretty good, aside from any squishing that may occur, but it ain't moving much. You certainly want to avoid the uphills.

The pickup truck may be nice and shiny, but, no matter what you do, it's still, well, a pickup truck. Power assists may help the steering, but it lacks the precision of a little zipmobile. The ride is, umm, trucklike. It's got a bigger motor than the little car, but it isn't as quick because of the weight it pulls around. As a car, the truck isn't terrible, but it isn't wonderful, either. Drop half a ton of crap on it and little changes. You'll feel the load. The ride might actually improve. Overall, it isn't terrible, but it isn't wonderful.
hkwint

Jul 29, 2007
1:18 PM EDT
Abe, beware! You said:

Quoting:I believe there is no need for "Windows does this and that better than Linux" especially when it is not substantiated and purely subject.


That's exactly what Linus said about the ck-patches, and the reason not to offer the staircase scheduler as an option in the mainline Linux kernel. No matter how much they argued about whether it was substantiated or not, ck-users still felt ck-linux was snappier, which is why people use ck-sources and are happy. You can't dismiss those users by saying they 'are going through experiences that don't exist in the real physical world but only in their heads'. We have to accept user experiences, even if we can't measure them, and we should care about them.

We should care if ck-users say ck-patches make Linux feel snappier, and in the same way we should care if people say Windows XP feels snappier than vanilla-Linux.

Of course, being snappy doesn't make a significant difference when it comes to productivity, in my opinion, so in an objective way snappiness is not an issue at all. I mean, what are five seconds more of startup time for OOo, and fifteen seconds more of startup time in Linux than in Windows, when in Windows you have to spend hours keeping the system clean of mal/ad/crap/cripple/spyware, viruses and worms, and re-patching your patches? Well, I can tell you, from an objective viewpoint, this lack of snappiness in Linux is totally insignificant compared to the time wasted in Windows.

However, it is the objectively 'small' issues that can be quite big in a subjective way. For example, when saving in AutoCAD over a network, I have to wait about four seconds before my drawing is saved. I save about once every five minutes or so. That is totally insignificant compared to my style of drawing: sometimes I could have saved a quarter of work by thinking more before starting to draw, by reading the assignment more carefully before starting (which would save me the quarters spent drawing the wrong aspects because I only looked at the drawings instead of reading the assignment), or by using the mirror feature more than I do, in which case I would only have drawn one side instead of two. So you could say those four seconds of missing snappiness are totally insignificant in an objective way. Then you'd be right.

Nonetheless, you can't guess how frustrated I am every time I'm waiting for the d*rn file to save. Those few seconds feel like an eternity for some reason, and I get p*ssed off much more than when I realize I could have saved a quarter by drawing in a different way.

So what I'm saying is this: when it comes to user experience, you can't deny subjective feelings. Even more than that: the subjective feelings are what make the user experience. (Which, by the way, is the general theorem of marketing if you ask me: even if your product s*cks in an objective way, relate it to the positive feelings users may experience with it in a subjective way, and then you will get it sold.)
dinotrac

Jul 29, 2007
1:57 PM EDT
Hans --

You get it! You get it! You get it!

Users' interactions with their tools are not a Frederick Taylor exercise. The human element is large and has as much to do with productivity as keystrokes and total processing time.

Happy people tend to be more productive. Frustrated ones less so.
azerthoth

Jul 29, 2007
2:54 PM EDT
Agreed
dinotrac

Jul 29, 2007
6:29 PM EDT
As a coincidence to this thread, I've been involved in mailing-list conversation with some friends -- serious Unix heads that include a Linux kernel contributor -- that was spurred by a local school district's letter to parents that they should get Microsoft Office 2007 for their kids. This is Batavia, Illinois, and the notice drew the attention of Slashdot.

The interesting part of the discussion was the frustration several of us feel with the Linux desktop. One of us had abandoned it for a Mac two years ago, and a couple of us have favorable impressions. The truth, whether or not any ubergeeks want to admit it, is that the Linux desktop is functional, stable, easier to use than ever -- and not very satisfying. I referred to "snappiness" in this thread; my mailing-list mate refers to smoothness. Compared to his Mac, the Linux desktops just seem cruder.

Of course, compared to the Mac, pretty much all desktops seem cruder.

Bob_Robertson

Jul 29, 2007
6:44 PM EDT
"Of course, compared to the Mac, pretty much all desktops seem cruder."

If there were no benefit to vertically-integrated systems, there wouldn't be vertically-integrated systems.

The entire Mac (even using the BSD core kernel) is written by a relatively small group of people who are very well integrated. They have a plan, and they have _COMPLETE_ control over what hardware the software runs on.

Myself, I consider the conflicts and competition between (just for example) KDE and GNOME to be a GoodThing(tm, reg us pat off) because it motivates improvement on both (all) sides. Such a vertically integrated system as Apple doesn't have that direct feature-for-feature competition.

However, they have a goal and no hesitation to leave out what they believe does not need to be there in order to reach that goal. Compare this to KDE's "everything and complete control of the kitchen sink too" and of course KDE looks clunky.

Compare the "user interface" of a Mercedes-Benz with the same user experience in a full-sized Kenworth long-haul tractor. Which is going to be "smoother", and which is going to seem "cruder"?

I'm a bit of a bigot myself. I've used Macs; they're just a bit too polished, a bit too abstracted for me. I _want_ to be able to have fine-grained control when I want it. I happen to enjoy changing my own oil and knowing the pressure in my own tires.

I've used Windows, every version except Vista (which I powered up only long enough to make a system restore disk before I erased it with extreme prejudice, just in case), and every argument against the Linux desktop experience describes problems I've had with Windows at one time or another.

But here's why this thread is important: With Linux, we can FIX it. We can have some effect on what is done to solve these problems of perception. Windows? Mac? Forget it, because they are someone else's visions of what they want.

With Linux, we have met the community and they is us.

(apologies to Pogo)
Abe

Jul 29, 2007
7:25 PM EDT
Quoting:...With Linux, we have met the community and they is us.


Well said Bob.

I wouldn't take a Mac over Linux for the same reasons.

Quoting:With Linux, we can FIX it.
A couple of weeks ago, my wife mentioned to me that Frank & Claire (both doctors) were very depressed and hadn't slept the night before, trying to revive their Mac. Frank needed it badly to get his files for a presentation at a conference. His backup on a second USB drive was no good, or incomplete. He had a similar situation before, and he ended up sending the drive to some outfit for recovery. It cost him $1,500 to recover about 70% of the files and took 3 weeks.

They asked if I could help; I said I would try but couldn't promise anything.

Well, Linux to the rescue. I downloaded Debian PPC 4.0 (which I had never used before) and created a boot CD. Guess what -- they thought I was a genius after I recovered all his files (with his help, since I wasn't familiar with the Mac). It did take me some time to figure things out, but I did it. His hard drive was pretty much on its last legs.

That was not the first incident; about a year ago, another friend had a problem with his Windows XP hard drive signature. Linux was the hero then, too.

Now, about the Mac "smoothness", in my very brief exposure to the Mac, I really thought it was funky.



gus3

Jul 29, 2007
8:49 PM EDT
Says dinotrac:

Quoting:The little car looks nice when you're sitting in it, feels good and works well. It feels nice and zippy. Steering is light and controls responsive. Driving in traffic is a breeze. Drop half a ton of crap on it, and, well, it still looks pretty good, aside from any squishing that may occur, but it ain't moving much. You certainly want to avoid the uphills.
I'm going to go a little off-topic, and who knows, it may actually help the discussion.

I moved across the country in my little Honda Civic CRX. It was packed to the limit: I could see out the mirrors, and barely see out the rear windows. It may have been "half a ton of crap," what with the books and computers and monitors. Oh, and I had a few clothes in there, too. ;-)

Truth is, that cross-country trip was a better ride than nearly any other I've taken in it. The extra mass hid the bumps in the road a lot better, I could still accelerate going uphill (thanks to the engine's power curve), and I actually got improved fuel economy with the prevailing tailwind. Normally I get 42 MPG on the highway, but over this trip I got 46 MPG. I ran the figures 3 times to make sure I did them right.

That was a year and a half ago, when the engine had ~244,000 miles on it. It now has over a quarter-million.

Would I prefer to have moved in a pickup truck? I doubt it. Even with having shipped the less fragile stuff, I used a lot less fuel than a truck would have used. The cost savings alone with my little car's fuel usage offset the shipping cost of what I didn't carry.

Now, back on-topic, I guess the moral of my story would be: Under the right circumstances, and with a good handler, a "weak" system can still have some surprising abilities. Yet M$ would have us believe that, if it isn't the latest and shiniest hardware, it isn't worth having. Phooey on them.

No, let me re-word that. M$ would have us believe that, if you don't have the latest and shiniest hardware, you aren't worth having as a customer. You might actually make them look bad.
hkwint

Jul 30, 2007
6:51 AM EDT
OK, I compiled ck-sources. Actually, it was no-sources-2.6.20, an overlay available for (amongst others, I believe) Gentoo, based on the -mm, -ck and -love patchsets, including Con's newest scheduler (the Rotating Staircase Deadline scheduler, RSDL for short; don't ask me what it means, by the way). I'll try it and let you folks know if I experience any difference. I hope I do. Probably I can't measure it, but it should 'feel' smoother / snappier, no? The sad thing is, my computer (the one I'm on now) isn't very old -- it has a Sempron 2600 and 768M of RAM -- but maybe I could underclock it and downgrade the RAM.

The only thing I'd need is a 'random kernel chooser' in GRUB to make this test 'trustworthy' and less subjective. Any suggestions on how to do this? Probably I could use an initrd, but I have bad experiences with initrds, because they are a lot of work to make and most of the time they don't do the things I'd like them to do. However, performing a 'blind' test seems like a fine job for the holidays, which I'm having now.
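One hypothetical way to get a random kernel chooser without an initrd, assuming GRUB legacy with "default saved" set in menu.lst and the two kernels under test at menu entries 0 and 1 (the entry numbers and the use of grub-set-default are assumptions about the setup):

```shell
# Pick menu entry 0 or 1 at random; od reads one unsigned byte from urandom.
N=$(( $(od -An -N1 -tu1 /dev/urandom) % 2 ))
echo "$N"

# Then, as root, record the choice so GRUB boots that entry next time
# (requires "default saved" in /boot/grub/menu.lst):
# grub-set-default "$N"
```

Running this from a shutdown script, and only checking the log of which entry actually booted after the test period ends, would keep the comparison reasonably blind.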

In the meantime, to keep you busy, please read http://archlinux.org/pipermail/arch-dev-public/2007-May/0005... , about a kernel-patchset developer / maintainer frustrated with the Linux kernel. Sounds familiar, no? Remember, this 'dev' posted this about a month before Con voiced his criticisms of the kernel.
