
Story: The ~200 Line Linux Kernel Patch That Does Wonders
Total Replies: 31
dinotrac

Nov 16, 2010
11:18 PM EDT
Holy Kaboley, I've got to get that on my Myth box.

Watching video while commercial flagging -- hmmm. Wonder if it'll help Hulu out at all? Wireless streaming from my mythbackend?

I can think of a thousand places that are hurt more by latencies than by peak throughput.

This is very exciting to me, especially considering the disdain I have had to suffer from Windows and Mac users.
tuxchick

Nov 16, 2010
11:29 PM EDT
OMG teh dino. *rub eyes*
dinotrac

Nov 17, 2010
12:34 AM EDT
Relax, TC. It's all an illusion. I'm really off on facebook enemying people.
tuxchick

Nov 17, 2010
1:54 AM EDT
No way! I want to enemy people too! I'm going to go look for the enemy button even if it takes me all night.
Steven_Rosenber

Nov 17, 2010
2:57 AM EDT
The fact that they have a "like" button but not a "hate" button is just wrong. "Like" it or shut the h#ll up ... sounds eerily familiar, don't it?
jezuch

Nov 17, 2010
3:33 AM EDT
Hmm, how often do you have a load of 50 on your desktop? I do my kernel compiles with nice +20 (or SCHED_IDLE), which seems to have the same effect... In theory, of course. I haven't tried it yet :)
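
(For reference: I believe `chrt -i 0 <command>` is the direct way to start something under SCHED_IDLE, assuming util-linux's chrt is installed and I remember the flag right.)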
jacog

Nov 17, 2010
4:13 AM EDT
@Steven_Rosenber

That all depends on how you choose to interpret it. It could be "if you don't have anything nice to say, then don't say anything", or it could be "we assume everyone hates it until they say otherwise".
dinotrac

Nov 17, 2010
7:43 AM EDT
Sorry you guys don't have a hate button on your facebook pages. I don't want to be indelicate, but all the really cool people have one. I thought you knew.
gus3

Nov 17, 2010
10:07 AM EDT
@dino:

Then the really cool people got that Facebook malware that went around a couple months ago.
dinotrac

Nov 17, 2010
10:14 AM EDT
@gus3 -

No, we didn't. That was for everybody else.
jdixon

Nov 17, 2010
2:47 PM EDT
Welcome back, Dino.
dinotrac

Nov 17, 2010
4:21 PM EDT
Thanks, all. Don't know how steady I'll be, what with life and all. But nice to drop by, at the very least.
hkwint

Nov 17, 2010
7:40 PM EDT
Hey Dino, where were you when we needed you? We had all those fancy discussions about laws and such and none of us having a clue, you know, the whole thread full of IANAL-(and-never-were) types, darn what a mess.

Don't know what took you so long. Maybe people hacked your Facebook account, upset your friends who are now your enemies (so LXer is the only friend left) and, to prevent future ID fraud, you finally invented the new Carton ID after all? But glad you're back!

Especially to read my "planned new LXer feature" about five years of LXer memories, as you clearly deserve a place in it, ahem.

'bout them patches: I just read about them an hour ago, and I think I'm going to test them. But I don't have the slightest clue how to 'measure' whether they help anything.

update: Oh joy, it's compiling!
tuxchick

Nov 17, 2010
9:36 PM EDT
I hope somebody hacked Dino's Facebook page because it has a big banner adorned with pictures of blue butterflies, and it says WINDOWS 7 FOR WORLD PEACE.

hkwint

Nov 17, 2010
9:40 PM EDT
Apart from that photo of a tux-like chicken roasting on the BBQ.
gus3

Nov 17, 2010
9:53 PM EDT
@hkwint, try building a kernel with "make -jN" (where N is your CPU count +4 or some such), and then play Big Buck Bunny full-screen.

I was amazed.
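
For the record, that recipe in script form -- a rough sketch, assuming Python 3, a configured kernel tree in the current directory, and mplayer with the video on hand (the file name below is a placeholder):

    import multiprocessing
    import subprocess

    # Saturate the machine: kernel build with (CPU count + 4) jobs.
    jobs = multiprocessing.cpu_count() + 4
    build = subprocess.Popen(["make", "-j" + str(jobs)])

    # While the build grinds away, play the video full-screen
    # and watch for dropped frames or audio stutter.
    subprocess.call(["mplayer", "-fs", "big_buck_bunny_480p_surround.avi"])
    build.wait()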
dinotrac

Nov 17, 2010
10:52 PM EDT
Hans --

Not a tux-like chicken at all, but a genuine penguin!

There's tasty, and there's endangered species tasty.

Wait...hold on. Drugs wearing off, which is strange considering I don't do drugs and am not on any prescriptions, unless, of course, you count pure pharmaceutical-grade dinotrac...

My corporate logo is an emperor penguin in red sneakers! Somebody's messin' with the old guy's mind.
dinotrac

Nov 17, 2010
11:04 PM EDT
BTW - Hans:

I have a certain sympathy for your dilemma. In my old mainframe days, I was a capacity planner and performance measurement geek. Thanks to the need to maximize use of the hardware (I was at EDS where we made money off of doing that, and my data center had $1 billion worth of hardware to pay for), we made damned sure to have data in both quantity and granularity that would make Unix geeks turn green.

The truth, however, is that there is no great mystery to measuring responsiveness -- although it might be difficult to actually gather the numbers.

Those latencies do add up, and they make tasks take longer to run. So, if you measure response times from beginning to end, you can capture something that correlates very well with responsiveness.

Here's the catch:

Average response time is a useful but inadequate measure. The reason for that is that we're very sensitive to abnormally long latencies, especially in audio, and, to a lesser extent, video.

Something that completes in 1 second on average, but actually completes in 500 milliseconds 4 times and 3 seconds the 5th time, will not be perceived to be as responsive as something that averages 1.5 seconds but varies only between 1.4 and 1.6.

I used to measure both average response times and 90th-percentile times (95th for certain service level requirements).

At any rate, a quick average coupled with a 90th percentile at or below that average will probably seem pretty responsive.
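
If you've got the raw numbers, the statistics end is trivial. A quick Python sketch (the times are the made-up numbers from my example above):

    import math

    def percentile(samples, pct):
        # Nearest-rank percentile: smallest sample >= pct percent of the data.
        ordered = sorted(samples)
        rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
        return ordered[rank - 1]

    times = [0.5, 0.5, 0.5, 0.5, 3.0]   # four quick runs, one stall
    mean = sum(times) / len(times)
    print("mean %.2fs, 90th %.2fs, 95th %.2fs"
          % (mean, percentile(times, 90), percentile(times, 95)))
    # The mean is a healthy-looking 1.00s; the 90th percentile (3.00s)
    # is what tells you the thing actually feels sluggish.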
hkwint

Nov 18, 2010
1:06 AM EDT
Dumb question, but how do you measure 'response times' these days?

For example, it's hard to see when a webpage is finished loading and such.

gus3: Tried it, but it worked pretty well without the patch, and after I applied the patch the kernel didn't compile; it threw an error. Seems the error was reported today (the 17th, yesterday here) by some Brazilian bloke. I hope it will be solved soon; I don't feel like posting to gmane.
dinotrac

Nov 18, 2010
8:34 AM EDT
Hans --

That's the $64,000 question, isn't it? Linux simply isn't as rich in data as the old mainframes. An awful lot of what passes for serious performance analysis is done by looking at queue lengths, analyzing waits, etc.

One reasonable place to look would be Apache logs. You might be able to construct a proxy (i.e., a measurement that looks and acts like what you really want, even if it isn't exactly what you want) from that and run a s -- umm -- boat -- load of short GETs designed to bring up web pages. Of course, caching will kill you if you don't have a boatload of different short pages to bring up.

As a developer, I used to insert timestamps in critical pieces of code so that I could gather up the information that I needed.

One thing that should help is that this is a system-level patch. That means you can reasonably assume that all boats will get a similar lift. You can't know that, but, given the descriptions, you should have confidence in your ability to create an artificial transaction (if you lack some good real ones) that does some I/O, uses some CPU, and timestamps to a log. It will reflect the improvements if you fire transactions at your machine randomly, at varying loads (that queuing theory stuff, y'know: transactions tend to arrive randomly and, in interactive applications, tend to demand randomly distributed amounts of work).
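
A bare-bones artificial transaction along those lines might look like this (a sketch in Python 3; the scratch path, work sizes, and arrival rate are all placeholders):

    import os
    import random
    import time

    def transaction(path="/tmp/txn_scratch", blocks=64):
        # One fake unit of work: a little I/O, then a little CPU.
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(os.urandom(4096))
            f.flush()
            os.fsync(f.fileno())          # make the I/O real, not just cached
        os.remove(path)
        burn = sum(i * i for i in range(200000))   # CPU part (value unused)
        return time.time() - start

    # Fire transactions with random (roughly Poisson) inter-arrival times
    # and log one timestamped response time per line.
    with open("/tmp/txn_log", "a") as log:
        for _ in range(50):
            log.write("%f %f\n" % (time.time(), transaction()))
            time.sleep(random.expovariate(2.0))    # mean 0.5s between arrivals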
Sander_Marechal

Nov 18, 2010
3:59 PM EDT
For desktops I think you can objectively measure responsiveness: it's the time between clicking on something and the system giving some indication that it's doing what you want (e.g., the button gets pressed).
hkwint

Nov 18, 2010
5:33 PM EDT
Quoting:the time between clicking on something, and the system giving some indication that it's doing what you want


Sure, but then again you still need some device or something to measure that time. I tried loading 20 webpages with loads of JavaScript as a kind of comparison between FF3.6 and 4.0b-JS preview. Of course, 4.0 was much quicker, but how to measure? Use a stopwatch? And how do you know all 20 pages are done loading?

Indeed, making a local copy of the webpages and serve them through local Apache might be a good idea, but it requires downloading all resources needed for a single webpage. My skills are not up to the task, sadly.

Also, I wanted to graph CPU usage while those 20 pages are loading. I found out 'ps' doesn't show "current" CPU load, only cumulative CPU load. Top in batch mode requires quite some system resources. So I went looking in the code, but it seems quite complex, especially given that I'm on a dual-core nowadays. If you can catch jiffies, though, you're almost there. But again, my coding skills are not up to the task. I looked on Stack Overflow, where somebody posed the exact same question (how to measure the CPU load caused by certain PIDs), and again no answer. Can't find the answer in /proc either, not even in /proc/<pid>/stat.
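
(Closest I can figure from staring at the docs: the raw jiffies do seem to be in /proc/<pid>/stat after all; you just have to sample twice yourself and take the difference, like top does. An untested Python sketch, field positions per the proc(5) manpage:)

    import time

    def proc_jiffies(pid):
        # utime + stime of one process, in clock ticks.
        with open("/proc/%d/stat" % pid) as f:
            data = f.read()
        # The comm field can contain spaces, so split after the ')'.
        fields = data.rsplit(")", 1)[1].split()
        return int(fields[11]) + int(fields[12])   # utime, stime

    def total_jiffies():
        # Sum of the aggregate "cpu" line in /proc/stat (all cores).
        with open("/proc/stat") as f:
            return sum(int(x) for x in f.readline().split()[1:])

    def cpu_percent(pid, interval=1.0):
        p0, t0 = proc_jiffies(pid), total_jiffies()
        time.sleep(interval)
        p1, t1 = proc_jiffies(pid), total_jiffies()
        # 100% here means "the whole machine"; multiply by the core
        # count if you want top-style per-core percentages.
        return 100.0 * (p1 - p0) / (t1 - t0)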
dinotrac

Nov 18, 2010
6:48 PM EDT
Hans --

And, of course, you have the problem of what you measure. For responsiveness, average times simply aren't good enough.

Hmmmm.

Here's a very strange thought -- too strange, probably, to make any sense at all, but... what if you wgetted a bunch of web pages and fed them through wkhtmltopdf?

That might be a little too heavy, but, done from a script, it would give you something you could measure, with granularity that would let you gather distribution statistics.
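
Driving it from a script might look something like this (a sketch, assuming Python 3, wkhtmltopdf on the path, and a hypothetical urls.txt with one URL per line):

    import subprocess
    import time

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    times = []
    for i, url in enumerate(urls):
        start = time.time()
        # Fetch + render + write a PDF; the elapsed wall time is
        # our stand-in "response time" for that page.
        subprocess.call(["wkhtmltopdf", "--quiet", url, "/tmp/page%d.pdf" % i])
        times.append(time.time() - start)

    times.sort()
    ninetieth = times[max(0, int(0.9 * len(times)) - 1)]   # rough nearest-rank
    print("mean %.2fs, 90th percentile %.2fs"
          % (sum(times) / len(times), ninetieth))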
Bob_Robertson

Nov 18, 2010
7:05 PM EDT
The first thing I saw was, "The ~200 line Linux kernel patch that does Windows"
hkwint

Nov 18, 2010
7:14 PM EDT
Quoting:"The ~200 line Linux kernel patch that does Windows"


Yeah, the new 'small' Wayland-server really is a one-of-its-kind miracle of efficiency, isn't it?

Dino: I was trying to find out how FF3.6 / FF4.0 compare to each other, so I don't see the point in using wget?
dinotrac

Nov 18, 2010
7:57 PM EDT
Ah. No, it wouldn't be much use for that, would it?

Remember -- I'm old. I don't have to make sense.
Sander_Marechal

Nov 19, 2010
6:48 AM EDT
Quoting:Indeed, making a local copy of the webpages and serve them through local Apache might be a good idea, but it requires downloading all resources needed for a single webpage. My skills are not up to the task, sadly.


What skills? `wget -m` is all you need.
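
(If a page pulls in images, CSS, and scripts, you'll also want the page requisites, so probably more like `wget -m -p -k`: -p grabs the requisites and -k converts the links for local viewing, if I remember the flags right.)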
hkwint

Nov 19, 2010
11:32 AM EDT
Again, if I want to benchmark Firefox, how is wget going to be of any help?
gus3

Nov 19, 2010
12:33 PM EDT
Because if you're going to benchmark Firefox, one variable you want to reduce and control is network latency.
dinotrac

Nov 19, 2010
2:08 PM EDT
Hans --

I feel your pain. In the old days, we could do a lot with expect, but I don't think that works so well in the GUI world. I know that QA folks have a bunch of "auto drivers". Don't know if any are free software. They would have some decent potential -- especially if you can timestamp request out and request complete. If you've got all the start and stop times, you should be able to calculate response times and get all the statistics you could want.

It will, of course, include some stuff that isn't strictly Firefox, but... especially if you are serving the pages up yourself, you can probably do a series of runs (not a single run) and get scatterplots -> correlations -> etc. that will give you interesting info.
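
If a driver can hand you the numbers, the back-end arithmetic is easy enough (a sketch in Python 3, assuming a hypothetical runs.log with one "<concurrent_load> <mean_response_seconds>" pair per line):

    def pearson(xs, ys):
        # Plain Pearson correlation coefficient, no libraries needed.
        n = float(len(xs))
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    with open("runs.log") as f:
        runs = [tuple(map(float, line.split())) for line in f]
    loads = [r[0] for r in runs]
    resps = [r[1] for r in runs]
    print("correlation(load, response) = %.3f" % pearson(loads, resps))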
hkwint

Nov 19, 2010
11:16 PM EDT
2.6.36 kernel patch is here (for those of you who're not running git)

http://www.glasen-hardt.de/wp-content/uploads/2010/11/sched_...

Also applies against TuxOnIce-sources 2.6.36 (the one I'm currently using).
mmelchert

Nov 20, 2010
2:50 PM EDT
thanx for the link to the 2.6.36 patch, hans!

Becoming a member of LXer is easy and free. Join Us!