OK Doing the math leaves me dumbfounded....

Story: Linux down time may be due to missing documentation or changing ...
Total Replies: 7
Author Content
dinotrac

Jun 08, 2006
5:09 PM EDT
Quoting: Windows Server 2003 had nearly 20 percent more annual uptime in similar deployment scenarios over Linux. ... The Yankee Group found that corporate Linux, Windows and Unix servers experience on average three to five failures per server per year, resulting in 10.0 to 19.5 hours of annual downtime for each server.


Ok...lessee...

10-19.5 hours is less than 1 full day.

There are 365 days in the year, so downtime is less than 1/365 of the year, which works out to an uptime greater than 99.7%.

So..............

How does Windows 2003 have 20% more than an uptime that is already greater than 99.7%?

Doesn't inspire much confidence in any of the "facts" they provide, does it?
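
Quick sanity check in Python, for anyone who wants to run the numbers themselves (purely illustrative; the downtime figures are just the ones quoted above):

# Yankee's own numbers: 10.0 to 19.5 hours of downtime per server per year
hours_per_year = 365 * 24  # 8760
for downtime_hours in (10.0, 19.5):
    uptime_pct = 100.0 * (1 - downtime_hours / hours_per_year)
    print(f"{downtime_hours} h down -> {uptime_pct:.2f}% uptime")
# 10.0 h down -> 99.89% uptime
# 19.5 h down -> 99.78% uptime

Either way, uptime is already north of 99.7%, so there's no room left for anyone to be "20 percent" better.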





grouch

Jun 08, 2006
5:21 PM EDT
dinotrac:

The Yankee Group never allows facts to interfere with their conclusions.

"The customers who received these 1,500 letters from SCO have been told this isn't going anywhere," DiDio told TechNewsWorld. "I don't think anyone should listen to such empty assurances."-- Laura DiDio, 2003-07-22 http://www.technewsworld.com/perl/story/31166.html
dek

Jun 08, 2006
5:32 PM EDT
Typical DiDiot tripe! Those who read Yankee Group postings regularly will understand that she needs to be taken with a whole bag of salt!

I wanna be an anal-yst when I grow up!! Only thing is I might have an attack of conscience . . . scary thought!!

Don K.
stevem

Jun 09, 2006
3:17 AM EDT
dinotrac:

What it means is that over a 365-day period, according to Yankee, Windows servers have 2-4 hours more uptime than Linux. It's all in how you do the math and phrase the words. :-)

The facts they have are probably correct; the presentation of them is heavily biased.

Without reading the article: do they include the difference between scheduled and unscheduled downtime? 'Cause in my world the 'dows servers are forever going down for scheduled outages. Solaris and Linux servers? Almost never.
dinotrac

Jun 09, 2006
3:32 AM EDT
stevem:

>It's all in how you do the math and phrase the words. :-)

Mebbe so, but math and words have meaning. If 20% more uptime means 2-4 hours, then the server can't be running more than 10-20 hours all year long.
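
To spell that out, here's the same back-of-the-envelope arithmetic in Python (a rough sketch, working only from the 2-4 hour figure above):

# If 2-4 extra hours really were "20% more uptime", the baseline would have to be tiny:
for extra_hours in (2, 4):
    implied_total_uptime = extra_hours / 0.20  # 20% of X = extra_hours, so X = extra_hours / 0.20
    print(f"{extra_hours} extra hours -> total uptime of only {implied_total_uptime:.0f} h/year")
# 2 extra hours -> total uptime of only 10 h/year
# 4 extra hours -> total uptime of only 20 h/year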

Your point about scheduled vs. unscheduled downtime is right on target, and actually supports one of the article's contentions. When you bounce a system all the time, it becomes routine.
JaseP

Jun 09, 2006
5:23 AM EDT
Well, since the Yankee Group did the study...

They could have hired a Linux sys admin with irritable bowel syndrome... that would account for a good hour of the 2-4 hours...
richo123

Jun 09, 2006
5:41 AM EDT
My own unscientific study (no M$ checks likely for me):

Bought a P4 box in 2004; installed RH8; left running; two years later:

uptime

9:35am up 710 days, 18:47, 1 user, load average: 0.12, 0.14, 0.11

System then crashed due to hardware issues (graphics card). Replaced and rebooted:

uptime

8:31am up 44 days, 11:32, 1 user, load average: 0.00, 0.00, 0.00

Software-related downtime since purchase: 0 seconds.

Bottom line:

Save money and buy Linux.

Tale of a wistful Linux support person:

http://www.dslreports.com/forum/remark,16261099
devnet

Jun 09, 2006
6:52 AM EDT
I agree...I've had a Windows 2000 server at work running side by side with a Linux server (CentOS).

Windows Uptime: 42 days
CentOS Uptime: 147 days

Windows Reboots in past 60 days: 3
CentOS Reboots in past 60 days: None

I would have greater uptime but I only started working here about 180 days ago :)
