Gee, in 32+ years time_t will still be 4 bytes?

Story: Y2K-like bug to hit Linux computers
Total Replies: 3
chris

May 07, 2005
10:14 AM EDT
While it's true that we face this "problem", it will vanish over time as machines are upgraded and/or the definition of time_t changes to 64 bits. On many systems it already has. Embedded systems that may not change could be susceptible, if they use time_t to represent time, AND they care about time, AND they're still in use in 30+ years.

However, the reason we call time_t by that name rather than simply "int" is so that the underlying type is not hard-coded into our applications. We use a time_t in our programs, and the platform defines time_t to be of an appropriate type.

Even on 32-bit machines, one can construct a 64-bit number by combining two 32-bit numbers. Most C compilers already offer such a type, called "long long" or __int64. Thus, by changing ONE LINE OF CODE we can have 64-bit time values, by defining time_t to be a "long long" instead of a "long".
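
A rough sketch of the idea (the real typedef lives in the C library's platform headers, not in application code, so "one line" is a simplification, and the names below are made up for illustration):

    /* Hypothetical illustration of the one-line change described above.
       On a real system time_t comes from the C library's own headers. */
    #include <stdio.h>

    /* old:  typedef long my_time32_t;      32 bits, overflows in 2038 */
    typedef long long my_time64_t;       /* hypothetical 64-bit version */

    int main(void)
    {
        printf("sizeof(long)        = %zu bytes\n", sizeof(long));
        printf("sizeof(long long)   = %zu bytes\n", sizeof(long long));
        printf("sizeof(my_time64_t) = %zu bytes\n", sizeof(my_time64_t));
        return 0;
    }

On a 32-bit box that prints 4, 8, 8; on AMD64 a plain long is already 8 bytes, which is why the sizeof(time_t) mentioned below comes out as 8 there.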

On machines where the natural word size is already 64 bits, time_t has already changed. For example, on my AMD64 machine the size of a time_t value is 8 bytes, and so the problem is ALREADY solved. How many people are so naive as to think that in 32 years the world will still predominantly be running on 32-bit machines? Just for comparison's sake, consider what a signed 8-byte value can hold. Its largest positive value is 2^63-1, meaning that with a 64-bit number we have 9223372036854775807 seconds before overflowing. Doing the math again, with 31536000 seconds per year, we can represent time without overflowing for 292,471,208,677 years. Yes, that's 292 BILLION years. But if that's not long enough, perhaps it'll comfort you to put things in perspective: our sun is expected to burn out in approximately 5 billion years. Have a nice day. :)
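
For anyone who wants to check that arithmetic, a throwaway program along these lines reproduces it (31536000 is just 365 * 24 * 3600, so leap years are ignored):

    /* Quick check of the 64-bit overflow arithmetic above. */
    #include <stdio.h>

    int main(void)
    {
        long long max_seconds      = 9223372036854775807LL;  /* 2^63 - 1 */
        long long seconds_per_year = 365LL * 24 * 60 * 60;   /* 31536000 */

        printf("seconds until overflow: %lld\n", max_seconds);
        printf("years until overflow:   %lld\n",
               max_seconds / seconds_per_year);   /* ~292471208677 */
        return 0;
    }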

peragrin

May 07, 2005
1:55 PM EDT
Of course they are just trying to throw FUD against Linux to see what sticks.

Notice how the only example of a problem today is mortgage calculations?

Now, in 30 years do we really expect the bulk of 32-bit Intel and AMD processors to still be in use?

By the end of the decade most shipping systems will be 64-bit, and Intel chips aren't designed to last the way big iron does. Those systems didn't get an extra gig of RAM just to run the latest OS.

AnonymousCoward

May 07, 2005
6:08 PM EDT
Thirty-three years ago, 8-bit 6502s, 6809s and Z80s ruled the roost, and 16-bit CPUs like the 68000 were the new kids on the block, only just starting to ship. That 8-bit native word size is one quarter of the 32-bit word size of most modern desktop PCs.

If the trend continues, we'll be using a native word size of 128 bits by then, able to slice time up into nanoseconds across a signed span of many billions of years, and 256-bit machines will be making their debut.
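
Putting a rough number on that (a plain signed 128-bit count of nanoseconds is my own framing, not something spelled out above):

    /* Rough estimate of the span of a signed 128-bit nanosecond counter.
       Compile with -lm for ldexp(). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double max_ns = ldexp(1.0, 127);            /* ~2^127 nanoseconds */
        double years  = max_ns / 1e9 / 31536000.0;  /* 365-day years      */
        printf("about %.1e years at nanosecond resolution\n", years);
        return 0;
    }

That comes out around 5e21 years, so "many billions" is a considerable understatement.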

Even if not, chris's suggestion will work. Not as simply as he puts it, of course, because I still see code with time values stored in longs, and there will be data files where the date is stored in a 32-bit binary field, but it's hardly the trauma that Y2K turned out to be, let alone was predicted to be.
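
A hypothetical illustration of the kind of code meant here (the struct and field names are made up, not taken from any real program):

    /* Hypothetical examples of code that stays broken even after
       time_t itself becomes 64 bits. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* A fixed on-disk record format with a 32-bit timestamp field:
       the file layout, not the compiler, pins this field to 32 bits. */
    struct record_on_disk {
        uint32_t created;   /* seconds since the epoch, wraps in 2038 */
        uint32_t flags;
    };

    int main(void)
    {
        /* Stuffing a time_t into a plain long truncates wherever
           long is still 32 bits, however wide time_t becomes. */
        long last_login = (long) time(NULL);

        struct record_on_disk rec;
        rec.created = (uint32_t) time(NULL);
        rec.flags   = 0;

        printf("last_login = %ld, rec.created = %lu\n",
               last_login, (unsigned long) rec.created);
        return 0;
    }

The first case goes away on platforms where long itself grows to 64 bits; the second needs the file format itself to change.
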
pat

May 08, 2005
3:11 AM EDT
There is already an excellent library for doing date/time manipulation: http://cr.yp.to/libtai.html .
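
For the curious, a rough sketch of what using it looks like. This is from memory of libtai's tai.h, so treat the exact names (tai_now, tai_pack, tai_approx, TAI_PACK) as assumptions and check the real header before copying anything; the point is just that struct tai carries a 64-bit second count, so there is no 2038 rollover:

    /* Sketch of libtai usage; verify against the real tai.h. */
    #include <stdio.h>
    #include "tai.h"        /* from http://cr.yp.to/libtai.html */

    int main(void)
    {
        struct tai now;
        char packed[TAI_PACK];      /* 8-byte external format */

        tai_now(&now);              /* current time as a 64-bit TAI label */
        tai_pack(packed, &now);     /* portable packed encoding */

        printf("TAI seconds label: %.0f\n", tai_approx(&now));
        return 0;
    }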
