People were talking about the millennium bug recently. Well, it turns out a lot of the fixes just delayed the problem to 2020.

"Programmers wanting to avoid the Y2K bug had two broad options: entirely rewrite their code, or adopt a quick fix called “windowing”, which would treat all dates from 00 to 20, as from the 2000s, rather than the 1900s. An estimated 80 per cent of computers fixed in 1999 used the quicker, cheaper option."

"Those systems that used the quick fix have now reached the end of that window, and have rolled back to 1920. Utility company bills have reportedly been produced with the erroneous date 1920, while tens of thousands of parking meters in New York City have declined credit card transactions because of the date glitch."
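The windowing fix quoted above can be sketched as follows (a hypothetical `window_year` helper, not code from any actual affected system):

```python
def window_year(yy, pivot=20):
    """Expand a two-digit year using a fixed pivot window.

    Two-digit values from 00 up to the pivot are read as 20xx,
    everything above the pivot as 19xx -- the quick Y2K fix
    described in the article above.
    """
    return 2000 + yy if yy <= pivot else 1900 + yy

print(window_year(99))  # -> 1999
print(window_year(5))   # -> 2005
print(window_year(20))  # -> 2020
print(window_year(21))  # -> 1921: past the window, dates roll back to the 1900s
```

A system whose window ended earlier (say at 19) would already read "20" as 1920, which matches the reported utility-bill and parking-meter failures.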

newscientist.com/article/22292


@radikalgrafitio I find it so weird that they were/are encoding dates in decimal. Y2038 makes more sense to me.

@alcinnz @radikalgrafitio Binary coded decimal was very widespread in the 70s and 80s, everything from mainframes to 6502s had hardware support for it. It makes debugging the hex data easier and saves a few lines of code on conversion for display. 😀
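For illustration, packed BCD stores one decimal digit per nibble, so a two-digit year fits in a single byte and reads naturally in a hex dump (a sketch of the encoding, not any particular machine's routine):

```python
def bcd_to_int(byte):
    """Decode one packed-BCD byte: high nibble = tens digit, low nibble = ones."""
    return (byte >> 4) * 10 + (byte & 0x0F)

def int_to_bcd(n):
    """Encode 0..99 as one packed-BCD byte."""
    assert 0 <= n <= 99
    return ((n // 10) << 4) | (n % 10)

# The year 99 is stored as the byte 0x99, so it reads directly off a
# hex dump -- the debugging convenience mentioned above.
print(hex(int_to_bcd(99)))  # -> 0x99
print(bcd_to_int(0x20))     # -> 20
```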

@alcinnz @radikalgrafitio
Encoding dates in decimal or ASCII is the most reasonable option. Epoch timestamps are deeply problematic because the number of seconds in each year isn't known very far in advance.

@alcinnz @radikalgrafitio
(POSIX avoids the problem by saying all years have the same number of seconds, but the seconds may be of different lengths. This, of course, introduces a different problem.)
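That POSIX fiction of fixed-length days is what makes epoch-to-calendar conversion pure arithmetic; a small illustration using Python's standard library:

```python
import calendar

# POSIX defines time_t as if every day were exactly 86400 seconds, so
# converting a UTC calendar date to an epoch timestamp needs no
# leap-second table at all.
t = calendar.timegm((2017, 1, 1, 0, 0, 0))
print(t)          # 1483228800
print(t % 86400)  # 0: every UTC midnight is an exact multiple of 86400,
                  # even though a real leap second was inserted just
                  # before this instant (2016-12-31T23:59:60 UTC)
```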

@mathew @alcinnz @radikalgrafitio yes, but no. Sure, because of leap seconds each year has a variable length. It _will_ be a problem if we need second-accurate timestamps, but for the purposes mentioned above we actually don't. In that case, using epoch timestamps will be less error-prone.

@peexea @alcinnz @radikalgrafitio
You seem to think that Y2K problems only affected systems that didn't need sub-second accuracy. I believe you're mistaken, not least because WWVB has Y2K problems and GPS had very similar windowing issues which had to be fixed.

@mathew @alcinnz @radikalgrafitio it _will_ have subsecond accuracy inside its frame of reference. Only the conversion to human-readable timestamps will be inaccurate.

@mathew @alcinnz @radikalgrafitio my point is that machine time does not have to be correlated with human time. Machine time has to be accurate and easily manageable _inside_ the network.

Needing to store a table of leap-second adjustments seems reasonable, because machine-to-human time conversions are relatively rare.

@peexea @alcinnz @radikalgrafitio
POSIX doesn't require leap second table support. You could implement your own epoch timestamp library separately from POSIX, I guess, but doing so in a way which will work with future timestamps is still going to be a nuisance, and you'll have to be careful that you don't use the standard calls anywhere. Or you could do what GPS does: count seconds on an atomic-time basis with no leap seconds, and let your timestamps drift away from UTC.

Or, you know, you could just use ISO-8601 ASCII.
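For what it's worth, a four-digit-year ISO-8601 string is self-describing and sorts chronologically as plain text (shown here with Python's stdlib as an example):

```python
from datetime import datetime, timezone

ts = datetime(2020, 1, 7, 12, 30, 0, tzinfo=timezone.utc).isoformat()
print(ts)  # -> 2020-01-07T12:30:00+00:00

# Fixed field order means lexicographic order equals chronological
# order, and a four-digit year has no window to fall off in 2020.
later = datetime(2020, 1, 8, 0, 0, 0, tzinfo=timezone.utc).isoformat()
assert ts < later
```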

@mathew @peexea @alcinnz @radikalgrafitio also, a lot of these issues were in COBOL applications. What makes COBOL different is that it merges variable definitions and output formatting.

You can't just change a variable to store things in a different format, because that will change the layout of the reports. So they were stuck with 2 digits for the year unless they wanted to change the output, and that would have taken a lot more work.

We can do future programmers a favor and redefine epoch timestamps as the original "number of seconds since 00:00:00 UTC on 01.01.1970". Such timestamps would be completely independent of calendar changes, but would make translation to a human-readable format harder.

IMHO, it is perfectly ok for enterprise and embedded systems.
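A rough sketch of what that redefinition implies (the constant and helper below are hypothetical; 27 is the number of leap seconds inserted into UTC from 1972 through 2016): a count of actual elapsed SI seconds since 1970 runs ahead of the POSIX clock, and converting back requires a table that keeps growing.

```python
# 27 leap seconds were inserted into UTC from 1972 through 2016, so by
# 2017 a count of actual elapsed seconds since the 1970 epoch runs 27
# ahead of the POSIX value (ignoring pre-1972 "rubber seconds").
LEAP_SECONDS_INSERTED = 27  # valid as of 2017-01-01; the table grows over time

def real_to_posix(t_real, inserted=LEAP_SECONDS_INSERTED):
    """Hypothetical helper: recover a POSIX timestamp from an
    elapsed-SI-seconds count. A real implementation needs the full
    leap-second table, including entries published after deployment."""
    return t_real - inserted

posix_2017 = 1483228800  # POSIX value of 2017-01-01T00:00:00Z
print(real_to_posix(posix_2017 + 27))  # -> 1483228800
```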

FLOSS.social