UTC vs. UT1 time and other nuances (2020) (mperdikeas.github.io)
78 points by UpstandingUser on Aug 13, 2022 | 30 comments


TAI is also a kind of approximation, since time elapses differently depending on gravitational pull, due to relativistic effects. It is defined as the physical time elapsing on a geodesic, an imaginary surface covering the earth at roughly sea level with equal gravitational pull at all points.

In reality, physical time elapses at slightly different “rate” depending on where you’re located on the surface of the earth (or possibly not on the surface of the earth), also due to nonuniformities in the earth’s mantle affecting the gravitational field.

TAI is taken as the average of the 400-or-so contributing atomic clocks, adjusted for their relative height above sea level (and possibly other factors), and taking into account the signal propagation time between the clocks.

Compared to time in the reference frame of the sun, for example (which may be taken as a solar-system-wide timekeeping standard), TAI wiggles around that solar time according to Earth's yearly cycle around the sun.

Of course, those variations are much smaller than DUT1 (at least close to earth’s reference frame).


TAI and UTC are based on the SI second. The SI second is specifically defined in terms of cesium atoms at no temperature, no velocity, and no elevation (aka, on the geoid). Consequently TAI and UTC are immune from relativistic effects, by definition.

On the other hand, the physical clocks in the laboratory are not immune and that's why they have to be corrected for a dozen factors, the largest of which is usually gravitational redshift. To appreciate the complexity and precision of this correction see this NIST paper:

https://tf.nist.gov/general/pdf/2883.pdf
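
For a sense of scale, here is a back-of-the-envelope sketch of the gravitational redshift correction for a laboratory clock sitting above the geoid; the elevation is an assumed example, and the Δf/f ≈ gΔh/c² approximation only holds for small height differences:

    # Rough size of the gravitational redshift correction for a clock
    # some height above the geoid (illustrative numbers only).
    g = 9.80665        # m/s^2, standard gravity
    c = 299792458.0    # m/s, speed of light
    dh = 1650.0        # m, assumed elevation of the lab above the geoid

    fractional_shift = g * dh / c**2        # ~1.8e-13
    print(fractional_shift * 86400 * 1e9)   # ~15.6 ns/day the elevated clock runs fast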


> TAI and UTC are based on the SI second. The SI second is specifically defined in terms of cesium atoms at no temperature, no velocity, and no elevation (aka, on the geoid). Consequently TAI and UTC are immune from relativistic effects, by definition.

This is not quite correct. The definition of the SI second does not make any requirements or assumptions about the state of motion or location of the cesium atoms. The only requirement is that the device that measures the frequency of the cesium atom hyperfine transition is at rest relative to the atoms themselves and spatially co-located with them. That is what ensures that no relativistic effects are involved in the measurement.

The definitions of TAI and UTC are not simply based on the SI second, but on the SI second as recorded by clocks on the geoid that are at rest relative to the rotating Earth. That extra qualifier is why measurements recorded by clocks not on the geoid have to be adjusted.


I think you might be misunderstanding. Clocks must necessarily have a frame of reference. TAI's frame of reference is imaginary (geoid), and in reality we calculate it from an average.

Because TAI's frame of reference is the geoid it will experience relativistic effects based on earth's motion viewed from any other frame of reference (in OP's example, from the reference of the sun)

Clocks cannot exist without a frame of reference. Accurate timekeeping necessarily involves tracking spatial information as well.


The current SI second[0] does not make any mention of Earth. Elevation does not factor into the definition.

[0]: https://www.bipm.org/documents/20126/41483022/SI-Brochure-9-... page 16


Rather than geodesic I think you meant geoid.


You’re right of course, can’t edit it anymore.


> Essentially, UTC is a compromise devised to satisfy the needs of two communities of users: astronomers and navigators; physicists and engineers

I think the author is wrong about astronomers, for whom UTC is an unwanted complication. Astronomers use sidereal time, which is unrelated to the apparent motion of the sun. For short intervals, physicists and engineers may as well use atomic time.

The WP article on UTC has a section titled "Rationale", but doesn't explain what problems/compromises UTC was supposed to address. It's worthy of note, however, that of the three bodies involved in the first version of UTC, two were national naval observatories.


UT1 (and therefore UTC) is however a better starting point for calculating sidereal time than TAI, since it is connected to the rotation of the Earth, whereas TAI is not. This way, the relation used to calculate the (L)ST depends only on fixed constants and the time and date.

Also UT1 is actually measured based on distant celestial objects, because it is easier to measure those at very high precision than the precise transit time of the Sun.
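
As a rough illustration of that dependence on fixed constants, here is a minimal sketch of Greenwich Mean Sidereal Time computed from a UT1 Julian date using the standard polynomial expression (coefficients as in the Astronomical Almanac / Meeus; the example date is illustrative):

    # Minimal sketch: GMST in degrees from a Julian date on the UT1 scale.
    def gmst_degrees(jd_ut1):
        d = jd_ut1 - 2451545.0     # days since J2000.0 (UT1)
        t = d / 36525.0            # Julian centuries since J2000.0
        theta = (280.46061837
                 + 360.98564736629 * d
                 + 0.000387933 * t**2
                 - t**3 / 38710000.0)
        return theta % 360.0

    # Example: 2022-08-13 00:00 UT1 is JD 2459804.5
    print(gmst_degrees(2459804.5))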


That depends on whether UTC alone is an accurate enough estimate of UT1 for you. If you only need to know UT1 to within +/- 0.9 seconds, then you can just use UTC and you're done.

If you'd like greater accuracy, UTC isn't a great starting point, because UTC's leap seconds complicate getting an accurate estimate of UT1. It's hard to tell if your local idea of UTC has had the latest leap second applied or has had false leap seconds applied (or worse, has accidentally received a leap-smeared source, which can't be backed out to convert to UT1).
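
For reference, the conversion in question is just an addition, as in the sketch below; the DUT1 value here is made up, real values come from IERS Bulletin A/B, and the hard part described above is being sure the UTC input itself is right:

    # Sketch: UT1 from UTC plus a published DUT1 = UT1 - UTC value.
    dut1 = -0.07                # seconds, illustrative only (see IERS bulletins)
    utc_unix = 1660348800.0     # some UTC instant as a Unix-style second count
    ut1_unix = utc_unix + dut1  # only meaningful if utc_unix is genuinely UTC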

If you're going to produce a more accurate estimate of UT1, either using interpolations of IERS Bulletin B predictions or a model based on the daily USNO ultra-rapid UT1 VLBI measurements, the leap seconds don't do anything to help you, but they can mess up your calculations because you need to correctly and consistently back them out.

So essentially the practice of leap seconds helps applications that need UT1 to within better than an hour or two (otherwise just using TAI with an offset, or TAI plus a static linear correction, is good for thousands of years), but hurts applications that need subsecond UT1 accuracy, hurts anyone that needs consistent or accurate durations, and creates a lot of software bugs, including ones that can show up when there hasn't actually been a leap second.

I would argue that today the number of applications that need UT1 at all is much smaller and less significant than the number that need subsecond-consistent durations or times. And of the applications that want UT1, most either don't need leap seconds at all (e.g. TAI, or TAI plus the simplest static linear correction, is enough) or would prefer better-than-1-second accuracy, where at best leap seconds don't help, and in practice they add a lot of complexity and still sometimes cause failures.


We cannot predict Earth's rotation to a sufficient degree into the future to get away with TAI + constant or TAI + some equation. This is the reason we don't know the precise time of all leap seconds decades into the future. Currently it looks like we won't have any leap second for the foreseeable future, since TAI+offset and UT1 tick quite closely.

However, the less-than-0.9 s offset to UT1 is often good enough to allow accurate pointing for all except the highest-accuracy observations (and for those you are essentially part of the pipeline that determines UT1 in the first place). A time difference of 1 s corresponds to an error in position that is often equivalent to the pointing accuracy of the telescope/dish (or not too far off anyway). Whereas beyond 10 s or so you might not even be able to see the target within your scope anymore.

Not sure about the points about the complications of calculating the true UT1; I don't really see how those would come up, and at least for Bulletin B it is already included, so you don't have to back anything out.


> to a sufficient degree into the future

Depends on the application. If you need the lights to go down at sundown, a constant is good for thousands of years, and a rate correction is good for a fair bit longer.

> Currently it looks like we won't have any leap second for the foreseeable future

In fact, bootstrapping from recent residuals suggests there is a reasonable chance of a negative leap second in the next several years. Each ordinary leap second causes substantial disruptions and system outages, and there has never been a negative leap second, so it's reasonable to expect substantial issues.

> Whereas beyond 10s or so you might not even be able to see the target within your scope anymore

If your pointing accuracy is 10% of your FOV, your life will be hard; why make it harder by not using an accurate UT1? Besides, for accurate pointing you'll want the pole offsets that are also in Bulletin B. Once you're using an accurate source of the UT1 offset, the leap seconds at best do nothing.

> Not sure about the points about the complications of calculating the true UT1

To calculate UT1 using Bulletin B or USNO observations you need to have an accurate UTC. Leap seconds make UTC failure-prone. It's extremely easy to miss leap seconds, gain false leap seconds, or (as of recently) accidentally end up with leap-smeared time.

> so you don't have to back anything out

They need to be backed out to linearly interpolate between entries to avoid a discontinuity.
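
A minimal sketch of what that backing-out looks like, with made-up but plausible numbers around the 2016-12-31 leap second: the tabulated UT1-UTC jumps by one second at the leap, while UT1-TAI stays smooth and is safe to interpolate linearly:

    # (MJD, UT1-UTC [s], TAI-UTC [s]) -- values illustrative
    table = [
        (57753, -0.40, 36),   # 2016-12-31, before the positive leap second
        (57754,  0.59, 37),   # 2017-01-01, after it
    ]

    # Convert to UT1-TAI, which has no discontinuity at the leap.
    a = table[0][1] - table[0][2]   # -36.40
    b = table[1][1] - table[1][2]   # -36.41
    midpoint = (a + b) / 2          # linear interpolation is now well behaved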


The pole offsets are on the order of hundreds of milliarcseconds, whereas each second of offset between UT1 and UTC produces an offset of (worst-case) ~15 arcseconds. Worlds of difference. Subtracting all leap seconds would yield an error of ~6 arcminutes, which can ruin your day if you are using any instrument with a small FoV (I know, since I've had that issue before: due to a hardware problem the telescope clock had lost a couple of clock pulses and was 10-20 s off compared to the observatory master clock).
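
For the arithmetic behind those numbers (a sketch, assuming the error is dominated by the hour-angle term): the sky turns 360 degrees per sidereal day of about 86164 s, so each second of clock error moves the apparent hour angle by roughly 15 arcseconds times the cosine of the declination:

    import math

    def pointing_error_arcsec(dt_seconds, declination_deg=0.0):
        rate = 360.0 * 3600.0 / 86164.0905   # ~15.04 arcsec per second of time
        return rate * dt_seconds * math.cos(math.radians(declination_deg))

    print(pointing_error_arcsec(1.0))    # ~15 arcsec for a 1 s clock error
    print(pointing_error_arcsec(27.0))   # ~6.8 arcmin if the 27 leap seconds since 1972 were dropped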

> They need to be backed out to linearly interpolate between entries to avoid a discontinuity.

Ah... if you want to apply it on the day of the leap second. Fair. I meant more in general, as in all other cases you can just more or less blindly apply it.

In practice I can't say I have been bitten by UTC+leap-second issues before, nor have I even heard of observatories being bitten by this. Now GPS epoch rollover - that I have personally experienced issues with.

> In fact, bootstrapping from recent residuals suggests there is a reasonable chance of a positive leapsecond in the next several years. Each ordinary leapsecond causes substantial disruptions and system outages, and there has never been a positive leap second so it's reasonable to expect substantial issues.

We'll see if there will be a negative leapsecond in the future. But so far it also seems in line with no leapsecond at all.


How in the world did astronomers & their software developers manage to push off onto the rest of the software world (and society at large) the atrocious concept of leap seconds.


Not astronomers but naval navigators. UTC is a direct descendant of the time signals broadcast by naval observatories as a companion to their naval almanacs. It ended up as general civil time pretty much because it was there and there wasn't much competition around.

Besides, when UTC was standardized and adopted in the sixties, software wasn't really much of a concern.


UT1 might not work for engineering applications with tight tolerances, as the length of a UT1 second varies.

Personally, in my non-expert opinion, we should stop adding or subtracting leap seconds from UTC (that is, UTC would thereafter be at a constant offset from TAI), and leap seconds would instead be added to the timezone DB. That way local time (like Aug 14 2022 xx:53) would still retain the familiarity with 12:00 being noon, at least at some point in each timezone, but calculations with UTC seconds would not need to bother with leap seconds.


If people don't like leap seconds, there is no need to redefine UTC; they can just use TAI today. That gives them the additional advantage of not having to handle leap seconds in the past, because software that handles historical UTC timestamps will always need to know about the leap seconds that have already happened.


Spoken like someone who hasn't attempted this. :)

All our computer and protocol infrastructure is set up to handle and distribute UTC. Everything else you talk to is speaking UTC.

Trying to use TAI in a UTC world doesn't save you from dealing with leap seconds in the slightest: you get all the leap-second-induced problems just obtaining TAI from your UTC feeds, and then you get them again at every boundary where you need to communicate with something else that is using UTC.

At least if you use UTC across the board you'll only fail around leap seconds (or false leap seconds created by issues in leapsecond distribution infrastructure). If you attempt to use TAI you'll get those failures plus many extra ways to fail.

> that handles historical UTC timestamps will always need to know about the leap seconds that have already happened

That's static data that can be hardcoded and tested, massively easier and safer than events which are being inserted (or accidentally failed to be inserted) in real time.
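
A minimal sketch of that static data, with only the most recent entries and a hypothetical lookup helper (the dates and offsets shown are the published TAI-UTC values):

    # Historical leap seconds as static data: first UTC date on which each
    # TAI-UTC offset took effect (table abbreviated to the last few entries).
    LEAP_TABLE = [
        ("2012-07-01", 35),
        ("2015-07-01", 36),
        ("2017-01-01", 37),   # most recent leap second as of this thread
    ]

    def tai_minus_utc(utc_date):
        offset = 34   # value in force before the first entry above (since 2009)
        for start, off in LEAP_TABLE:
            if utc_date >= start:   # ISO date strings compare correctly
                offset = off
        return offset

    print(tai_minus_utc("2022-08-13"))   # 37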

The big problems with leapseconds arise because of the realtime discontinuity and because the offset function changes in unpredictable ways.

In many applications there just aren't any old dates to begin with. E.g. in my code that points my telescopes (and determines where they're pointed from the encoders) I have a pile of nonsense to hopefully handle leap seconds, which may or may not actually work right when one happens, as it's really hard to test how other software will behave. But the same software never needs to handle times more than a day (or a few days) in the past.

If leap seconds stopped being issued, bugs in software impacting billions of people would just vanish. Not every bug-- some things might also mishandle historical leap seconds, but most of the issues come from realtime bugs. If you go too far back, times wouldn't have been known to 1 second in any case, so bugs with historical data would mostly be in the form of things breaking due to inconsistent time difference calculations.


I think the problem is that we already have the networks and protocols to synchronize time (NTP, PTP, etc.), but not for timezone DB data.


RFC 7808 specifies a TZDIST service.

But most operating systems already have their own mechanism to update their timezone database. This is not a problem. Shifting timezones by 15 minutes every few thousand years would work with current software.

All we need to do is to agree that UTC will have no more leap seconds.


Leap seconds are problematic for astronomers, who either are doing things that don't care (e.g. a star calibrated against the sky, where that calibration approximately incorporates whatever difference exists between UT1 and their local clock) or, where they do care, have to back the leap seconds out in order to apply a more accurate model of UT1.

It's actually quite tricky to back out leap seconds accurately, because the underlying time will be discontinuous for you and you can't reliably tell when leap seconds have or haven't been applied... you might even, without knowing it, be being fed leap-smeared time, which could even make your sidereal tracking wrong and can't be backed out because different smearers do it differently. Because leap seconds are infrequent, you also don't get many live-fire tests.
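
To make the smearing point concrete, here is a sketch of a 24-hour linear smear centered on the leap instant (roughly the published Google scheme; other operators use different windows or cosine-shaped smears, which is exactly why it can't be backed out blindly):

    # Fraction of a positive leap second absorbed by a linearly smearing
    # clock, as a function of time relative to the leap instant (sketch).
    def smear_fraction(seconds_from_leap, window=86400.0):
        if seconds_from_leap <= -window / 2:
            return 0.0               # smear not started yet
        if seconds_from_leap >= window / 2:
            return 1.0               # full extra second absorbed
        return 0.5 + seconds_from_leap / window

    print(smear_fraction(-43200), smear_fraction(0), smear_fraction(43200))  # 0.0 0.5 1.0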

I'm aware of multiple observatories that just shut down operations across leap seconds. The significant amount of work to handle leapseconds correctly and be confident you're correct isn't justified vs taking some downtime.


After reading the comments, it seems to me a theme is that everyone thinks somebody else is mistaken or misunderstanding the subject. This is not a complaint, but an observation on how difficult and slippery this subject is! I, at least, hardly understand it at all.


The author argues that only TAI would make sense to an alien civilization, but I don't think that's true at all. I think they miss the fact that it too is only defined within the reference frame of the Earth, and that it depends on many technically subtle aspects.

Especially if one only wants to communicate times between the civilizations, such as "meet here at 13:00 exactly three years from now". This is much easier to communicate based on rotation angles of the Earth than on an atomic time scale that would need to be transferred very precisely to be useful at all.


Unfortunately this continues the perception that UTC/GMT is a time zone. It isn't. A time zone is a geographical area where all the clocks show the same time, but the time that they show may vary - for instance, despite Microsoft's insistence, the UK is not in the "UTC timezone".

This may seem pedantic, and most of the time it's not important to understand the difference. But when this misunderstanding bites, it will take your leg off at the knee.


Even more confusing, the time printed by our computers may appear to be UTC, but is probably POSIX (or Unix) time. It is very close to UTC but ignores leap seconds. https://en.wikipedia.org/wiki/Unix_time

During a positive leap second, the POSIX second repeats itself (an alternative mental model could be that the given POSIX-second lasts 2 UTC-seconds). During a negative leap second, the given POSIX-second disappears (or alternatively, the given POSIX-second lasts 0 UTC-seconds).

On my Ubuntu box, the `date +%s` command prints the number of POSIX seconds since 1970-01-01, not the number of UTC seconds since 1970-01-01. To get the number of UTC seconds, we must add the number of intervening leap seconds since 1970-01-01.
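
To make the repeated-second behaviour concrete, here is what the mapping looks like around the 2016-12-31 positive leap second (the POSIX values are the standard epoch arithmetic; the middle row is the inserted UTC second):

    #   UTC 2016-12-31 23:59:59  ->  POSIX 1483228799
    #   UTC 2016-12-31 23:59:60  ->  POSIX 1483228800  (leap second)
    #   UTC 2017-01-01 00:00:00  ->  POSIX 1483228800  (same count again)
    import calendar
    print(calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0)))   # 1483228800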

I get a headache every time I have to think about this.


The often-made and only technically true claim that POSIX/Unix time "ignores leap seconds" is actually a major source of leap-second handling bugs, because it makes people think that POSIX time is TAI (with some offset). Instead, POSIX time is UTC with some seconds that have unusual lengths (0 or 2 SI seconds, as you note).

When you want to know the number of SI seconds between two dates in either posix or UTC you need an accurate table of past leapseconds. If you want to compute TAI from posix or UTC you need an accurate table of leap seconds that your UTC clock has applied (which may not be the same as the table of leap seconds, since you might have missed the most recent one!).


Your computer periodically syncs with other computers to compensate for time drift. POSIX is just an API and a set of conventions. What your computer syncs with (typically via NTP) is a bunch of very precise clocks. Without that, you'd have a lot of time drift. The reason POSIX ignores leap seconds is that a leap second is well below the margin of error that most computer clocks have. They simply are not precise enough. Without syncing they'd probably drift apart by many seconds within days or weeks.

That's also why the leap second correction is not a big deal because it just happens when computers sync. A few seconds correction is just a routine correction. Happens all the time.


> That's also why the leap second correction is not a big deal because it just happens when computers sync. A few seconds correction is just a routine correction. Happens all the time.

That may be how things work for someone's word-processor appliance, but on Linux/etc. NTPD or chrony continually estimate your local clock's drift, compensate for it, and change their polling rate as a function of uncertainty (e.g. from temperature effects). They save the estimated drift rate in a persistent file on disk (chrony can optionally compensate for temperature effects; see tempcomp in the manpage).

On some random Supermicro server here the chrony drift file says that the clock is slow by -33.208515 parts per million (not atypical for a non-temperature-compensated oscillator), with an uncertainty of 0.031571 in that measurement.
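
For scale, a sketch of what that drift figure would mean if it were never disciplined (simple arithmetic on the number quoted above):

    drift_ppm = -33.208515                    # from the chrony drift file above
    error_per_day = drift_ppm * 1e-6 * 86400  # seconds of error accumulated per day
    print(error_per_day)                      # ~ -2.87 s/day if left uncorrected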

As a result, other than leap seconds there is no discontinuity in your local time, no few-seconds correction like you suggest. Just a continuous clock that is steered smoothly to agree with UTC.

This is really important when you're dealing with subsecond events that must be synchronized among distributed systems.

Hosts on my network here that have no special handling of their time keeping will have time that agrees with each other within +/- 100 microseconds (and with UTC, for that matter, thanks to the local gps disciplined clock). Without a local GPS clock you'll get extra uncertainty from network path asymmetry, but that uncertainty is limited to the round trip time... which is a lot less than 1s if your network is at all usable interactively. :)

A 1 second jump would be gargantuan.

For many applications +/- 1 second is fine. But for others it isn't fine. Modern computers with network time available should have no problem being accurate much much better than 1 second.


The reason POSIX time ignores leap seconds was that it was considered simpler for userspace if you can do things like `time() % 86400` to get the time of day, etc., without needing to worry about leap seconds, essentially allowing userspace to be ignorant of them.
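
A small illustration of that convenience, assuming an ordinary POSIX clock: because every POSIX day is exactly 86400 counts, the time of day falls out of plain arithmetic with no leap-second table in sight:

    import time

    now = int(time.time())               # POSIX seconds since 1970-01-01 00:00:00 UTC
    hh, rem = divmod(now % 86400, 3600)  # works because POSIX days are always 86400 counts
    mm, ss = divmod(rem, 60)
    print(f"{hh:02d}:{mm:02d}:{ss:02d} UTC")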


This is also why I think it's silly that people make such a big deal out of leap seconds. Yes, for programmers it would be great if time flowed uniformly, but it doesn't, and it doesn't even stay consistent on your computer. Your time may go back and forth several seconds depending on your RTC and its condition, and leap seconds should be treated the same.



