
Time Standards Reference

Motivation

Measuring and calculating time are important everywhere. It is astounding, then, that few seem to actually understand what they're doing. Programmers don't know what to do, creating bad software, and average people don't understand the issues—and so don't understand why they should care. The world is rife with abominations of chronology: subtracting UNIX time stamps, using GMT instead of UTC, and so on.

These errors are actively inimical to science, and the horrible reality of the situation is that most data, most simulations, most everything cannot be interpreted accurately at scales sometimes as large as minutes[1]. I have come to realize that there needs to be a centralized introduction to the issues. Such a thing has not previously existed, in part because the usual attempts at centralized information (e.g. Wikipedia) get it dead wrong[2]. This is what I optimistically present here[3].

This article aims to be a complete, but simple, description to bootstrap a correct understanding of modern timekeeping. As always, corrections/suggestions are welcome.

[1]In practice, data based on UTC will be mostly right, unless the span of data included leap seconds, in which case most software gets it terribly wrong.
[2]Wikipedia is typically extremely accurate (ref, ref, ref, ref), but anecdotally their time-related articles are sub-average.
[3]Note: for clarity, I will sometimes use a source's wording without in-line attribution.

Counting Seconds

The unqualified word "second" (s) has a precise, scientific definition: since 1967, it is 9 192 631 770 cycles of radiation from a particular quantum hyperfine transition in a Caesium-133 atom. That is its definition: it's the length of time for a certain measurable physical process to happen.

A solar day has roughly 86 400 seconds, but because the Earth's rotational speed changes, that figure cannot be exact. The length of the day varies slightly, so solar time drifts away from time measured by accurate clocks. Whether and how we correct for this is something of a philosophical issue, and the abstractions/approaches we need to resolve it have caused an explosion of time standards. The five most important today are TT (which underpins the other four), TAI (accurate clocks), UT1 (astronomical measurements), UTC (accurate clocks, adjusted to stay close to astronomical measurements), and Local Time (UTC corrected to your timezone).

Reference Frames

Picture a clock: an imaginary clock that ticks perfect SI seconds. It doesn't know what a calendar or leap day or leap second or anything is. It just ticks seconds, one after the other.

As we know from General Relativity, both gravity and velocity cause time dilation. Therefore, if we want to use the clock as a basis for time, we need to be very precise about specifying where this clock is and how we're looking at it. The IAU (International Astronomical Union) defines three reference frames corresponding to three different situations:

  • TCB (Temps-Coordonnée Barycentrique) (Barycentric Coordinate Time)
    I put my clock at the center of mass (barycenter) of the Solar System. Then I remove everything in the Solar System so that its gravity doesn't affect my clock. That is, my clock performs the same movements as the Solar System but is outside its gravity well.
  • TCG (Temps-Coordonnée Géocentrique) (Geocentric Coordinate Time)
    I put my clock at the center of the Earth. Then I remove the Earth. My clock performs the same movements as the Earth but is outside its gravity well.
  • TT (Terrestrial Time)
    I put my clock on the Earth's geoid (roughly, its surface). Actually, I put my clock at the same place as the Earth's center of mass, but pretend that point experiences the same gravitational time dilation due to the Earth as if it were on the geoid.
These three imaginary clocks are all ticking perfect SI seconds, but because of General Relativity, they are ticking at different rates relative to each other. As you can see in the following diagram, TCB ticks a lot faster than TCG, which ticks a little faster than TT. This makes perfect sense; e.g. TCB is outside the gravity well of the entire Solar System, and so is not slowed down by it. If you zoom in (or click the checkbox to remove linear factors), you'll see some periodicity, which is due to the Earth's orbit. N.B. the chart also displays an obsolescent time scale called TDB[1].



[Interactive chart: the differences of TCB, TCG, TDB, and TT relative to one another, with a checkbox to remove the linear components.]

TT is unquestionably the most relevant reference frame for people living on the Earth. In the following, we will attempt to make clocks that measure time kept in this reference frame.
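
To make "ticking at different rates" concrete, the relation between TCG and TT is a fixed linear rate offset. The sketch below uses the rate constant L_G and the epoch Julian Date from the IAU resolutions; those numbers are not given in this article, so treat the code as illustrative only.

    # Sketch of the IAU-defined linear relation between TCG and TT.  The constants
    # are taken from the IAU resolutions, not from this article; illustrative only.

    L_G   = 6.969290134e-10   # fractional rate by which TCG outpaces TT
    T0_JD = 2443144.5003725   # the 1977 epoch (see below), as a Julian Date

    def tt_jd_from_tcg_jd(jd_tcg):
        """Convert a TCG Julian Date (days) to a TT Julian Date (days)."""
        return jd_tcg - L_G * (jd_tcg - T0_JD)

    def tcg_minus_tt_seconds(jd_tcg):
        """How far TCG has pulled ahead of TT, in seconds, at a given TCG Julian Date."""
        return L_G * (jd_tcg - T0_JD) * 86400.0

    # Around 2021 (JD ~2459215.5), TCG leads TT by roughly one second.
    print(tcg_minus_tt_seconds(2459215.5))   # ~0.97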

[1]TDB (Temps Dynamique Barycentrique) (Barycentric Dynamical Time) is an older time standard that is still widely used (e.g. to implement conversions among the others), mentioned here for completeness (though we shall not discuss it thoroughly). Note: For complex reasons, the "seconds" TDB ticks are not SI seconds.

Coupling with Accurate Clocks

The first thing we have to do is tie the reference frames, and in particular TT, to the real world.

If you build a single atomic clock (i.e., a clock based on highly accurate measurement of physical processes), you get a great starting point for measuring time. An astonishing amount of research has made atomic clocks extremely accurate[1]. Nevertheless, due to various sources of error, a much better option is to combine readings from several. Various national labs, universities, and the like have one (or several) atomic clocks. These define atomic scales called "TA"[2]. For example, the Belarusian State Institute of Metrology's seven hydrogen maser clocks define "TA(BY)" (the lab code of the Institute is "BY").

From here, an algorithm called "ALGOS" combines all the data from all the clocks at all the institutions around the world (more than 700[3]!) into a weighted average called EAL (Échelle Atomique Libre) (Free Atomic Scale)[4]. During this process, corrections and statistical weighting are applied[5]. From here, more corrections are applied to EAL, taking the form of "frequency steering" (the second-ticking rate of EAL is adjusted slightly), to transform it into TAI (Temps Atomique International) (International Atomic Time)[6]. The calculated TAI is published monthly in "Circular T"[7]. TAI is the super-important main international standard that defines time. The entire affair is maintained and run by the BIPM (Bureau International des Poids et Mesures) (International Bureau of Weights and Measures).
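
The details of ALGOS are not public (see footnote [5] below), but the general flavor of a weighted clock ensemble is easy to sketch. Everything below (the weighting rule, the clock names, the numbers) is invented for illustration and is not the actual algorithm.

    # Toy sketch of a weighted clock ensemble, in the spirit of EAL.  This is NOT
    # ALGOS; the weighting rule (more weight to clocks whose past deviations from
    # the ensemble were more stable) is invented purely for illustration.

    from statistics import pvariance

    def ensemble_reading(readings, history):
        """
        readings: {clock name: current reading, in seconds, at the same nominal instant}
        history:  {clock name: list of past deviations from the ensemble, in seconds}
        Returns a weighted average of the readings.
        """
        weights = {}
        for name in readings:
            var = pvariance(history[name]) if len(history[name]) > 1 else 1e-18
            weights[name] = 1.0 / max(var, 1e-22)   # stabler clock -> larger weight
        total = sum(weights.values())
        return sum(weights[n] * readings[n] for n in readings) / total

    # Hypothetical clocks disagreeing at the nanosecond level:
    readings = {"A": 1000.000000012, "B": 999.999999998, "C": 1000.000000005}
    history  = {"A": [2e-9, 3e-9, 2.5e-9], "B": [4e-10, 5e-10, 6e-10], "C": [1e-9, 1.2e-9, 0.9e-9]}
    print(ensemble_reading(readings, history))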

What we need now is to define some relationship between TAI (accurate clocks that actually exist) and the three (idealized, imaginary) reference frames mentioned above (TCB, TCG, and TT). The relationship that is now agreed upon is that all the following refer to the same instant (at the center of mass of the Earth), called the "epoch"[8][9]:

  • 1977-01-01 00:00:00.0000000 TAI
  • 2443144.5 JD(TAI)[10]
  • 1977-01-01 00:00:32.1840000 TCB[11]
  • 1977-01-01 00:00:32.1840000 TCG
  • 1977-01-01 00:00:32.1840000 TT
  • 1977-01-01 00:00:32.1839345 TDB[12]

Notice what we did here: TCB, TCG, and TT (the idealized reference frames) have now been tied precisely to TAI (which is something we can actually measure using real clocks on Earth). For example, we can estimate TT by just measuring TAI and adding 32.184s. Such an estimate is called a realization of TT using TAI, and is written "TT(TAI)"[13][14][15].
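
In code, the realization really is just a constant shift (a minimal sketch; a finer realization such as TT(BIPM20), discussed shortly, amounts to adding a further, time-dependent correction that BIPM publishes retrospectively):

    # TT(TAI) := TAI + 32.184 s.  TAI is represented here simply as a count of
    # elapsed SI seconds on a TAI-based clock; the point is only the fixed offset.

    TT_MINUS_TAI = 32.184   # seconds, fixed by the 1977 epoch definition above

    def tt_of_tai(tai_seconds):
        """Realize TT from a TAI reading."""
        return tai_seconds + TT_MINUS_TAI

    def tt_of_bipm(tai_seconds, published_correction_seconds):
        """A better realization (e.g. TT(BIPM20)) adds a small, time-dependent
        correction published retrospectively by BIPM (microseconds in practice)."""
        return tt_of_tai(tai_seconds) + published_correction_seconds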

Over time, TT(TAI) (and the analogous TCB(TAI), TCG(TAI), etc.) will drift, becoming more and more wrong (TAI is based on clocks in the real world, after all). Interestingly, as technology has improved (along with both clocks and our understanding of physics), we can get improving, retrospective estimates of how wrong our estimates were (or, equivalently, we get a better realization of TT). BIPM actually retroactively publishes this data. The latest such estimate (as of early 2021) is called TT(BIPM20)[16]. It shows that TT(TAI) has drifted by about -27.6646μs.

The aforementioned definition TT(TAI):=TAI+32.184s can't (and shouldn't) be changed to try to fix this. Instead, the best way to estimate TT is to use the current TT(BIPM20) estimate. This is nice because future revisions (the next will be, presumably, TT(BIPM21)) are likely to improve the estimate of TT for past values.

The following chart shows the error (in microseconds) of TT(TAI) and BIPM estimates relative to TT. One reads each line as the error of a time standard, as estimated from the current data. For example, ΔTT(TAI):=TT(TAI)-TT, where TT is estimated using the best data, TT(BIPM20). The value of ΔTT(TAI) is currently (as of early 2021) the aforementioned -27.6646μs.



[Interactive chart: the error (in microseconds) of TT(TAI) and of each successive TT(BIPMxx) estimate, relative to the current best estimate of TT. The year and the dataset resolution (low/medium/high) are selectable.]

To get a better feel for how this works, try pretending it's a different year (and our data is worse). Note that 2000 and 2002 are unavailable; BIPM did not publish models for those years.

Notice how, as the years go by, we learn more, and so TT(TAI) and the older BIPM estimates are seen to be increasingly flawed. The first analysis, TT(BIPM92), showed that TAI had drifted since the epoch. TT(BIPM93), TT(BIPM94), and TT(BIPM95) continued to refine this. At this point, we found that thermal radiation was affecting Caesium clocks. This discovery (and correction, reflected in TT(BIPM96)) showed that TAI had diverged even further than was previously thought. Hence, all previous BIPM estimates were also in error, jumping off the x-axis zero-line. More corrections were—and continue to be—added.

Notice how at each year, the final slope of TT(TAI) is roughly flat. This signifies that all known sources of error up to that time have been controlled for, and TT(TAI) is not diverging further from the contemporary estimate of TT. This is because of the frequency steering of TAI, which is used to correct all sources of error known at the time.

[1]Nowadays, many minuscule corrections compensate for tiny sources of error. E.g. some kinds of clocks are corrected for the effect of Earth's magnetic field on the dipoles of individual atomic nuclei. An appalling amount of effort goes into making such things right.
[2]Presumably from "Temps Atomique" ("Atomic Time"); "TA" is nearly un-Google-able.
[3]The commonly presented figure is "more than 400", but this is dated. As of early 2021, 712 clocks are participating in the computation of TAI.
[4]I haven't ever seen "ÉAL" instead of "EAL", though it would make more sense. Oh well.
[5]Descriptions of the operation of ALGOS are universally vague—at least, I couldn't find a definitive description of it from BIPM; it seems that ALGOS is regarded as semi-secret. It is some species of weighted average, though. Perhaps the minutiae are regarded as unimportant for public dissemination.
[6]These corrections tend to be coarser. E.g. the EAL average is not valid at the geoid; most clocks tick too fast because they are above sea level, and hence gravity is a bit less. The frequency steering fixes this, and has been used to correct errors (we'll see this soon).
[7]That is, TAI cannot be measured in real-time; "current" TAI timestamps are actually extrapolations from Circular T. Despite BIPM's discouraging such usage, there isn't a ready alternative. As far as I can tell, nowadays TAI's extrapolation is assumed to be perfect (e.g. in the definition of UTC, later). Since it actually is, to within the tolerance of our best instruments, the distinction is more-or-less irrelevant.
[8]You might occasionally see the year 1997 instead. This seems to be due to a typo in the examples documentation of the SOFA software package (see §3.6, pg.15); whereas, if you read the original source linked above, you'll find that 1977 is correct.
[9]Although TAI had been defined as early as June 1970, this epoch also marks the time at which the modern definition I described above was adopted: TAI was slowed by 1 part in a trillion (see pg. 27, resolution 2), tied to EAL, and decreed to be henceforth adjusted by varying the transformation therefrom (i.e. frequency steering).
[10]The Julian Date (JD) is a way to write a count of days in a particular (but usually only unclearly implied) timescale, such as TAI. JD and related notations are useful primarily in Astronomy, but we shall not need to discuss them further.
[11]The weird offset by 32.184s in TCB, TCG, and TT is to provide continuity with an older time standard.
[12]The offset for TDB is usually written instead as "32.184s - 65.5μs". That -65.5μs offset is a more-recent addition (so some sources don't have it). There were terrible problems with TDB, and a complete redefinition was eventually required. While maintaining continuity with the older TDB, it was impossible both to keep the new TDB-TT centered around zero and to synchronize the new TDB without an offset at the epoch. Most models had implicitly picked being centered around zero, and so it was the -65.5μs offset that was chosen for the redefinition. A more complete history can be found here or in Time: From Earth Rotation to Atomic Physics §8.5.6. But again, continued use of TDB is discouraged, so don't worry too much about it.
[13]See? It's like a function! "TT(TAI)" instead of "f(x)".
[14]Indeed, this is the definition of TT(TAI) from IAU 1991.A4.IV.9 (see pg. 6).
[15]Unfortunately, there's a lot of sloppiness; when people say e.g. "TT" they often mean "TT(TAI)". I won't do that.
[16]As far as I can tell, the "20" is because it describes data up through 2020; it was published in early 2021.

Making It Useful I: Calendars

There are three calendars[1] you need to know about.

First, there was the Julian Calendar (introduced 46 BC). The Julian Calendar was based on various earlier Roman calendars. In the Julian Calendar, there is exactly one leap day every four years falling on the familiar February 29th. So, on average, there are 365.25 days per Julian year. The Julian Calendar was widely popular, though there were many complications and errors implementing it in practice. The Julian Calendar is fairly accurate, although by medieval times it was clear that it was wrong.

Christianity got annoyed that its holidays were moving around relative to the equinoxes, and so the scientists of Pope Gregory XIII introduced the Traditional Gregorian Calendar, a major fix to the Julian Calendar that changed the leap-year rules to make them more accurate. The Traditional Gregorian Calendar removes the Julian Calendar's leap days in years that are divisible by 100 but not by 400. For example, 1700, 1800, and 1900 were not leap years, but 1600 and 2000 were. On average, there are 365.2425 days per Gregorian year. The Traditional Gregorian Calendar was introduced in 1582, but was not adopted everywhere simultaneously, making historical dates complicated[2].
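
The leap-year rules of both calendars translate directly into code (a trivial sketch):

    def is_julian_leap_year(year):
        """Julian rule: one leap day every four years."""
        return year % 4 == 0

    def is_gregorian_leap_year(year):
        """Gregorian rule: every fourth year, except centuries not divisible by 400."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # 1700, 1800, and 1900 are not Gregorian leap years; 1600 and 2000 are.
    print([y for y in (1600, 1700, 1800, 1900, 2000) if is_gregorian_leap_year(y)])
    # Average year lengths: 365.25 (Julian) vs. 365.2425 (Gregorian) days.
    print(365 + 1/4, 365 + 1/4 - 1/100 + 1/400)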

The Julian and Traditional Gregorian calendars are based on solar days instead of SI seconds, so days have a slightly varying length. Also, they refer to years using "BC" and "AD" (or secularized "BCE", etc.), and do not have a year zero. The years around that time instead go "3 BC", "2 BC", "1 BC", "AD 1", "AD 2", "AD 3". This is unintuitive and made life complicated for astronomers, even back then.

These deficiencies led to the development of the third and final calendar, the ISO 8601 Gregorian Calendar. The ISO 8601 Gregorian Calendar has days that are exactly 86 400 SI seconds long (except for days with leap seconds, discussed soon) and has a year zero. That is, the years go "-0002", "-0001", "0000", "0001", "0002", "0003". Notice that using "BC"/"AD", etc., in the ISO Gregorian Calendar is incorrect. The ISO 8601 Gregorian and Traditional Gregorian calendars are often confused[3], but in fairness the difference for modern dates is rarely significant.
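
The "no year zero" bookkeeping is just an off-by-one; a tiny sketch of the correspondence (the function names are mine):

    def iso_year_from_bc(bc_year):
        """Traditional "n BC" -> ISO 8601 (astronomical) year: 1 BC -> 0, 2 BC -> -1, ..."""
        return 1 - bc_year

    def iso_year_from_ad(ad_year):
        """Traditional "AD n" -> ISO 8601 year: unchanged."""
        return ad_year

    print([iso_year_from_bc(y) for y in (3, 2, 1)], iso_year_from_ad(1))   # [-2, -1, 0] 1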

With the exceptions of Afghanistan, Ethiopia, Iran, and Nepal, every country on Earth uses the ISO 8601 Gregorian Calendar for civil timekeeping—and with good reason: it is unquestionably the most-accurate calendar yet invented—perhaps the most-accurate it's possible even to define. There is some dated legislation and various inaccuracies, of course, but the modern globalized reality ensures that, whether they know it or not, basically everyone is using the ISO 8601 Gregorian Calendar.

[1]Random note: extrapolating a calendar to dates before its invention makes it "proleptic". So e.g. one can talk about the date 1500-01-01 in the "Proleptic Traditional Gregorian Calendar". In practice, it's a fancy word that doesn't mean anything.
[2]The gory list of adoption details (when such are known) can be found here.
[3]This confusion lies at the heart of the troll pop culture (non-)issue of whether decades start on 00s or 01s, which I write about here.

Making It Useful II: Tying to Astronomical Time Standards

So what about those leap seconds in the ISO 8601 Gregorian Calendar?

Intuitively, we expect "day" to mean a "solar day"—that is, one cycle of the sun rising and setting. However, due to tiny changes in the Earth's rotational rate, the Earth's orbit around the Sun, and so on, a solar day's length isn't exactly a fixed number of SI seconds long. Taken literally, this would mean that the length of a "day" and the length of a "second" are unrelated. That would be horrible and confusing.

To reconcile these different ways of measuring a day's length, there is a family of time standards called UT (Universal Time) that are based on direct or indirect measurements of solar time. The two important varieties today are UT1 and UTC[1].

UT1 notionally measures the Mean Solar Time at 0° longitude, a measure of the Sun's position in the sky[2][3]. UT1 is distributed by the IERS (International Earth Rotation and Reference Systems Service)[4]. Officially, UT1 is given out irregularly in IERS Bulletin D, expressed as DUT1, a difference from UTC defined as DUT1:=UT1-UTC. In practice, more precise values come from IERS Bulletin A via the United States Naval Observatory (USNO) via NASA (free registration required; see cols. 59 through 68).

Well, what is UTC? UTC (Coordinated Universal Time) reconciles the SI second-ticking awesomeness of TAI with the universe-following relevance of UT1. UTC is now the de-facto world standard of "Civil Time" (time that people actually commonly use). UTC ticks SI seconds, just like TAI, and in modern times ticks at the same instants as TAI. However, UTC is adjusted, by means of inserting/removing occasional "leap seconds" (announced at least six months in advance by the IERS in their IERS Bulletin C; as of early 2021, there have been 27 so far), to stay within 0.9 seconds of UT1[5]. So UTC is almost the same time as UT1, but gloriously, it ticks well-defined SI seconds instead.

What is a leap second, you ask? It's basically a 61st second inserted[6] at the end of the last minute of a particular day. For example, consider the leap second in June 2015 (its announcement). The difference between 2015-06-30 23:59:59 UTC and 2015-07-01 00:00:00 UTC would ordinarily be one second. However, a 61st second was added in-between, written 2015-06-30 23:59:60 UTC. The practical upshot is that the difference was actually two seconds. The effect is that UTC dropped a bit further behind TAI (which doesn't stop for nuthin), to fall more in-line with UT1. Sequentially, it looked like this:

Date/Time TAI         Date/Time UTC         Date/Time UT1                 DUT1[7]      UXT (UNIX Time)[8]
2015-07-01 00:00:32   2015-06-30 23:59:57   2015-06-30 23:59:56.3233627   -0.6766373   1 435 708 797
2015-07-01 00:00:33   2015-06-30 23:59:58   2015-06-30 23:59:57.3233627   -0.6766373   1 435 708 798
2015-07-01 00:00:34   2015-06-30 23:59:59   2015-06-30 23:59:58.3233627   -0.6766373   1 435 708 799
2015-07-01 00:00:35   2015-06-30 23:59:60   2015-06-30 23:59:59.3233627   -0.6766373   1 435 708 799
2015-07-01 00:00:36   2015-07-01 00:00:00   2015-07-01 00:00:00.3233627   +0.3233627   1 435 708 800
2015-07-01 00:00:37   2015-07-01 00:00:01   2015-07-01 00:00:01.3233627   +0.3233627   1 435 708 801
2015-07-01 00:00:38   2015-07-01 00:00:02   2015-07-01 00:00:02.3233627   +0.3233627   1 435 708 802
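
The bookkeeping behind the first two columns is a simple lookup: TAI-UTC increases by one second at each (inserted) leap second. A sketch, listing only the two most recent leap seconds for brevity (the authoritative list is IERS Bulletin C):

    # TAI-UTC, in whole SI seconds.  Only the two most recent leap seconds are
    # listed here; consult IERS Bulletin C for the full, authoritative table.

    from datetime import datetime, timezone

    UTC = timezone.utc
    TAI_MINUS_UTC = [                                   # (effective from, offset in s)
        (datetime(2015, 7, 1, tzinfo=UTC), 36),         # after the 2015-06-30 leap second
        (datetime(2017, 1, 1, tzinfo=UTC), 37),         # after the 2016-12-31 leap second
    ]

    def tai_minus_utc(when_utc):
        offset = 35                                     # in effect just before 2015-07-01
        for effective, value in TAI_MINUS_UTC:
            if when_utc >= effective:
                offset = value
        return offset

    print(tai_minus_utc(datetime(2015, 6, 30, 23, 59, 59, tzinfo=UTC)))   # 35
    print(tai_minus_utc(datetime(2015, 7, 1, 0, 0, 0, tzinfo=UTC)))       # 36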

You may also encounter the term ΔT (sometimes written "dT" or "DT"). ΔT is defined as ΔT:=TT-UT, but in practice, values are estimated by ΔT≈TT(TAI)-UT1 (or, equivalently, calculated from DUT1 as ΔT≈[leap secs]+42.184-DUT1). The value of ΔT is of interest because it reflects the changing rotational speed of the Earth. Intuitively, ΔT is the error one accrues from trying to use the TT idealization.
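
In code, that practical estimate looks like this (a sketch; the 42.184 s is the 32.184 s TT-TAI offset plus the 10 s by which TAI already led UTC when leap seconds began in 1972, and the DUT1 value below is only illustrative):

    def delta_t_estimate(leap_seconds_so_far, dut1_seconds):
        """Delta-T ~= TT(TAI) - UT1 = [leap secs] + 42.184 - DUT1, in seconds."""
        return leap_seconds_so_far + 42.184 - dut1_seconds

    # With the 27 leap seconds of early 2021 and DUT1 around -0.2 s (illustrative),
    # Delta-T comes out near 69.4 s.
    print(delta_t_estimate(27, -0.2))   # 69.384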

[1]Be aware that evil people will sometimes say "UT" when they really mean "UT1" (or occasionally, "UTC"). Also, many people say "GMT" when they mean "UTC"; UTC replaces GMT, which is no longer scientifically defined (indeed, it has been defined in many different, incompatible ways). Some countries, particularly the UK, shamefully retain the nationalistic pretense that GMT still exists.
[2]In practice, since measuring the ever-mercurial Sun is hard, UT1 is instead measured from quasars, the Moon, and artificial satellites.
[3]UT1 used to be computed using observatory-specific standards called UT0. The Earth's rotational axis actually varies slightly, an effect called polar motion, and also the observatory is on the Earth's surface—UT1 was computed from (several observatories' (?)) UT0 compensated for these effects. However, UT1 is now apparently exclusively computed by ensembles of observatories in VLBI configurations, rendering UT0 an anachronism for this purpose. If an observatory's UT0 is required, it is now computed the other direction, from UT1.
[4]Since a solar day varies in length, its 86 400 UT1 seconds also vary in length (they are not SI seconds). As a consequence, you need to convert a UT1 timestamp to something like TAI before doing any arithmetic on it. We won't talk about UT1 seconds again; no one uses them because they make everyone sad.
[5]You might have noticed that UTC tries to be close to UT1, but that the data for UT1 is given as an offset from it. At first glance, this appears circular. It's actually okay because UTC is really tied to TAI, and UT1 is actually measured from the sky.
[6]Or, in theory, skipped, although this has not yet been required.
[7]Precise UT1 data is only available for 2015-07-01 00:00:00 UTC. However, the value for the previous day (-0.6760362) suggests that DUT1 was varying by -601.1μs/day. So these figures should be correct to the given tolerance.
[8]We shall discuss UNIX time soon. It's here so that I don't have to write this table twice.

Making It Useful III: Offsets

Most people don't like the complexity of living on a roughly spherical planet, and sort of wish the whole problem would just go away. If you thought the preceding was complex, wait 'til you see the needless complexity average citizens/politicians invented so they wouldn't have to do addition.

The obvious approach is to split the world into "timezones", all of them tied to UTC. You can make a list of 24 timezones, each 15° of longitude and all offset sequentially by one hour each.

Then, because summer has more light, you can add an additional, seasonal shift to make sunrise occur at a more-constant time on the clock (this also has the effect of an extra hour of light in the evening). This approach, as well as the changed time itself, is called Daylight-Saving Time (DST) or "Summer Time"[1]. Thus, three time standards are generated per UTC offset: the standard time (ignores DST), the daylight-saving time (same, but plus one hour), and the generic zone, which switches between the two as appropriate. For example, the generic zone "Pacific Time" switches between "Pacific Standard Time (PST)" during standard time and "Pacific Daylight Time (PDT)" during DST. When specifying the time, it is best to specify the generic zone.
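
As a concrete example, here is the generic zone "Pacific Time" resolving to PST or PDT depending on the date, via Python's zoneinfo module and the IANA timezone database described later in this article (requires Python 3.9+; on some platforms the tzdata package must also be installed):

    # The generic zone "America/Los_Angeles" ("Pacific Time") switches between
    # PST (UTC-08:00) and PDT (UTC-07:00) depending on the date.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    pacific = ZoneInfo("America/Los_Angeles")
    winter  = datetime(2021, 1, 15, 12, 0, tzinfo=pacific)
    summer  = datetime(2021, 7, 15, 12, 0, tzinfo=pacific)

    print(winter.tzname(), winter.utcoffset())   # PST -1 day, 16:00:00  (= UTC-08:00)
    print(summer.tzname(), summer.utcoffset())   # PDT -1 day, 17:00:00  (= UTC-07:00)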

But, there are problems. For example, the country Kiribati is made up of several islands, and if you make the aforementioned zones, you'll find that poor Kiribati is cut in half by the International Date Line—with local times differing by a full day! This got to be so confusing that in 1995 two more time zones were added (UTC+13:00 and UTC+14:00). Other countries did similar things, some with much less reasonable motivation. Nepal, for example, added its very own timezone in 1986 (UTC+05:45), since that better fits Kathmandu. On the other hand, China geographically spans five time zones, but officially observes only one, corresponding to Beijing.

Nepal and China, like many other countries, also ignore daylight-saving time. For one thing, DST makes international collaboration extremely complex (partly because there exists an absurd variety of incompatible implementations of DST, so nothing is synchronized). DST also causes a measurable spike in both lost productivity and suicides.

Everyone also redrew the timezone boundaries along political lines, so they became irregularly shaped and no longer corresponded to degrees of longitude anyway. Worse, political boundaries are not eternal—over time, new countries sprang up while old ones split, merged, or disappeared. In the process, timezones changed meanings, so the system you use to measure time now requires you to know what date you're asking about, and you end up with multiple timezones for the same place. In some cases, different ethnic groups living in the same region wanted different timezones or different observance of DST, so you even get multiple timezones for the same place at the same time. As if it could possibly get any worse, countries also took a lackadaisical approach to implementing leap seconds, upon which the whole concept of UTC is based in the first place. Some countries couldn't get it together in time (leap seconds being announced "only" six months in advance), while some decided just not to bother.

All this is such a catastrophe that three things are now true:

  1. It is up to individual people, their computers, and those computers' OSes to specify what timezone they want to be in.
  2. All the horrific rules for timezones are centralized and abstracted into an international standard timezone database, which is maintained by IANA (Internet Assigned Numbers Authority). The database also describes (idealized representations of) local implementations of leap seconds (which should be an oxymoron, but isn't).
  3. Basically all scientific collaboration that takes place in more than one institution is (or is becoming) coordinated using UTC instead.
Theoretically, this solves the problem. We can talk about offset UTC time as "Local Time" and, with a bunch of invisible number-crunching behind-the-scenes, have it now make some kind of appalling sense.

[1]The common Americanism "Daylight Savings Time" (with an "s") is erroneous. Also, the hyphen is preferred.

Computerization: Leap Seconds and UNIX Time

UNIX Time (no standard abbreviation, but UXT seems good) is the de-facto method by which all modern computers measure time. Windows, Linux, and OSX all measure time this way, for example. On these platforms, it's what the C library function "time(...)" returns, despite not technically being obligated to. UXT is widely misunderstood, even by UNIX experts. The basic idea is that it's a count of seconds since 1970-01-01 00:00:00 UTC. The confusing part is that it is not a count of SI seconds; it is a count of UNIX seconds.

UNIX seconds were originally defined rather badly, but a working definition has emerged, become consensus, and is being gradually formalized. Now, as implicitly defined by the Single Unix Specification §4.15, UNIX seconds are the same as SI seconds, except for the last second before a leap second is to be inserted[1]. That last UNIX second is either stretched out to be two SI seconds long (thus covering the leap second), or else is repeated once (with the same effect)[2]. An example of UXT during a leap second was given in the table above.

I describe UXT as pretending leap seconds had never been inserted into UTC in the first place[3]. Of course, "UTC doesn't have leap seconds" is as bald-faced as lies come, since literally the whole point of UTC is that it does. Most resources instead say very confusing/misleading things, such as "[UXT is] the number of seconds since Jan 1st 1970, 00:00 UTC, but without leap seconds" (ref), or fail to distinguish SI seconds from UNIX seconds at all (e.g.). As a direct and unfortunate consequence, most programmers still treat UNIX seconds the same as SI seconds because they don't know any better, and so real-world code remains a godawful mess.
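
One way to make the "pretending" concrete: converting a UTC calendar date to UXT is pure calendar arithmetic with 86 400-second days, so leap seconds simply never appear in the count (a sketch; compare the table in the previous section):

    # UXT from a UTC calendar date: days since 1970-01-01 times 86400, plus the
    # time of day.  Inserted leap seconds never appear in the count.

    from datetime import date

    def uxt_from_utc(year, month, day, hour, minute, second):
        days = (date(year, month, day) - date(1970, 1, 1)).days
        return days * 86400 + hour * 3600 + minute * 60 + second

    print(uxt_from_utc(2015, 6, 30, 23, 59, 59))   # 1 435 708 799
    print(uxt_from_utc(2015, 7, 1, 0, 0, 0))       # 1 435 708 800
    # The leap second 2015-06-30 23:59:60 gets no count of its own; as the earlier
    # table shows, UXT simply repeats 1 435 708 799 for it.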

The problem is exacerbated by poor language support. For example, the C library function "difftime(...)" returns the difference between two UNIX timestamps. Internally, it is implemented as a simple subtraction, which means that the result is given in UNIX seconds, not SI seconds. But since the length of a UNIX second depends on which timestamp it belongs to (you need to know whether a leap second was involved), the result is, strictly, meaningless: it might be a count of SI seconds, but there might have been a leap second in there. You can't know, because a difference is relative time, not absolute.
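
The 2015 leap second makes the problem concrete (a sketch; the 35 s and 36 s TAI-UTC values are the ones from the table in the previous section):

    # Subtracting UXT values yields UNIX seconds, which silently drop leap seconds.
    # Recovering elapsed SI seconds requires adding back the change in TAI-UTC
    # across the interval (here hard-coded for the 2015-06-30 leap second).

    t_before = 1435708799                      # 2015-06-30 23:59:59 UTC
    t_after  = 1435708800                      # 2015-07-01 00:00:00 UTC

    unix_seconds = t_after - t_before          # 1 "second" (what difftime() would report)...
    si_seconds   = unix_seconds + (36 - 35)    # ...but 2 SI seconds actually elapsed

    print(unix_seconds, si_seconds)            # 1 2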

How are leap seconds handled by real-world systems? The short answer is, poorly. Even today, much, if not most[4], professional software gets it utterly wrong (enough to e.g. cause government agencies to suspend operations and to bring down servers at Google).

Theoretically, UXT defines what should happen. In practice, UXT's self-delusional timekeeping, coupled with a legacy of ignorant, wrong code, leads to absurd glitches and policies. For example, stock exchanges worldwide closed for the 2015-06-30 23:59:60 UTC leap second. Milliseconds of difference are worth literally millions of dollars on a stock exchange. They were (rightly) afraid of bugs in their timekeeping algorithms.

To accommodate bad software, which is completely clueless about the concept of leap seconds, some systems are configured to adjust their clocks' tick rates to "smear" the leap second out over a longer window (often many hours). Many timekeeping servers (which the computing device you're using right now probably syncs with regularly) implement this non-scientific, irreproducible policy, and there's prettymuch nothing you can do about it.
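
Schematically, a linear smear looks like the sketch below. The window length and shape here are arbitrary; real deployments differ, and none of this is standardized.

    # Toy linear leap smear: instead of inserting a 61st second, the clock runs
    # slightly slow over a window ending at the leap second, absorbing the extra
    # second.  The 20-hour window is arbitrary and illustrative only.

    LEAP_UXT = 1435708800          # 2015-07-01 00:00:00 UTC
    WINDOW   = 20 * 3600           # smear over the preceding 20 hours

    def smeared_clock(unsmeared):
        """Map an un-smeared count (one that ticks the 2015 leap second as an
        extra, 61st second) onto a smeared, leap-second-free clock reading."""
        start = LEAP_UXT - WINDOW
        if unsmeared <= start:
            return float(unsmeared)
        if unsmeared >= LEAP_UXT + 1:
            return unsmeared - 1.0                     # the extra second has been absorbed
        # Inside the window: run at WINDOW/(WINDOW + 1) of the true rate.
        return start + (unsmeared - start) * WINDOW / (WINDOW + 1)

    # The smeared clock reads exactly midnight (1435708800.0) at true midnight,
    # having quietly swallowed the leap second over the preceding 20 hours.
    print(smeared_clock(LEAP_UXT + 1))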

[1]Note: UNIX seconds, when they tick, tick at the same instants as TAI and UTC do. That is, UXT's offset from either of them, measured in SI seconds, is always an integer.
[2]POSIX requires this latter approach.
[3]This means that a particular UNIX timestamp usually corresponds to a particular SI second, but sometimes to a pair of consecutive SI seconds (the latter of which was an inserted leap second). If a negative leap second were to occur in the future, there would be a UNIX timestamp that refers to a time that doesn't exist. All this confusing ambiguity as to which SI second is being referred to is the ultimate result of lying to yourself.
[4]Seriously. "Most".

Takeaways

When writing code or discussing time, your treatment of time should depend on what you're trying to do. Let's assume you want to be as accurate as possible, not necessarily as compatible as possible.

  • What time is it?
    You'll probably want to give the time in Local Time (i.e. offset UTC) or in UTC itself. GMT doesn't exist anymore, so don't use that.
  • What time should I record this as?
    Record it as TAI or UTC. TAI is better, because it is more fundamental.
  • How long ago was an event?
    Get the current time in TAI. Get the event's time in TAI. From each TAI timestamp, compute the best estimate of TT (currently, you use TT(BIPM20)). Subtract the results. Be aware that your answer may change if we get better data.
  • Time an operation on a computer?
    Ideally, you should handle this the same way as the previous case. For short intervals, especially with modern clocks, subtracting the TAI estimates directly is acceptable. Subtracting UNIX seconds is unacceptable.

Summary

We have discussed three idealized reference frames: TCB, TCG, and TT. These were tied to measurements by atomic clocks (TAI) by means of the 1977 epoch. We can estimate time within these reference frames by using TAI—for example, using TT(TAI), which is a linear offset, or better, a continually improving estimate (currently TT(BIPM20)).

It is useful to define a system of time (UT1) and a calendar family (Gregorian) that are related to the Earth's rotation and orbit. The Gregorian calendars have a leap year every four years, except in century years not divisible by 400.

To wed TAI's accurate timekeeping to the useful time values of UT1, the UTC standard was eventually developed. UTC ticks at exactly the same instant and rate as TAI, but to keep it roughly in-sync with UT1, leap seconds are occasionally removed from/inserted into UTC. UTC is the current worldwide standard for time. UTC, plus modernization of year names, forms the basis of the ISO 8601 Gregorian Calendar, which is the calendar prettymuch everyone actually uses.

Local times are conversions of UTC to a timezone: one of a horribly complicated mess of offsets from UTC, now maintained by IANA.

On computers, time is measured by UNIX Time (which I call UXT). UXT is based on UTC; however, UXT handles UTC's leap seconds by pretending they didn't happen. This has led to hopelessly broken software and complicated workarounds.

Conclusion

I hope you've found this to be illuminating of the issues involved, or perhaps informative, or even useful. Again, your corrections and feedback are always welcome.

