Time Standards Reference

Motivation

Measuring and calculating time are important everywhere. It is astounding, then, that few seem to actually understand what they're doing. Programmers don't know what to do, creating bad software, and average people don't understand the issues—and so don't understand why they should care. The world is rife with abominations of chronology: subtracting UNIX timestamps, using GMT instead of UTC, and so on. These errors are actively inimical to science, and the horrible reality of the situation is that most data, most simulations, most everything cannot be interpreted accurately at scales sometimes as large as minutes[1].

I have come to realize that there needs to be a centralized introduction to the issues. Such a thing has not previously existed, in part because the usual attempts at centralized information (e.g. Wikipedia) get it dead wrong[2]. This is what I optimistically present here[3]. This article aims to be a complete, but simple, description to bootstrap a correct understanding of modern timekeeping. As always, corrections/suggestions are welcome.

[1] In practice, data based on UTC will be mostly right, unless the span of data included leap seconds, in which case most software gets it terribly wrong.

Counting Seconds

The unqualified word "second" (s) has a precise, scientific definition: since 1967, it is 9 192 631 770 cycles of radiation from a particular quantum hyperfine transition in a Caesium-133 atom. That is its definition: it's the length of time for a certain measurable physical process to happen.

A solar day has roughly 86 400 seconds, but because the Earth's rotational speed changes, that cannot be exact. The Earth's day length changes slightly, so time reckoned from it drifts farther and farther from time measured by accurate clocks. Whether and how we correct for this is something of a philosophical issue, and the abstractions/approaches we need to resolve it have caused an explosion of time standards. The five most important today are TT (which underpins the other four), TAI (accurate clocks), UT1 (astronomical measurements), UTC (accurate clocks, adjusted to be close to astronomical measurements), and Local Time (UTC corrected to your timezone).

Reference Frames

Picture a clock: an imaginary clock that ticks perfect SI seconds. It doesn't know what a calendar or leap day or leap second or anything is. It just ticks seconds, one after the other. As we know from General Relativity, both gravity and velocity cause time dilation. Therefore, if we want to use the clock as a basis for time, we need to be very precise about specifying where this clock is and how we're looking at it. The IAU (International Astronomical Union) defines three reference frames corresponding to three different situations:
- TCB (Temps-Coordonnée Barycentrique) (Barycentric Coordinate Time): the time kept by our imaginary clock if it sits at the barycenter (center of mass) of the solar system, outside the gravity wells of the Sun and planets.
- TCG (Temps-Coordonnée Géocentrique) (Geocentric Coordinate Time): the time kept by the clock if it moves along with the Earth's center, but sits outside the Earth's gravity well.
- TT (Temps Terrestre) (Terrestrial Time): the time kept by the clock if it sits on the Earth's geoid (roughly, at sea level on the rotating Earth). Because of time dilation, TT ticks slightly slower than TCG, which in turn ticks slightly slower than TCB.
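For concreteness, the defining relation between TT and TCG is a fixed rate ratio (the constant L_G is exact, by definition, and absorbs the gravitational plus rotational time dilation experienced on the geoid):

    \frac{\mathrm{d}\,\mathrm{TT}}{\mathrm{d}\,\mathrm{TCG}} = 1 - L_G,
    \qquad L_G \equiv 6.969290134 \times 10^{-10}

The TCG-to-TCB relationship is messier (it includes periodic terms from the Earth's orbital motion), which is part of why Earth-bound timekeeping rarely touches those frames directly.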
TT is unquestionably the most relevant reference frame for people living on the Earth. In the following, we will attempt to make clocks that measure time kept in this reference frame.

[1] TDB (Temps Dynamique Barycentrique) (Barycentric Dynamical Time) is an older time standard that is still widely used (e.g. to implement conversions among the others), mentioned here for completeness (though we shall not discuss it thoroughly). Note: for complex reasons, the "seconds" TDB ticks are not SI seconds.

Coupling with Accurate Clocks

The first thing we have to do is tie the reference frames, and in particular TT, to the real world. If you build a single atomic clock (i.e., a clock based on highly accurate measurement of physical processes), you get a great starting point for measuring time. An astonishing amount of research has made atomic clocks extremely accurate[1]. Nevertheless, due to various sources of error, a much better option is to combine readings from several.

Various national labs, universities, and the like have one (or several) atomic clocks. These define atomic time scales called "TA"[2]. For example, the Belarusian State Institute of Metrology's seven hydrogen maser clocks define "TA(BY)" (the lab code of the Institute is "BY"). From here, an algorithm called "ALGOS" combines all the data from all the clocks at all the institutions around the world (more than 700[3]!) into a weighted average called EAL (Échelle Atomique Libre) (Free Atomic Scale)[4]. During this process, corrections and statistical weighting are applied[5]. From here, more corrections are applied to EAL, taking the form of "frequency steering" (the second-ticking rate of EAL is adjusted slightly), to transform it into TAI (Temps Atomique International) (International Atomic Time)[6]. The calculated TAI is regularly published in "Circular T"[7]. TAI is the super-important main international standard that defines time. The entire affair is maintained and run by the BIPM (Bureau International des Poids et Mesures) (International Bureau of Weights and Measures).

What we need now is to define some relationship between TAI (accurate clocks that actually exist) and the three (idealized, imaginary) reference frames mentioned above (TCB, TCG, and TT). The relationship that is now agreed upon is that all the following refer to the same instant (at the center of mass of the Earth), called the "epoch"[8][9]:

    1977-01-01 00:00:00.000  TAI
    1977-01-01 00:00:32.184  TT
    1977-01-01 00:00:32.184  TCG
    1977-01-01 00:00:32.184  TCB
Notice what we did here: TCB, TCG, and TT (the idealized reference frames) have now been tied precisely to TAI (which is something we can actually measure using real clocks on Earth). For example, we can estimate TT by just measuring TAI and adding 32.184s. Such an estimate is called a realization of TT using TAI, and is written "TT(TAI)"[13][14][15].

Over time, TT(TAI) (and the analogous TCB(TAI), TCG(TAI), etc.) will drift, becoming more and more wrong (TAI is based on clocks in the real world, after all). Interestingly, as technology has improved (along with both clocks and our understanding of physics), we can get improving, retrospective estimates of how wrong our estimates were (or, equivalently, we get a better realization of TT). BIPM actually publishes this data retroactively. The latest such estimate (as of early 2021) is called TT(BIPM20)[16]. It shows that TT(TAI) has drifted by about -27.6646μs. The aforementioned definition TT(TAI):=TAI+32.184s can't (and shouldn't) be changed to try to fix this. Instead, the best way to estimate TT is to use the current TT(BIPM20) estimate. This is nice because future revisions (the next will be, presumably, TT(BIPM21)) are likely to improve the estimate of TT for past values.

The following chart shows the error (in microseconds) of TT(TAI) and of the BIPM estimates, relative to TT. One reads each line as the error of a time standard, as estimated from the current data. For example, ΔTT(TAI):=TT(TAI)-TT, where TT is estimated using the best data, TT(BIPM20). The value of ΔTT(TAI) is currently (as of early 2021) the aforementioned -27.6646μs. To get a better feel for how this works, try pretending it's a different year (and our data is worse). Note that 2000 and 2002 are unavailable; BIPM did not publish models for those years.

Notice how, as the years go by, we learn more, and so TT(TAI), as well as older BIPM estimates, are seen to be increasingly flawed. The first analysis, TT(BIPM92), showed that TAI had diverged from the epoch. TT(BIPM93), TT(BIPM94), and TT(BIPM95) continued to refine this. At this point, we found that thermal radiation was affecting Caesium clocks. This discovery (and correction, reflected in TT(BIPM96)) showed that TAI had diverged even further than was previously thought. Hence, all previous BIPM predictions were also in error, jumping off the x-axis zero-line. More corrections were—and continue to be—added.

Notice how at each year, the final slope of TT(TAI) is roughly flat. This signifies that all known sources of error up to that time have been controlled for, and TT(TAI) is not diverging further from the contemporary estimate of TT. This is because of the frequency steering of TAI, which is used to correct all sources of error known at the time.

[1] Nowadays, many minuscule corrections compensate for tiny sources of error. E.g. some kinds of clocks are corrected for the effect of Earth's magnetic field on the dipoles of individual atomic nuclei. An appalling amount of effort goes into making such things right.
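In code, realizing TT from a TAI reading is just the fixed offset above. A minimal Python sketch (the helper name is mine, and datetime only carries microsecond precision, far coarser than what BIPM actually works with):

    from datetime import datetime, timedelta

    def tt_of_tai(tai: datetime) -> datetime:
        """Realize TT from a TAI reading: TT(TAI) := TAI + 32.184 s."""
        return tai + timedelta(seconds=32.184)

    # The 1977 epoch: 1977-01-01 00:00:00 TAI corresponds to 1977-01-01 00:00:32.184 TT.
    print(tt_of_tai(datetime(1977, 1, 1)))  # 1977-01-01 00:00:32.184000

    # A better estimate of TT would further apply the published TT(BIPMxx) correction
    # (on the order of tens of microseconds, e.g. the ~27.66 microseconds quoted above).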
Making It Useful I: Calendars

There are three calendars[1] you need to know about.

First, there was the Julian Calendar (introduced 46 BC). The Julian Calendar was based on various earlier Roman calendars. In the Julian Calendar, there is exactly one leap day every four years, falling on the familiar February 29th. So, on average, there are 365.25 days per Julian year. The Julian Calendar was widely popular, though there were many complications and errors implementing it in practice.

The Julian Calendar is fairly accurate, although by medieval times it was clear that it was drifting. Christianity got annoyed that their holidays were moving around relative to the equinoxes, and so the scientists of Pope Gregory XIII introduced the Traditional Gregorian Calendar, a major fix to the Julian Calendar, which changed the leap year rules to make them more accurate. The Traditional Gregorian Calendar removes the Julian Calendar's leap days in years divisible by 100 but not by 400. For example, 1700, 1800, and 1900 were not leap years, but 1600 and 2000 were. On average, there are 365.2425 days per Gregorian year. The Traditional Gregorian Calendar was introduced in 1582, but was not adopted everywhere simultaneously, making historical dates complicated[2].

The Julian and Traditional Gregorian calendars are based on solar days instead of SI seconds, so days have a slightly varying length. Also, they refer to years using "BC" and "AD" (or secularized "BCE", etc.), and do not have a year zero. The years around that time instead go "3 BC", "2 BC", "1 BC", "AD 1", "AD 2", "AD 3". This is unintuitive and made life complicated for astronomers, even back then.

These deficiencies led to the development of the third and final calendar, the ISO 8601 Gregorian Calendar. The ISO 8601 Gregorian Calendar has days that are exactly 86 400 SI seconds long (except for days with leap seconds, discussed soon) and has a year zero. That is, the years go "-0002", "-0001", "0000", "0001", "0002", "0003". Notice that using "BC"/"AD", etc., in the ISO Gregorian Calendar is incorrect. The ISO 8601 Gregorian and Traditional Gregorian calendars are often confused[3], but in fairness the difference for modern dates is rarely significant.

With the exceptions of Afghanistan, Ethiopia, Iran, and Nepal, every country on Earth uses the ISO 8601 Gregorian Calendar for civil timekeeping—and with good reason: it is unquestionably the most-accurate calendar yet invented—perhaps the most-accurate it's possible even to define. There is some dated legislation and various inaccuracies, of course, but the modern globalized reality ensures that, whether they know it or not, basically everyone is using the ISO 8601 Gregorian Calendar.

[1] Random note: extrapolating either calendar before its invention makes it "proleptic". So e.g. one can talk about the date 1500-01-01 in the "Proleptic Traditional Gregorian Calendar". In practice, it's a fancy word that doesn't mean anything.
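The leap-year rules above are easy to state in code. A minimal Python sketch (function names are mine), using astronomical/ISO year numbering so that a year zero exists:

    def is_leap_julian(year: int) -> bool:
        """Julian rule: one leap day every 4 years (365.25-day average year)."""
        return year % 4 == 0

    def is_leap_gregorian(year: int) -> bool:
        """Gregorian rule: drop century years not divisible by 400 (365.2425-day average year)."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # The two calendars disagree exactly at years like 1700, 1800, 1900, and 2100:
    for y in (1600, 1700, 1900, 2000, 2021, 2100):
        print(y, is_leap_julian(y), is_leap_gregorian(y))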
Making It Useful II: Tying to Astronomical Time Standards

So what about those leap seconds in the ISO 8601 Gregorian Calendar? Intuitively, we expect "day" to mean a "solar day"—that is, one cycle of the sun rising and setting. However, due to tiny changes in the Earth's rotational rate, the Earth's orbit around the Sun, and so on, a solar day's length isn't exactly a fixed number of SI seconds long. Taken literally, this would mean that the length of a "day" and the length of a "second" are unrelated. That would be horrible and confusing.

To reconcile these different ways of measuring a day's length, there is a family of time standards called UT (Universal Time) that are based on direct or indirect measurements of solar time. The two important varieties today are UT1 and UTC[1]. UT1 notionally measures the Mean Solar Time at 0° longitude, a measure of the Sun's position in the sky[2][3]. UT1 is distributed by the IERS (International Earth Rotation and Reference Systems Service)[4]. Officially, UT1 is given out irregularly in IERS Bulletin D, expressed as DUT1, a difference from UTC defined as DUT1:=UT1-UTC. In practice, more precise values come from IERS Bulletin A via the United States Naval Observatory (USNO) via NASA (free registration required; see cols. 59 through 68).

Well, what is UTC? UTC (Coordinated Universal Time) reconciles the SI second-ticking awesomeness of TAI with the universe-following relevance of UT1. UTC is now the de-facto world standard of "Civil Time" (time that people actually commonly use). UTC ticks SI seconds, just like TAI, and in modern times ticks at the same instant as TAI. However, UTC is adjusted, by means of inserting/removing occasional "leap seconds" (announced at least six months in advance by the IERS in their IERS Bulletin C; as of early 2021, there have been 27 so far), to stay within 0.9 seconds of UT1[5]. So UTC is almost the same time as UT1, but gloriously, it ticks well-defined SI seconds instead.

What is a leap second, you ask? It's basically a 61st second inserted[6] at the end of the last minute of a particular day. For example, consider the leap second in June 2015 (its announcement). The difference between 2015-06-30 23:59:59 UTC and 2015-07-01 00:00:00 UTC would ordinarily be one second. However, a 61st second was added in-between, written 2015-06-30 23:59:60 UTC. The practical upshot is that the difference was actually two seconds. The effect is that UTC dropped a bit further behind TAI (which doesn't stop for nuthin), to fall more in-line with UT1. Sequentially, it looked like this:

    UTC                     TAI                     UXT (UNIX Time; discussed below)
    2015-06-30 23:59:59     2015-07-01 00:00:34     1435708799
    2015-06-30 23:59:60     2015-07-01 00:00:35     1435708799 (repeated or stretched)
    2015-07-01 00:00:00     2015-07-01 00:00:36     1435708800
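The cumulative effect of leap seconds is just a step function of UTC time: TAI-UTC jumps by one second at each insertion. A minimal Python sketch with a hand-copied excerpt of the history (the names, structure, and starting value are mine; a real implementation would load the full, current list from the IERS):

    from datetime import datetime

    # Excerpt of the leap-second history:
    # (first UTC instant at which the new offset applies, TAI-UTC in seconds).
    TAI_MINUS_UTC = [
        (datetime(2012, 7, 1), 35),
        (datetime(2015, 7, 1), 36),
        (datetime(2017, 1, 1), 37),  # still current as of early 2021
    ]

    def tai_minus_utc(utc: datetime) -> int:
        """Return TAI-UTC (in SI seconds) for a UTC instant covered by the excerpt above."""
        offset = 34  # value in force just before 2012-07-01 (the real table starts decades earlier)
        for since, value in TAI_MINUS_UTC:
            if utc >= since:
                offset = value
        return offset

    print(tai_minus_utc(datetime(2015, 6, 30, 23, 59, 59)))  # 35, matching the table above
    print(tai_minus_utc(datetime(2015, 7, 1)))               # 36
    # (Note: datetime cannot even represent 23:59:60, so the leap second instant itself
    #  is unaddressable with this representation.)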
You may also encounter the term ΔT (sometimes written "dT" or "DT"). ΔT is defined as ΔT:=TT-UT, but in practice, values are estimated by ΔT≈TT(TAI)-UT1 (or, equivalently, calculated from DUT1 as ΔT≈[leap secs]+42.184s-DUT1). The value of ΔT is of interest because it reflects the changing rotational speed of the Earth. Intuitively, ΔT is the error one accrues from trying to use the TT idealization.

[1] Be aware that evil people will sometimes say "UT" when they really mean "UT1" (or occasionally, "UTC"). Also, many people say "GMT" when they mean "UTC"; UTC replaces GMT, which is no longer scientifically defined (indeed, it has been defined in many different, incompatible ways). Some countries—and particularly the UK—shamefully retain the nationalistic pretense that GMT still exists.

Making It Useful III: Offsets

Most people don't like the complexity of living on a roughly spherical planet, and sort of wish the whole problem would just go away. If you thought the preceding was complex, wait 'til you see the needless complexity average citizens/politicians invented so they wouldn't have to do addition.

The obvious approach is to split the world into "timezones", all of them tied to UTC. You can make a list of 24 timezones, each spanning 15° of longitude and offset sequentially by one hour. Then, because summer has more light, you can add an additional, seasonal shift to make sunrise occur at a more-constant time on the clock (this also has the effect of an extra hour of light in the evening). This approach, as well as the changed time itself, is called Daylight-Saving Time (DST) or "Summer Time"[1]. Thus, three time standards are generated per UTC offset: the standard time (ignores DST), the daylight-saving time (same, but plus one hour), and the generic zone, which switches between the two as appropriate. For example, the generic zone "Pacific Time" switches between "Pacific Standard Time (PST)" during standard time and "Pacific Daylight Time (PDT)" during DST. When specifying the time, it is best to specify the generic zone.
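In code, offsets and DST are best left to the IANA tz database rather than hard-coded. A minimal Python sketch using the standard zoneinfo module (Python 3.9+; my choice of tooling, not something this article prescribes):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # IANA tz database (may need the "tzdata" package on Windows)

    pacific = ZoneInfo("America/Los_Angeles")        # the generic zone "Pacific Time"
    winter = datetime(2021, 1, 15, 12, 0, tzinfo=pacific)
    summer = datetime(2021, 7, 15, 12, 0, tzinfo=pacific)
    print(winter.strftime("%Z %z"))                  # PST -0800
    print(summer.strftime("%Z %z"))                  # PDT -0700

    # Offsets need not be whole hours: Nepal is UTC+05:45 year-round.
    kathmandu = datetime(2021, 7, 15, 12, 0, tzinfo=ZoneInfo("Asia/Kathmandu"))
    print(kathmandu.strftime("%z"))                  # +0545

Note that the database key is a place ("America/Los_Angeles"), not a raw offset, because (as we're about to see) the mapping from place to offset keeps changing.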
But, there are problems. For example, the country Kiribati is made up of several islands, and if you make the aforementioned zones, you'll find that poor Kiribati is cut in half by the International Date Line—with local times differing by a full day! This got to be so confusing that in 1995 two more time zones were added (UTC+13:00 and UTC+14:00). Other countries all did similar things, some with much-less-reasonable motivation. Nepal, for example, added its very own timezone in 1986 (UTC+05:45), since that better fits Kathmandu. On the other hand, China geographically spans five time zones, but officially only has one, corresponding to Beijing.

Nepal and China, like many other countries, also ignore daylight-saving time. To begin with, DST makes international collaboration extremely complex (partly because there exists an absurd variety of incompatible implementations of DST, so nothing is synchronized). DST also causes a measurable spike in both lost productivity and suicides.

Everyone also redrew all the timezone boundaries according to political lines, so they were irregularly shaped and didn't correspond to degrees of longitude anymore anyway. Worse, political boundaries are not eternal—over time, new countries sprang up while old ones split, merged, or disappeared. In the process, timezones changed meanings, so the timezone you should use now depends on the date you're asking about; you effectively have multiple timezones for the same place, across its history. In some cases, different ethnic groups living in the same region wanted different timezones or different observance of DST, so now you have multiple timezones for the same place, at the same time.

As if it could possibly get any worse, countries also took a lackadaisical approach to implementing leap seconds, upon which the whole concept of UTC is based in the first place. Some countries couldn't get it together in time (leap seconds being announced "only" six months in advance), while some countries decided just not to bother. All this is such a catastrophe that three things are now true:
[1] The common Americanism "Daylight Savings Time" (with an "s") is erroneous. Also, the hyphen is preferred.

Computerization: Leap Seconds and UNIX Time

UNIX Time (no standard abbreviation, but UXT seems good) is the de-facto method by which all modern computers measure time. Windows, Linux, and OSX all measure time this way, for example. On these platforms, it's what the C library function "time(...)" returns, despite not technically being obligated to. UXT is widely misunderstood, even by UNIX experts.

The basic idea is that it's a count of seconds since 1970-01-01 00:00:00 UTC. The confusing part is that it is not a count of SI seconds; it is a count of UNIX seconds. UNIX seconds were originally defined rather badly, but a working definition has emerged, become consensus, and is being gradually formalized. Now, as implicitly defined by the Single Unix Specification §4.15, UNIX seconds are the same as SI seconds, except for the last second before a leap second is to be inserted[1]. That last UNIX second is either stretched out to be two SI seconds long (thus covering the leap second), or else is repeated once (with the same effect)[2]. An example of UXT during a leap second was given in the table above. I describe UXT as pretending leap seconds had never been inserted into UTC in the first place[3]. Of course, "UTC doesn't have leap seconds" is as bald-faced as lies come, since literally the whole point of UTC is that it does. Most resources instead say very confusing/misleading things, such as "[UXT is ]the number of seconds since Jan 1st 1970, 00:00 UTC, but without leap seconds" (ref), or do not distinguish SI seconds from UNIX seconds at all (e.g.). As a direct and unfortunate consequence, most programmers still treat UNIX seconds the same as SI seconds, because they don't know any better, and so real-world code remains a godawful mess.

The problem is exacerbated by poor language support. For example, the C language function "difftime(...)" returns the difference between two UNIX timestamps. Internally, it is implemented as a simple subtraction, which means that the result is given in UNIX seconds, not SI seconds. But since UNIX seconds are intrinsically tied to timestamps (you need to know the absolute times involved to know whether a leap second was covered), the result is utterly meaningless: it might equal the elapsed SI seconds, or there might have been a leap second in there. You can't know, because differences are relative time, not absolute.

How are leap seconds handled by real-world systems? The short answer is: poorly. Even today, much, if not most[4], professional software gets it utterly wrong (enough to e.g. cause government agencies to suspend operations and to bring down servers at Google). Theoretically, UXT defines what should happen. In practice, UXT's self-delusional timekeeping, coupled with a legacy of ignorant, wrong code, leads to absurd glitches and policies. For example, stock exchanges worldwide closed for the 2015-06-30 23:59:60 UTC leap second; milliseconds of difference are worth literally millions of dollars in the stock exchange, and they were (rightly) afraid of bugs in their timekeeping algorithms. To accommodate bad software, which is completely clueless about the concept of leap seconds, some systems were configured to adjust their clocks' tick rates to smear the leap second out over a longer stretch of time.
Many timekeeping servers (which the computing device you're using right now probably syncs with regularly) implement this non-scientific, irreproducible policy, and there's pretty much nothing you can do about it.

[1] Note: UNIX seconds, when they tick, tick at the same instant as TAI and UTC do. That is, the offset from either is always an integer number of SI seconds.
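To see the difftime problem concretely, here is a minimal Python sketch standing in for C's difftime (Python's aware datetime arithmetic and timestamp() follow the same leap-second-ignoring UXT convention):

    from datetime import datetime, timezone

    # UNIX timestamps for the two instants bracketing the June 2015 leap second:
    before = datetime(2015, 6, 30, 23, 59, 59, tzinfo=timezone.utc).timestamp()  # 1435708799.0
    after  = datetime(2015, 7, 1,  0,  0,  0,  tzinfo=timezone.utc).timestamp()  # 1435708800.0

    print(after - before)  # 1.0 -> one UNIX second, although two SI seconds actually elapsed,
                           # because 2015-06-30 23:59:60 UTC sat in between. A plain subtraction
                           # (which is all C's difftime() does) cannot tell the difference.

    # The leap second itself is unrepresentable; this raises ValueError("second must be in 0..59"):
    # datetime(2015, 6, 30, 23, 59, 60, tzinfo=timezone.utc)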
Takeaways

When writing code or discussing time, your treatment of time should depend on what you're trying to do. Let's assume you want to be as accurate as possible, not necessarily as compatible as possible.

Summary

We have discussed three idealized reference frames: TCB, TCG, and TT. These were tied to measurements by atomic clocks (TAI) by means of the 1977 epoch. We can estimate time within these reference frames by using TAI—for example, using TT(TAI), which is a fixed offset, or better, a continually improving estimate (currently TT(BIPM20)).

It is useful to define a system of time (UT1) and a calendar family (Gregorian) that are related to the Earth's rotation and orbit. The Gregorian calendars have leap years every four years, except in century years not divisible by 400. To wed TAI's accurate timekeeping to the useful time values of UT1, the UTC standard was eventually developed. UTC ticks at exactly the same instant and rate as TAI, but to keep it roughly in-sync with UT1, leap seconds are occasionally removed from/inserted into UTC. UTC is the current worldwide standard for time. UTC, plus modernization of year names, forms the basis of the ISO 8601 Gregorian Calendar, which is the calendar pretty much everyone actually uses.

Local times are conversions of UTC to a timezone: one of a horribly complicated mess of offsets from UTC, now maintained by IANA. On computers, time is measured by UNIX Time (which I call UXT). UXT is based on UTC; however, UXT handles UTC's leap seconds by pretending they didn't happen. This has led to hopelessly broken software and complicated workarounds.

Conclusion

I hope you've found this to be illuminating of the issues involved, or perhaps informative, or even useful. Again, your corrections and feedback are always welcome.