Time Standards Reference

Motivation

Measuring and calculating time are important everywhere. It is astounding, then, that no one seems to actually understand what they're doing. Mainly, programmers don't know what to do, and average people don't understand the issues—and so don't care. Therefore, the world is rife with abominations of chronology: subtracting UNIX timestamps, using GMT instead of UTC, and so on. These errors are actively inimical to science, and the horrible reality of the situation is that most data, most simulations, most everything cannot be interpreted accurately at scales sometimes as large as minutes[1].

After spending several frustrating days coming to this conclusion, I realized that there needs to be a centralized compendium/introduction. Such a thing doesn't exist, in part because the usual attempts at centralized information (e.g. Wikipedia) get it dead wrong[2]. Therefore, what I optimistically present here is an iterative reconstruction based on countless sources, some of the more salient of which have been linked post-hoc[3]. This article aims to be a complete description to bootstrap understanding of modern timekeeping. As always, corrections/suggestions are welcome.

[1] In practice, data based on UTC will be mostly right, unless leap seconds happened, in which case most software gets it terribly wrong (UNIX timestamps again).

Clashing Goals and the Start of Our Adventure

Let's talk about seconds. The unqualified word "second" has a scientific meaning: since 1967, it is 9 192 631 770 cycles of radiation from a particular quantum hyperfine transition in a Caesium-133 atom. That's a bunch of words that mean it's well-defined. A day has 86 400 seconds (s), notionally, but because the Earth's rotational speed changes, that cannot be exact; over time, astronomical measurements will begin to disagree with clock-based methods.

This basic and fundamental problem has caused an explosion of time standards. The five most important today are TT (which underpins the other four), TAI (accurate clocks), UT1 (astronomical measurements), UTC (accurate clocks, corrected to be close to astronomical measurements), and Local Time (UTC corrected to your timezone).

Time Scales

Now picture a clock: an imaginary clock that ticks perfect SI seconds. It doesn't know what a calendar or leap day or leap second or anything is. It just ticks seconds, one after the other. I haven't told you where this imaginary clock is or how we're looking at it. That's an important omission, because both gravity and velocity cause time dilation. Therefore, the IAU (International Astronomical Union) defines three time scales corresponding to three different places I can put it:
TCB (Temps-Coordonnée Barycentrique) (Barycentric Coordinate Time): the clock is placed at the barycenter (center of mass) of the solar system.
TCG (Temps-Coordonnée Géocentrique) (Geocentric Coordinate Time): the clock is placed at the center of the Earth.
TT (Temps Terrestre) (Terrestrial Time): the clock is placed on the Earth's surface, at sea level (the geoid). TT is obtained from TCG by removing linear components (a constant rate difference), so that it ticks at the rate an ideal clock on the geoid would.
If you don't understand relativity, just forget about it, and think of TT as being the one perfect, imaginary clock.

[1] TDB (Temps Dynamique Barycentrique) (Barycentric Dynamical Time) is an older time standard that is still widely used, mentioned here for completeness (though we shall not discuss it thoroughly). Note: For complex reasons, the "seconds" TDB ticks are not SI seconds.

The Dawn of Time / Coupling with Accurate Clocks

Our imaginary clocks are currently useless (because they're imaginary). The first thing we have to do is tie them to the real world. If you build a single (atomic—based on highly accurate measurement of physical processes) clock, you get a great starting point for measuring time. An astonishing amount of research has made atomic clocks extremely accurate[1]. Nevertheless, due to measurement noise, a much better option is to average readings from several. Various national labs, universities, and the like have one (or several) atomic clocks. These define atomic scales called "TA"[2]. For example, the Belarusian State Institute of Metrology's three hydrogen maser clocks define "TA(BY)" (the lab code of the Institute is "BY").

From here, an algorithm called "ALGOS" combines all the data from all the clocks at all the institutions around the world (more than 650[3]!) into a weighted average called EAL (Échelle Atomique Libre) (Free Atomic Scale).[4] During this process, corrections and statistical weighting are applied[5]. From here, more corrections are applied to EAL, taking the form of "frequency steering" (the second-ticking rate of EAL is adjusted slightly), to transform it into TAI (Temps Atomique International) (International Atomic Time), the current international standard[6]. The calculated TAI is frequently published in "Circular T"[7]. The entire affair is maintained and run by the BIPM (Bureau International des Poids et Mesures) (International Bureau of Weights and Measures).

What we need now is to define some relationship between TAI (accurate clocks that actually exist) and the three (idealized, imaginary) time scales mentioned above (TCB, TCG, and TT). The relationship that is now agreed upon is that all the following refer to the same instant, called the epoch[8][9]:

    1977-01-01 00:00:00.000  TAI
    1977-01-01 00:00:32.184  TT
    1977-01-01 00:00:32.184  TCG
    1977-01-01 00:00:32.184  TCB
Notice what we did here: TCB, TCG, and TT (the imaginary time standards we want) have now been tied precisely to TAI (which is something we can actually measure using real clocks). For example, we can estimate TT by just measuring TAI and adding 32.184 s. Such an estimate is called a realization of TT using TAI, and is written "TT(TAI)"[13][14][15]. Over time, TT(TAI) (and the analogous TCB(TAI), TCG(TAI), etc.) will drift, becoming more and more wrong (TAI is based on clocks in the real world, after all), but we can still use these as useful estimates.

Interestingly, as technology has improved (along with both clocks and our understanding of physics), we can get improving, retrospective estimates of how wrong our estimates were (or, equivalently, we get a better realization of TT). BIPM retroactively publishes this data. The latest such estimate (as of early 2021) is called TT(BIPM19)[16]. It shows that TT(TAI) has drifted by about -27.6835 μs. The aforementioned definition TT(TAI) := TAI + 32.184 s can't (and shouldn't) be changed to try to fix this. Instead, the best way to estimate TT is to use the current TT(BIPM19) estimate. This is nice because future revisions (the next will be, presumably, TT(BIPM20)) are likely to improve the estimate of TT for past values.

The following chart shows the error (in microseconds) of TT(TAI) and BIPM estimates relative to TT. One reads each line as the error of a time standard, as estimated from the current data. For example, ΔTT(TAI) := TT(TAI) - TT, where TT is estimated using the best data, TT(BIPM19). The value of ΔTT(TAI) is currently (as of early 2021) the aforementioned -27.6835 μs. To get a better feel of how this works, try pretending it's a different year (and our data is worse). Note that 2000 and 2002 are unavailable; BIPM did not publish models for those years.

Notice how as the years go by we learn more, and so TT(TAI), as well as older BIPM estimates, are seen to be increasingly flawed. The first analysis, TT(BIPM92), showed that TAI had diverged from the epoch. TT(BIPM93), TT(BIPM94), and TT(BIPM95) continued to refine this. At this point, we found that thermal radiation was affecting Caesium clocks. This discovery (and correction, reflected in TT(BIPM96)) showed that TAI had diverged even further than was previously thought. Hence, all previous BIPM predictions were also in error, jumping off the x-axis zero-line. More corrections were—and continue to be—added. Notice how at each year, the final slope of TT(TAI) is roughly flat. This signifies that all known sources of error up to that time have been controlled for, and TT(TAI) is not diverging further from the contemporary estimate of TT. This is because of the frequency steering, which is used to correct all known sources of error.

[1] Nowadays, many minuscule corrections compensate for tiny sources of error. E.g. some kinds of clocks are corrected for the effect of Earth's magnetic field on the dipoles of individual atomic nuclei. An appalling amount of effort goes into making such things right.
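To make the realization idea concrete, here is a minimal sketch in C. It is an illustration under assumptions, not anyone's official code: the function names are mine, times are represented as plain seconds since some agreed origin, and the single correction value is just the drift figure quoted above, whereas a real program would interpolate the published TT(BIPMxx) tables.

    #include <stdio.h>

    /* The fixed offset established by the 1977 epoch: TT(TAI) := TAI + 32.184 s. */
    #define TT_MINUS_TAI_S 32.184

    /* Realize TT from a TAI reading (both as seconds since an agreed origin).
     * This is the TT(TAI) realization: a constant offset, never changed.      */
    static double tt_of_tai(double tai_s) {
        return tai_s + TT_MINUS_TAI_S;
    }

    /* A better estimate adds a small, retrospectively published correction,
     * i.e. TT(BIPMxx) - TT(TAI), here hard-coded from the figure above.      */
    static double tt_of_tai_corrected(double tai_s, double correction_s) {
        return tt_of_tai(tai_s) + correction_s;
    }

    int main(void) {
        double tai  = 0.0;        /* some TAI reading; the origin is arbitrary     */
        double corr = 27.6835e-6; /* TT - TT(TAI), since dTT(TAI) is ~ -27.6835 us */
        printf("TT(TAI)    = %+.7f s\n", tt_of_tai(tai));
        printf("TT(BIPM19) ~ %+.7f s\n", tt_of_tai_corrected(tai, corr));
        return 0;
    }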
Making It Useful I: Calendars

So far, we've only talked about clocks (imaginary and atomic) ticking off seconds. Let's briefly sidetrack and talk about days and dates. If you didn't know, all dates you've probably used were given in the Gregorian calendar—the modern Western calendar. The Gregorian calendar (introduced 1582[1]) was developed directly from the Julian calendar (introduced 46 BC)[2]. The difference is how they handle leap years.

In the Julian calendar, there is exactly one leap day every four years, falling on the familiar February 29th. So, on average, there are 365.25 days per Julian year. Unfortunately, this is not accurate over more than a century or so. The Christian church got annoyed that its holidays were moving around relative to the equinoxes, and so the Gregorian calendar was introduced. The Gregorian calendar removes the Julian calendar's leap days in years that are divisible by 100 but not by 400. For example, 1700, 1800, and 1900 were not leap years, but 1600 and 2000 were. On average, there are 365.2425 days per Gregorian year.

The Gregorian calendar works very well. If you count out days of 86 400 seconds each and follow the calendar, you'll be in almost exactly the right place relative to the Sun for millennia into the past or future. But . . . not exactly so.

[1] Unfortunately, because the real world is a terrible place, the standard was adopted slowly. The gory list of adoption details (when such are known) can be found here.

Making It Useful II: Tying to Astronomical Time Standards

See, the length of a (solar) "day" is not actually a constant. It varies due to tiny changes in the Earth's rotational rate, its orbit around the Sun, and so on. To handle this, there is a family of time standards called UT (Universal Time) that are based on direct or indirect measurements of it. The two important varieties today are UT1 and UTC[1]. UT1 notionally measures the Mean Solar Time at 0° longitude, a measure of the Sun's position in the sky[2][3]. UT1 is distributed by the IERS (International Earth Rotation and Reference Systems Service)[4]. Officially, UT1 is given out irregularly in IERS Bulletin D, expressed as DUT1, a difference from UTC defined as DUT1 := UT1 - UTC. In practice, more precise values come from IERS Bulletin A via the United States Naval Observatory (see cols. 59 through 68).

Well, what is UTC? UTC (Coordinated Universal Time) reconciles the SI second-ticking awesomeness of TAI with the universe-following relevance of UT1. UTC is now the de-facto world standard of "Civil Time" (time that people actually commonly use). UTC ticks SI seconds, just like TAI, and in modern times ticks at the same instant as TAI. However, UTC is set, by means of occasional leap seconds (announced six months in advance by the IERS in their IERS Bulletin C; as of early 2019, there have been 27 so far), to be within 0.9 seconds of UT1[5]. So UTC is almost the same time as UT1, but gloriously, it ticks well-defined SI seconds instead.

What is a leap second, you ask? It's basically a 61st second inserted[6] at the end of the last minute of a particular day. For example, consider the leap second in June 2015 (its announcement). The difference between 2015-06-30 23:59:59 UTC and 2015-07-01 00:00:00 UTC would ordinarily be one second. However, a 61st second was added in-between, written 2015-06-30 23:59:60 UTC. The practical upshot is that the difference was actually two seconds. The effect is that UTC dropped a bit further behind TAI (which doesn't stop for nuthin), to fall more in line with UT1. Sequentially, it looked like this:

    UTC                     TAI                     UXT (UNIX Time)
    2015-06-30 23:59:59     2015-07-01 00:00:34     1435708799
    2015-06-30 23:59:60     2015-07-01 00:00:35     1435708799 (this second stretched or repeated; see UNIX Time below)
    2015-07-01 00:00:00     2015-07-01 00:00:36     1435708800
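As a small illustration of what code has to track, here is a sketch in C of the TAI - UTC offset around that leap second. The two-entry table is my own toy excerpt; real software has to carry the complete, regularly updated list of leap seconds announced in IERS Bulletin C.

    #include <stdio.h>

    /* TAI - UTC, in integer seconds, around the 2015-06-30 leap second.
     * Toy excerpt only: everything before 2015-07 is lumped into 35 s here,
     * which is actually only correct back to 2012-07-01.                   */
    static int tai_minus_utc(int year, int month) {
        if (year < 2015 || (year == 2015 && month <= 6))
            return 35;  /* includes the inserted 2015-06-30 23:59:60 itself */
        return 36;      /* from 2015-07-01 00:00:00 UTC onward
                           (until the next leap second, at the end of 2016) */
    }

    int main(void) {
        printf("2015-06-30 23:59:59 UTC: TAI - UTC = %d s\n", tai_minus_utc(2015, 6));
        printf("2015-07-01 00:00:00 UTC: TAI - UTC = %d s\n", tai_minus_utc(2015, 7));
        return 0;
    }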
You may also encounter the term ΔT (sometimes written "dT" or "DT"). ΔT is defined as ΔT := TT - UT, but in practice, values are estimated by ΔT ≈ TT(TAI) - UT1 (or, equivalently, calculated from DUT1 as ΔT ≈ [leap secs] + 42.184 - DUT1; the 42.184 is the 32.184 s TT-TAI offset plus the 10 s by which TAI already led UTC when leap seconds began in 1972). The value of ΔT is of interest because it reflects the changing rotational speed of the Earth. Intuitively, ΔT is the error one accrues from trying to use the TT idealization.

[1] Be aware that evil people will sometimes say "UT" when they really mean "UT1" (or occasionally, "UTC"). Also, many people say "GMT" when they mean "UTC"; UTC replaces GMT, which is no longer scientifically defined (indeed, it has been defined in many different, incompatible ways).

Making It Useful III: Offsets

Most people don't like the complexity of living on a roughly spherical planet, and sort of wish the whole problem would just go away. If you thought the preceding was complex, wait 'til you see the needless complexity average citizens/politicians invented so they wouldn't have to do addition.

The obvious approach is to split the world into "timezones", all of them tied to UTC. You can make a list of 24 timezones, each 15° of longitude and all offset sequentially by one hour each. Then, because summer has more light, you can add an additional, seasonal shift to make sunrise occur at a more-constant time on the clock (this also has the effect of an extra hour of light in the evening). This approach, as well as the changed time itself, is called Daylight-Saving Time (DST) or "Summer Time"[1]. Thus, three time standards are generated per UTC offset: the standard time (ignores DST), the daylight-saving time (same, but plus one hour), and the generic zone, which switches between the two as appropriate. For example, the generic zone "Pacific Time" switches between "Pacific Standard Time (PST)" during standard time and "Pacific Daylight Time (PDT)" during DST.

[1] The common Americanism "Daylight Savings Time" (with an "s") is erroneous. Also, the hyphen is preferred.

Making It Useful IV: The Problems

But, there are problems. See, the country Kiribati is made up of several islands, and if you make the aforementioned zones, you'll find that poor Kiribati is cut in half by the International Date Line—with local times differing by a full day! This got to be so confusing that in 1995 two more time zones were added (UTC+13:00 and UTC+14:00). Other countries all did similar things, some with much-less-reasonable motivation. Nepal, for example, added its very own timezone in 1986 (UTC+05:45), since that better fits Kathmandu. On the other hand, China geographically spans five time zones, but officially only has one, corresponding to Beijing. Nepal and China, like many other countries, also ignore daylight-saving time.

To begin with, DST makes international collaboration extremely complex (which is partly because there exists an absurd variety of incompatible implementations of DST, so nothing is synchronized). DST also causes a measurable spike in both lost productivity and suicides. Everyone also redrew all the timezone boundaries according to political lines, so they were irregularly shaped and didn't correspond to degrees of longitude anymore anyway. But, funny thing: political boundaries are not eternal. Over time, new countries sprang up while old ones split, merged, or disappeared. In the process, timezones changed meanings, so now the system you use to measure time requires you to know what date it is when you're asking, and so you have multiple timezones for the same place. In some cases, different ethnic groups living in the same region wanted different timezones or different observance of DST, so now you have multiple timezones for the same place, at the same time.
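To see a few of these offsets in action, here is a short C sketch using the IANA zone database via the POSIX TZ mechanism. The zone names are real IANA identifiers; the exact output depends on the timezone data installed on your system, and the setenv/TZ approach is POSIX-specific.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Print one instant (a UNIX timestamp) as local time in a given IANA zone. */
    static void show(const char *zone, time_t t) {
        char buf[64];
        struct tm tm;
        setenv("TZ", zone, 1);   /* select the zone by its IANA name */
        tzset();
        localtime_r(&t, &tm);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &tm);
        printf("%-22s %s\n", zone, buf);
    }

    int main(void) {
        time_t t = 1435708799;            /* 2015-06-30 23:59:59 UTC  */
        show("UTC", t);
        show("America/Los_Angeles", t);   /* Pacific Time (PDT here)  */
        show("Asia/Kathmandu", t);        /* Nepal, UTC+05:45         */
        show("Pacific/Kiritimati", t);    /* Kiribati, UTC+14:00      */
        return 0;
    }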
As if it could possibly get any worse, countries also took a lackadaisical approach to implementing leap seconds, upon which the whole concept of UTC is based in the first place. Some countries couldn't get it together in time (leap seconds being announced "only" six months in advance), while others decided just not to bother. All this is such a catastrophe that three things are now true:
Computerization I: UNIX Time

UNIX Time (no standard abbreviation, but UXT seems good) is the de-facto method by which all modern computers measure time. Windows, Linux, and OSX all measure time this way. On these platforms, it's what the C library function "time(...)" returns, despite not technically being obligated to. UXT is widely misunderstood, even by UNIX experts. The basic idea is that it's a count of seconds since 1970-01-01 00:00:00 UTC. The confusing part is that it is not a count of SI seconds; it is a count of UNIX seconds.

UNIX seconds were originally defined rather badly, but a working definition has emerged, become consensus, and is being gradually formalized. Now, as implicitly defined by the Single Unix Specification §4.15, UNIX seconds are the same as SI seconds, except for the last second before a leap second is to be inserted[1]. That last UNIX second is either stretched out to be two SI seconds long (thus covering the leap second), or else is repeated once (with the same effect)[2]. An example of UXT during a leap second was given in the table above. I describe UXT as pretending leap seconds had never been inserted into UTC in the first place[3]. Of course, "UTC doesn't have leap seconds" is as bald-faced as lies come, since literally the whole point of UTC is that it does. Most resources instead say very confusing/misleading things, such as "[UXT is] the number of seconds since Jan 1st 1970, 00:00 UTC, but without leap seconds" (ref), or simply don't distinguish SI seconds from UNIX seconds at all (e.g.).

As a direct and unfortunate consequence, most programmers erroneously still treat UNIX seconds the same as SI seconds because they don't know any better, and so there is still a godawful mess concerning real-world code. The problem is exacerbated by poor language support. For example, the C language function "difftime(...)" returns the difference between two UNIX timestamps. Internally, it is implemented as a simple subtraction, which means that the result is given in UNIX seconds, not SI seconds. But since the true length of a span of UNIX seconds depends on which timestamps bound it (you need to know whether a leap second fell inside), the result by itself is nearly meaningless: it might be SI seconds, but there might have been a leap second in there. You can't know, because differences are relative time, not absolute.

[1] Note: UNIX seconds, when they tick, tick at the same instant as TAI and UTC do. That is, UXT's offset from either, measured in SI seconds, is always an integer.
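The pitfall is easy to demonstrate. The following C sketch subtracts two ordinary UNIX timestamps that bracket the 2015 leap second from the table above; difftime reports 2, even though 3 SI seconds actually elapsed.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t before = 1435708799;  /* 2015-06-30 23:59:59 UTC */
        time_t after  = 1435708801;  /* 2015-07-01 00:00:01 UTC */

        /* difftime() is effectively a plain subtraction, so it counts
         * UNIX seconds and silently skips the inserted 23:59:60.      */
        printf("difftime()      : %.0f UNIX seconds\n", difftime(after, before));

        /* Recovering the true duration requires an external leap second
         * table; the C standard library does not provide one.          */
        printf("actually elapsed: 3 SI seconds\n");
        return 0;
    }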
Computerization II: Handling Leap Seconds

How are leap seconds handled by real-world systems? The short answer is, poorly. Even today, much, if not most[1], professional software gets it utterly wrong (enough to e.g. cause government agencies to suspend operations and to bring down servers at Google). Theoretically, UXT defines what should happen. In practice, UXT's self-delusional timekeeping, coupled with a legacy of ignorant, wrong code, leads to absurd glitches and policies. For example, stock exchanges worldwide closed for the 2015-06-30 23:59:60 UTC leap second. Milliseconds of difference are worth literally millions of dollars on a stock exchange. They were (rightly) afraid of bugs in their timekeeping algorithms. To accommodate bad software, which is completely clueless about the concept of leap seconds, some systems were configured to adjust their clocks' tick rates to "smear" the leap second out over a longer window of ordinary-looking seconds. Many timekeeping servers (which the computing device you're using right now probably syncs with regularly) implement this non-scientific, irreproducible policy, and there's pretty much nothing you can do about it.

[1] Seriously. "Most".
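For illustration, here is a toy linear smear in C. The 24-hour window is an assumed parameter (some public NTP services describe a smear of roughly that length); real deployments differ in window length, placement, and shape, which is exactly why the result is irreproducible.

    #include <stdio.h>

    #define SMEAR_WINDOW_S 86400.0  /* assumed 24-hour smear window */

    /* Toy linear smear: over a stretch of (SMEAR_WINDOW_S + 1) real SI
     * seconds, the clock reports only SMEAR_WINDOW_S seconds, absorbing one
     * inserted leap second with no 23:59:60 ever appearing.
     * 'elapsed_si' is SI seconds since the start of the smear window;
     * the return value is what the smeared clock reports.                  */
    static double smeared(double elapsed_si) {
        if (elapsed_si <= 0.0)
            return elapsed_si;                 /* before the window: unchanged  */
        if (elapsed_si >= SMEAR_WINDOW_S + 1.0)
            return elapsed_si - 1.0;           /* after: exactly 1 s has been absorbed */
        return elapsed_si * (SMEAR_WINDOW_S / (SMEAR_WINDOW_S + 1.0)); /* inside: run slow */
    }

    int main(void) {
        double mid = (SMEAR_WINDOW_S + 1.0) / 2.0;
        double end = SMEAR_WINDOW_S + 1.0;
        printf("mid-window: %.3f s reported after %.3f s elapsed\n", smeared(mid), mid);
        printf("window end: %.3f s reported after %.3f s elapsed\n", smeared(end), end);
        return 0;
    }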
What Should I Do?

When writing code or discussing time, your treatment of time should depend on what you're trying to do. Let's assume you want to be as accurate as possible, not necessarily as compatible as possible.

Summary

We have discussed three idealized, imaginary time scales: TCB, TCG, and TT. These were tied to measurements by atomic clocks (TAI) by means of the 1977 epoch. We can estimate these time scales using TAI—for example, using TT(TAI), which is a fixed offset, or better, a continually improving estimate (currently TT(BIPM19)).

It is useful to define a system of time (UT1) and a calendar (Gregorian) that are related to the Earth's rotation and orbit. The Gregorian calendar has leap years every four years, except in century years not divisible by 400. To wed TAI's accurate timekeeping to the useful time values of UT1, the UTC standard was eventually developed. UTC ticks at exactly the same instant and rate as TAI, but to keep it roughly in sync with UT1, leap seconds are occasionally removed from/inserted into UTC. UTC is the current worldwide standard for time. Local times are conversions of UTC to a timezone: one of a horribly complicated mess of offsets from UTC, now maintained by IANA.

On computers, time is measured by UNIX Time (which I call UXT). UXT is based on UTC; however, UXT handles UTC's leap seconds by pretending they didn't happen. This has led to hopelessly broken software and complicated workarounds.

Conclusion

I hope you've found this to be illuminating of the issues involved, or perhaps informative, or even useful. Again, your corrections and feedback are always welcome.