 _____                   _                  _____            _____       _ 
|     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
|   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
|_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2024-03-17 wilhelm haller and photocopier accounting

In the 1450s, German inventor Johannes Gutenberg designed the movable-type printing press, the first practical method of mass-duplicating text. After various other projects, he applied his press to the production of the Bible, yielding over one hundred copies of a text that previously had to be laboriously hand-copied.

His Bible was a tremendous cultural success, triggering revolutions not only in printed matter but also in religion. It was not a financial success: Gutenberg had apparently misspent the funds loaned to him for the project. Gutenberg lost a lawsuit and, as a result of the judgment, lost his workshop. He had made printing vastly cheaper, but it remained costly in volume. Sustaining the revolution of the printing press evidently required careful accounting.

For as long as there have been documents, there has been a need to copy. The printing press revolutionized printed matter, but setting up plates was a labor-intensive process, and a large number of copies needed to be produced at once for the process to be feasible. Into the early 20th century, it was not unusual for smaller-quantity business documents to be hand-copied. It wasn't necessarily for lack of duplicating technology; if anything, there were a surprising number of competing methods of duplication. But all of them had considerable downsides, not least among them the cost of treated paper stock and photographic chemicals.

The mimeograph was the star of the era. Mimeograph printing involved preparing a wax master, which would eventually be done by typewriter but was still a frustrating process when you only possessed a printed original. Photographic methods could be used to reproduce anything you could look at, but required expensive equipment and a relatively high skill level. The millennial office's proliferation of paper would not fully develop until the invention of xerography.

Xerography is not a common term today, first because of the general retreat of the Xerox corporation from the market, and second because it specifically identifies an analog process not used by modern photocopiers. In the 1960s, Xerox brought about a revolution in paperwork, though, mass-producing a reprographic machine that was faster, easier, and considerably less expensive to operate than contemporaries like the Photostat. Photocopiers were now simple and inexpensive enough that they ventured beyond the print shop, taking root in the hallways and supply rooms of offices around the nation.

They were cheap, but they were costly in volume. Cost per page for the photocopiers of the '60s and '70s could reach $0.05, approaching $0.40 in today's currency. The price of photocopies continued to come down, but the ease of photocopiers encouraged quantity. Office workers ran amok, running off 30, 60, even 100 pages of documents to pass around. The operation of photocopiers became a significant item in the budget of American corporations.

The continued proliferation of the photocopier called for careful accounting.


Wilhelm Haller was born in Swabia, in Germany. Details of his life, in the English language and seemingly in German as well, are sparse. His Wikipedia biography has the tone of a hagiography; a banner tells us that its neutrality is disputed.

What I can say for sure is that, in the 1960s, Haller found the start of his career as a sales apprentice for Hengstler. Hengstler, by then nearly a hundred years old, had made watches and other fine machinery before settling into the world of industrial clockwork. Among their products was a refined line of mechanical counters, of the same type we use today: hour meters, pulse counters, and volume meters, all driving a set of small wheels printed with the digits 0 through 9. As each wheel rolled from 9 to 0, a peg pushed a lever to advance the next wheel by one digit. They had numerous applications in commercial equipment, and Haller must have become quite familiar with them before he moved to New York City, representing Hengstler products to the American market.

Perhaps he worked in an office where photocopier expenses were a complaint. I wish there was more of a story behind his first great invention, but it is quite overshadowed by his later, more abstract work. No source I can find cares to go deeper than to say that, along with Hengstler employee Paul Buser, he founded an American subsidiary of Hengstler called the Hecon Corporation. I can speculate somewhat confidently that Hecon was short for "Hengstler Counter," as Hecon dealt entirely in counters. More specifically, Hecon introduced a new application of the mechanical counter invented by Haller himself: the photocopier key counter.

Xerox photocopiers already included wiring that distributed a "pulse per page" signal, used to advance a counter for scheduled maintenance. The Hecon key counter was a simple elaboration on this idea: a socket and wiring harness, furnished by Hecon, was installed on the photocopier. An "enable" circuit for the photocopier passed through the socket, and had to be jumpered for the photocopier to function. The socket also provided a pulse per page wire.

Photocopier users, typically each department, were issued a Hecon mechanical counter that fit into the socket. To make photocopies, you had to insert your key counter into the socket to enable the photocopier. The key counter was not resettable, so the accounting department could periodically collect key counters and read the number displayed on them like a utility meter. Thus the name key counter: it was a key to enable the photocopier, and a counter to measure the keyholder's usage.

Key counters were a massive success and proliferated on office photocopiers during the '70s. Xerox, and then their competitors, bought into the system by providing a convenient mounting point and wiring harness connector for the key counter socket. You could find photocopiers that required a Hecon key counter well into the 1990s. Threads on office machine technician forums about adapting the wiring to modern machines suggest that there were some users into the 2010s.


Hecon would not allow the technology to stagnate. The mechanical key counter was reliable but had to be collected or turned in for the counter to be read. The Hecon KCC, introduced by the mid-1990s, replaced key counters with a microcontroller. Users entered an individual PIN or department number on a keypad mounted to the copier and connected to the key counter socket. The KCC enabled the copier and counted the page pulses, totalizing them into a department account that could be read out later from the keypad or from a computer by serial connection.
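
To make the bookkeeping concrete, here is a minimal sketch of the kind of totalizing a KCC-style controller performs: a valid PIN enables the copier, page pulses accumulate against the active department, and totals can be read out later. This is an illustration of the idea, not Hecon's firmware, and the PINs and department names are invented.

    # Toy model of PIN-based copy accounting: a valid PIN enables the copier,
    # page pulses are totalized per department, and totals are read out later.
    # PINs and department names here are invented for illustration.

    class CopyController:
        def __init__(self, accounts):
            self.accounts = accounts                   # PIN -> department
            self.totals = {d: 0 for d in accounts.values()}
            self.active = None                         # currently enabled dept

        def enter_pin(self, pin):
            self.active = self.accounts.get(pin)
            return self.active is not None             # True enables the copier

        def page_pulse(self):                          # one pulse per page
            if self.active is not None:
                self.totals[self.active] += 1

        def log_out(self):
            self.active = None

    controller = CopyController({"1234": "Accounting", "5678": "Engineering"})
    if controller.enter_pin("5678"):
        for _ in range(30):                            # a 30-page run
            controller.page_pulse()
    controller.log_out()
    print(controller.totals)          # {'Accounting': 0, 'Engineering': 30}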

Hecon was not only invested in technological change, though. At some point, Hecon became a major component of Hengstler, with more Hengstler management moving to its New Jersey headquarters. "Must have good command of German and English," a 1969 newspaper listing for a secretarial job stated, before advising applicants to call a Mr. Hengstler himself.

By 1976, the "Liberal Benefits" in their job listing had been supplemented by a new feature: "Hecon Corp, the company that pioneered & operates on flexible working hours."

During the late '60s, Wilhelm Haller seems to have returned to Germany and shifted his interests beyond photocopiers to the operations of corporations themselves. Working with German management consultant Christel Kammerer, he designed a system for mechanical recording of employees' working hours.

This was not the invention of the time clock. The history of the time clock is obscure, but time clocks were already in use during the 19th century. Haller's system implemented a more specific model of working hours promoted by Kammerer: flexitime (more common in Germany) or flextime (more common in the US).

Flextime is a simple enough concept and gained considerable popularity in the US during the 1970s and 1980s, making it almost too obvious to "invent" today. A flextime schedule defines "core hours," such as 11a-3p, during which employees are required to be present in the office. Outside of core hours, employees are free to come and go so long as their working hours total eight each day. Haller's time clock invention was, like the key counter, a totalizing counter: one that recorded not when employees arrived and left, but how many hours they were present each day.
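
A toy version of that arithmetic, assuming an 11:00-15:00 core window and an eight-hour target: the punch times below are invented, and the check is deliberately loose, verifying only arrival before core hours, departure after them, and the daily total.

    # A loose flextime check in the spirit of a totalizing time recorder:
    # arrive by the start of core hours, leave after they end, and accumulate
    # eight hours in total. Times are decimal hours; punches are invented.

    CORE_START, CORE_END = 11.0, 15.0
    REQUIRED_HOURS = 8.0

    def day_ok(punches):
        """punches: list of (clock_in, clock_out) pairs for one day."""
        total = sum(out - in_ for in_, out in punches)
        in_by_core = min(in_ for in_, _ in punches) <= CORE_START
        out_after_core = max(out for _, out in punches) >= CORE_END
        return total >= REQUIRED_HOURS and in_by_core and out_after_core

    print(day_ok([(7.5, 12.0), (12.5, 16.0)]))   # True: 8 hours, spans the core window
    print(day_ok([(12.0, 20.0)]))                # False: 8 hours, but arrives after 11:00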

It's unclear if Haller still worked for Hengstler, but he must have had some influence there. Hecon was among the first, perhaps the first, companies to introduce flextime in the United States.


Photocopier accounting continued apace. Dallas Semiconductor and Sun Microsystems popularized the iButton during the late 1990s, a compact and robust device that could store data and perform cryptographic operations. Hecon followed in the footsteps of the broader stored value industry, introducing the Hecon Quick Key system that used iButtons for user authentication at the photocopier. Copies could even be "prepaid" onto an iButton, ideal for photocopiers with a regular cast of outside users, like those in courthouses and county clerk's offices.

The Quick Key had a distinctive, angular copier controller apparently called the Base 10. It had the aesthetic vibes of a '90s contemporary art museum, all white and geometric, although surviving examples have yellowed to the pallor of dated office equipment.

As the Xerographic process was under development, British Bible scholar Hugh Schonfield spent the 1950s developing his Commonwealth of World Citizens. Part micronation, part NGO, the Commonwealth had a mission of organizing its members throughout many nations into a world community that would uphold the ideals of equality and peace while carrying out humanitarian programs.

Adopting Esperanto as its language, it renamed itself to the Mondcivitan Republic, publishing a provisional constitution and electing a parliament. The Mondcivitan Republic issued passports; some of its members tried to abandon citizenship of their own countries. It was one of several organizations promoting "world citizenship" in the mid-century.

In 1972, Schonfield published a book, Politics of God, describing the organization's ideals. Those politics were apparently challenging. While the Mondcivitan Republic operated various humanitarian and charitable programs through the '60s and '70s, it failed to adopt a permanent constitution and by the 1980s had effectively dissolved. Sometime around then, Wilhelm Haller joined the movement and established a new manifestation of the Mondcivitan Republic in Germany. Haller applied to cancel his German citizenship; he would be a citizen of the world.

As a management consultant and social organizer, he founded a series of progressive German organizations. Haller's projects reached their apex in 2004, with the formation of the "International Leadership and Business Society," a direct extension of the Mondcivitan project. That same year, Haller passed away, a victim of thyroid cancer.


A German progressive organization, Lebenshaus Schwäbische Alb eV, published a touching obituary of Haller. Hengstler and Hecon are mentioned only as "a Swabian factory," and his work on flextime earns a short paragraph.

In translation:

He was able to celebrate his 69th birthday sitting in a wheelchair with a large group of his family and the circle of friends from the Reconciliation Association and the Life Center. With a weak and barely audible voice, he took part in our discussion about new financing options for the local independent Waldorf school from the purchasing power of the affected parents' homes.

Haller is, to me, a rather curious type of person. He was first an inventor of accounting systems, second a management consultant, and then a social activist motivated by both his Christian religion and belief in precision management. His work with Hengstler/Hecon gave way to support and adoption programs for disadvantaged children, supportive employment programs, and international initiatives born of unique mid-century optimism.

Flextime, he argued, freed workers to live their lives on their own schedules, while his timekeeping systems maintained an eight-hour workday with German precision. The Hecon key counter, a footnote of his career, perhaps did the same on a smaller scale: duplication was freed from the print shop but protected by complete cost recovery. Later in his career, he would set out to unify the world.

But then, it's hard to know what to make of Haller. Almost everything written about him seems to be the work of a true believer in his religious-managerial vision. I came for a small detail of photocopier history, and left with this strange leader of West German industrial thought, a management consultant who promised to "humanize" the workplace through time recording.

For him, a new building in the great "city on a hill" required only two things: careful commercial accounting with the knowledge of our own limited possibilities, and a deep trust in God, who knows how to continue when our own strength has come to an end.

--------------------------------------------------------------------------------

>>> 2024-03-09 the purple streetscape

Across the United States, streets are taking on a strange hue at night. Purple.

Purple streetlights have been reported in Tampa, Vancouver, Wichita, Boston. They're certainly in evidence here in Albuquerque, where Coal through downtown has turned almost entirely to mood lighting. Explanations vary. When I first saw the phenomenon, I thought of fixtures that combine RGB elements and assumed that one of the color channels had failed.

Others on the internet offer more involved explanations. "A black light surveillance network," one conspiracist calls them, as he shows his mushroom-themed blacklight poster fluorescing on the side of a highway. I remain unclear on what exactly a shadowy cabal would gain from installing blacklights across North America, but I am nonetheless charmed by his fluorescent fingerpainting demonstration. The topic of "blacklight" is a somewhat complex one with LEDs.

Historically, "blacklight" had referred to long-wave UV lamps, also called UV-A. These lamps emitted light around 400nm, beyond violet light, thus the term ultraviolet. This light is close to, but not quite in, the visible spectrum, which is ideal for observing the effect of fluorescence. Fluorescence is a fascinating but also mundane physical phenomenon in which many materials will absorb light, becoming excited, and then re-emit it as they relax. The process is not completely efficient, so the re-emited light is longer in wavelength than the absorbed light.

Because of this loss of energy, a fluorescent material excited by a blacklight will emit light down in the visible spectrum. The effect seems a bit like magic: the fluorescence is far brighter, to the human eye, than the ultraviolet light that incited it. The trouble is that the common use of UV light to show fluorescence leads to a bit of a misconception that ultraviolet light is required. Not at all: fluorescent materials will re-emit just about any light at a slightly longer wavelength. The emitted light is relatively weak, though, and under broad spectrum lighting is unlikely to stand out against the ambient lighting. Fluorescence always occurs; it's just much more visible under a light source that humans can't see.

When we consider LEDs, though, there is an economic aspect to consider. The construction of LEDs that emit UV light turns out to be quite difficult. There are now options on the market, but only relatively recently, and they run a considerable price premium compared to visible wavelength LEDs. The vast majority of "LED blacklights" are not actually blacklights; they don't actually emit UV. They're just blue. Human eyes aren't so sensitive to blue, especially the narrow emission of blue LEDs, and so these blue "blacklights" work well enough for showing fluorescence, although not as well as a "real" blacklight (still typically gas discharge).

This was mostly a minor detail of theatrical lighting until COVID, when some combination of unknowing buyers and unscrupulous sellers led to a wave of people using blue LEDs in an attempt to sanitize things. That doesn't work: long-wave UV already barely has enough energy to have much of a sanitizing effect, and blue LEDs have none at all. For sanitizing purposes you need short wave UV, or UV-C, which has so much energy that it is almost ionizing radiation. The trouble, of course, is that this energy damages most biological things, including us. UV-C lights can quickly cause mild (but very unpleasant) eye damage called flashburn or "welder's eye," and more serious exposure can cause permanent damage to your eyes and skin. Funny, then, that all the people waving blue LEDs over their groceries on Instagram reels were at least saving themselves from an unpleasant learning experience.

You can probably see how this all ties back to streetlights. The purple streetlights are not "blacklights," but the clear fluorescence of our friend's psychedelic art tells us that they are emitting energy mostly at the short end of the visible spectrum, allowing the longer wave light emitted by the poster to appear inexplicably bright to our eyes. We are apparently looking at some sort of blue LED.

Those familiar with modern LED lighting probably easily see what's happening. LEDs are largely monochromatic lighting sources; they emit a single wavelength, which results in very poor color rendering, both aesthetically unpleasing and a problem for drivers' visibility. While some fixtures do indeed combine LEDs of multiple colors to produce white output, there's another technique that is less expensive, more energy efficient, and produces better quality light. Today's inexpensive, good quality LED lights have been enabled by phosphor coatings.

Here's the idea: LEDs of a single color illuminate a phosphor. Phosphorescence is actually a closely related phenomenon to fluorescence, but involves kicking an electron up to a different spin state. Fewer materials exhibit this effect than fluorescence, but chemists have devised synthetic phosphors that can sort of "rearrange" light energy within the spectrum.

Blue LEDs are the most energy efficient, so a typical white LED light uses blue LEDs coated in a phosphor that absorbs a portion of the blue light and re-emits it at longer wavelengths. The resulting spectrum, the combination of some of the blue light passing through and red and green light emitted by the phosphor, is a high-CRI white light ideal for street lighting.
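
As a toy model of what the phosphor is doing (and what its failure takes away), consider a spectrum built from two made-up Gaussian peaks, a narrow blue pump and a broad yellow phosphor re-emission. The peak positions, widths, and weights are representative values, not measurements of any real fixture.

    # Toy phosphor-conversion model: narrow blue LED emission plus broad
    # phosphor re-emission sums to something white-ish; remove the phosphor
    # term and only the deep blue pump remains. Numbers are representative,
    # not measurements of any particular fixture.
    import numpy as np

    wavelengths = np.arange(380, 781, 1)              # visible range, nm

    def peak(center, width, height):
        return height * np.exp(-((wavelengths - center) / width) ** 2)

    blue_pump = peak(450, 12, 1.0)                    # narrow blue LED emission
    phosphor  = peak(560, 70, 0.8)                    # broad yellow re-emission

    healthy = 0.4 * blue_pump + phosphor              # some blue leaks through
    failed  = blue_pump                               # phosphor gone: blue only

    def mean_wavelength(spectrum):
        return np.sum(wavelengths * spectrum) / np.sum(spectrum)

    print(round(mean_wavelength(healthy)))            # ~550 nm: broad, whitish mix
    print(round(mean_wavelength(failed)))             # 450 nm: deep blue-violet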

Incidentally, one of the properties of phosphorescence that differentiates it from fluorescence is that phosphors take a while to "relax" back to their lower energy state. A phosphor will continue to glow after the energy that excited it is gone. This effect has long been employed for "glow in the dark" materials that continue to glow softly for an extended period of time after the room goes dark. During the Cold War, the Civil Defense Administration recommended outlining stair treads and doors with such phosphorescent tape so that you could more safely navigate your home during a blackout. The same idea is still employed aboard aircraft and ships, and I suppose you could still do it to your house; it would be fun.

Phosphor-conversion white LEDs use phosphors that minimize this effect but they still exhibit it. Turn off a white LED light in a dark room and you will probably notice that it continues to glow dimly for a short time. You are observing the phosphor slowly relaxing.

So what of the purple streetlights? The phosphor has failed, at least partially, and the lights are emitting the natural spectrum of their LEDs rather than the "adjusted" spectrum produced by the phosphor. The exact reason for this failure doesn't seem to have been publicized, but judging by the apparently rapid onset, most people think the phosphor is delaminating and falling off of the LEDs rather than slowly burning away or undergoing some sort of corrosion. They may have simply not used a very good glue.

So we have a technical explanation: white LED streetlights are not white LEDs but blue LEDs with phosphor conversion. If the phosphor somehow fails or comes off, their spectrum shifts towards deep blue. Some combination of remaining phosphor on the lights and environmental conditions (we are not used to seeing large areas under monochromatic blue light) causes this to come off as an eerie purple.

There is also, though, a system question. How is it that so many streetlights across so many cities are demonstrating the same failure at around the same time?

The answer to that question is monopolization.

Virtually all LED street lighting installed in North America is manufactured by Acuity Brands. Based in Atlanta, Acuity is a hundred-year-old industrial conglomerate that originally focused on linens and janitorial supplies. In 1969, though, Acuity acquired Lithonia: one of the United States' largest manufacturers of area lighting. Acuity gained a lighting division, and it was on the warpath. Through a huge number of acquisitions, everything from age-old area lighting giants like Holophane to VC-funded networked lighting companies has become part of Acuity.

In the meantime, GE's area lighting division petered out along with the rest of GE (they recently sold their entire lighting division to a consumer home automation company). Directories of street lighting manufacturers now list Acuity followed by a list of brands Acuity owns. Their dominant competitors for traditional street lighting are probably Cree and Cooper (part of Eaton), but both are well behind Acuity in municipal sales.

Starting around 2017, Acuity started to manufacture defective lights. The exact nature of the defect is unclear, but it seems to cause abrupt failure of the phosphor after around five years. And here we are, over five years later, with purple streets.

The situation is not quite as bad as it sounds. Acuity offered a long warranty on their street lighting, and the affected lights are still covered. Acuity is sending contractors to replace defective lights at their expense, but they have to coordinate with street lighting operators to identify defective lights and schedule the work. It's a long process. Many cities have over a thousand lights to replace, but finding them is a problem on its own.

Most cities have invested in some sort of smart streetlighting solution. The most common approach is a module that plugs into the standard photocell receptacle on the light and both controls the light and reports energy use over a municipal LTE network. These modules can automatically identify many failure modes based on changes in power consumption. The problem is that the phosphor failure is completely nonelectrical, so the faulty lights can't be located by energy monitoring.
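
A sketch of why, assuming a module that simply compares measured draw against the fixture's rated wattage (the wattages and thresholds below are invented):

    # Energy monitoring catches electrical failures but not phosphor failures:
    # the purple fixture still draws its nominal wattage. Values are invented.

    RATED_WATTS = 150
    TOLERANCE = 0.15                       # flag anything 15% off nominal

    def classify(measured_watts):
        if measured_watts < RATED_WATTS * (1 - TOLERANCE):
            return "fault: low draw (dead driver or open LED string?)"
        if measured_watts > RATED_WATTS * (1 + TOLERANCE):
            return "fault: high draw (shorted module?)"
        return "looks healthy"

    print(classify(12))     # dark fixture, flagged immediately
    print(classify(148))    # purple fixture: nominal draw, nothing flagged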

So, while I can't truly rule out the possibility of a blacklight surveillance network, I'd suggest you report purple lights to your city or electrical utility. They're likely already working with Acuity on a replacement campaign, but they may not know the exact scale of the problem yet.


While I'm at it, let's talk about another common failure mode of outdoor LED lighting: flashing. LED lights use a constant current power supply (often called a driver in this context) that regulates the voltage applied to the LEDs to achieve their rated current. Unfortunately, several failure modes can cause the driver to cycle continuously. Consider the common case of an LED module that has failed in such a way that it shorts at high temperature. The driver turns on and runs until the faulty module gets warm enough to short, at which point the driver shuts off on overcurrent protection. The process repeats indefinitely. Some drivers have a "soft start" feature and some failure modes cause current to rise beyond limits over time, so it's not unusual for these faulty lights to fade in before shutting off.
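
A toy simulation of that cycle, with completely arbitrary numbers, just to show the shape of the behavior:

    # An LED module that shorts above some temperature, driven by a supply
    # that trips on overcurrent and retries once things cool. All numbers
    # are arbitrary; each step stands in for a fraction of a second.

    temp, on = 25.0, True
    for step in range(12):
        if on:
            temp += 15                    # heats while driven
            if temp > 80:                 # hot enough to short...
                on = False                # ...driver trips on overcurrent
        else:
            temp -= 10                    # cools while off
            if temp < 60:
                on = True                 # driver retries; the cycle repeats
        print(f"step {step:2d}: {'ON ' if on else 'off'} temp={temp:.0f}C")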

It's actually a very similar situation to the cycling that gas discharge street lighting used to show, but as is the way of electronics, it happens faster. Aged sodium bulbs would often cause the ballast to hit its current limit over the span of perhaps five minutes, cycling the light on and off. Now it often happens twice in a second.

I once saw a parking lot where nearly every light had failed this way. I would guess that lightning had struck, creating a transient that damaged all of them at once. It felt like a silent rave; only a little color could have made it better. Unfortunately they were RAB, not Acuity, and the phosphor was holding on.

--------------------------------------------------------------------------------

>>> 2024-03-01 listening in on the neighborhood

Last week, someone leaked a spreadsheet of SoundThinking sensors to Wired. You are probably asking "What is SoundThinking," because the company rebranded last year. They used to be called ShotSpotter, and their outdoor acoustic gunfire detection system still goes by the ShotSpotter name.

ShotSpotter has attracted a lot of press and plenty of criticism for the gunfire detection service they provide to many law enforcement agencies in the US. The system involves installing acoustic sensors throughout a city, which use some sort of signature matching to detect gunfire and then use the relative arrival times at multiple sensors to estimate the likely source.
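
The localization side of that is, in general terms, multilateration: find the point whose predicted differences in arrival time best match the observed ones. Here is a rough sketch of the idea with made-up sensor positions and a brute-force grid search; it is not SoundThinking's actual algorithm.

    # Locate a sound source from relative arrival times at several sensors
    # (generic multilateration, not SoundThinking's algorithm). Positions are
    # in meters, times in seconds, and everything here is made up.
    import math

    SPEED_OF_SOUND = 343.0
    sensors = [(0, 0), (800, 0), (0, 800), (800, 800)]
    true_source = (520, 310)

    def arrival_times(src):
        return [math.dist(src, s) / SPEED_OF_SOUND for s in sensors]

    observed = arrival_times(true_source)

    def residual(candidate):
        # compare arrival-time differences relative to the first sensor,
        # since the absolute time of the shot is unknown
        pred = arrival_times(candidate)
        return sum(((pred[i] - pred[0]) - (observed[i] - observed[0])) ** 2
                   for i in range(1, len(sensors)))

    best = min(((x, y) for x in range(0, 801, 10) for y in range(0, 801, 10)),
               key=residual)
    print(best)                           # lands on (520, 310)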

One of the principal topics of criticism is the immense secrecy with which they operate: ShotSpotter protects information on the location of its sensors as if it were a state secret, and does not disclose them even to the law enforcement agencies that are its customers. This secrecy attracts accusations that ShotSpotter's claims of efficacy cannot be independently validated, and that ShotSpotter is attempting to suppress research into the civil rights impacts of its product.

I have encountered this topic before: the Albuquerque Police Department is a ShotSpotter customer, and during my involvement in police oversight, APD was evasive in response to any questions about the system and resisted efforts to subject its surveillance technology purchases to more outside scrutiny. Many assumed that ShotSpotter coverage was concentrated in disadvantaged parts of the city, an unsurprising outcome but one that could contribute to systemic overpolicing. APD would not comment.

I have always assumed that it would not really be that difficult to find the ShotSpotter sensors, at least if you have my inclination to examine telephone poles. While the Wired article focuses heavily on sensors installed on buildings, it seems likely that in environments like Albuquerque with city-operated lighting and a single electrical utility, they would be installed on street lights. That's where you find most of the technology the city fields.

The thing is, I didn't really know what the sensors looked like. I've seen pictures, but I know they were quite old, and I assumed the design had gotten more compact over time. Indeed it has.

ShotSpotter sensor on light pole

An interesting thing about the Wired article is that it contains a map, but the MapBox embed produced with Flourish Studio had a surprisingly low maximum zoom level. That made it more or less impossible to interpret the locations of the sensors exactly. I suspect this was an intentional decision by Wired to partially obfuscate the data; if so, it was not an effective one. It was a simple matter to find the JSON payload the map viewer was using for the PoI overlay and then convert it to KML.
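
The conversion really is about that simple. A sketch of the shape of the job, with placeholder field names ("latitude", "longitude", "label") standing in for whatever the real payload used:

    # Read a JSON list of points and write them out as KML placemarks. The
    # field names and file names here are placeholders, not the real payload.
    import json

    def points_to_kml(json_path, kml_path):
        with open(json_path) as f:
            points = json.load(f)
        placemarks = "\n".join(
            "  <Placemark><name>{}</name><Point><coordinates>{},{}</coordinates>"
            "</Point></Placemark>".format(
                p.get("label", ""), p["longitude"], p["latitude"])
            for p in points)
        with open(kml_path, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
                    + placemarks + "\n</Document></kml>\n")

    points_to_kml("sensors.json", "sensors.kml")

(KML wants coordinates in longitude,latitude order, which is about the only trap.)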

I worried that the underlying data would be obscured; it was not. The coordinates are exact. So, I took the opportunity to enjoy a nice day and went on an expedition.

ShotSpotter sensor in a neighborhood

The sensors are pretty much what I imagined, innocuous beige boxes clamped to street light arms. There are a number of these boxes to be found in modern cities. Some are smart meter nodes, some are base stations for municipal data networks, others collect environmental data. Some are the police, listening in on your activities.

This is not as hypothetical of a concern as it might sound. Conversations recorded by ShotSpotter sensors have twice been introduced as evidence in criminal trials. In one case the court allowed it, in another the court did not. The possibility clearly exists, and depending on interpretation of state law, it may be permissible for ShotSpotter to record conversations on the street for future use as evidence.

ShotSpotter sensor in a neighborhood

This ought to give us pause, as should the fact that ShotSpotter has been compellingly demonstrated to manipulate their "interpretation" of evidence to fit a prosecutor's narrative---even when ShotSpotter's original analysis contradicted it.

But pervasive surveillance of urban areas and troubling use of that evidence is nothing new. Albuquerque already has an expansive police-operated video surveillance network connected to the Real-Time Crime Center. APD has long used portable automated license plate readers (ALPR) under cover of "your speed is" trailers, and more recently has installed permanent ALPR at major intersections in the city.

All of this occurs with virtually no public oversight or even public awareness.

ShotSpotter sensor in a neighborhood

What most surprised me is the density of ShotSpotter sensors. In my head, I assumed they were fairly sparse. A Chicago report on the system says there are 20 to 25 per square mile. Density in Albuquerque is lower, probably reflecting the wide streets and relative lack of high rises. Still, there are a lot of them. 721 in Albuquerque, a city of about 190 square miles. At present, only parts of the city are covered.

Map of ShotSpotter sensors in Albuquerque

And those coverage decisions are interesting. The valley (what of it is in city limits) is well covered, as is the west side outside of Coors/Old Coors. The International District, of course, is dense with sensors, as is the inner NE, bounded roughly by the freeways out to Louisiana and Montgomery.

Conspicuously empty is the rest of the northeast, from UNM's north campus area to the foothills. Indian School Road runs almost its entire east-side length without passing any sensors.

ShotSpotter sensor in a neighborhood

The reader can probably infer how this coverage pattern relates to race and class in Albuquerque. It's not perfect, but the distance from your house to a ShotSpotter sensor correlates fairly well with your household income. The wealthier you are, the less surveilled you are.

The "pocket of poverty" south of Downtown where I live, the historically Spanish Barelas and historically Black South Broadway, are predictably well covered. All of the photos here were taken within a mile, and I did not come even close to visiting all of the sensors. Within a one mile radius of the center of Barelas, there are 31 sensors.

ShotSpotter sensor in a neighborhood

Some are conspicuous. Washington Middle School, where 13-year-old Bennie Hargrove was shot by another student, has a sensor mounted at its front entrance. Another sensor is in the cul de sac behind the Coors and I-40 Walmart, where a body was found in a burned-out car. Perhaps the deep gulch of the freeway poses a coverage challenge; there are two more less than a thousand feet away.

In the Downtown Core, buildings were preferred to light poles. The PNM building, the Anasazi condos, and the Banque building are all feeding data into the city's failing scheme of federal prosecutions for downtown gun crime.

The closest sensor to the wealthy Heights is at Embudo Canyon, and coverage stops north of Central in the affluent Nob Hill residential area. Old Town is almost completely uncovered, as is the isolationist Four Hills.

Highland High School has a sensor on its swimming pool building. The data says there are two at the intersection of Gibson and Chavez, probably an error; it also says there are two sensors on "Null Island." Don't worry about coverage in the south campus area, though. There are 16 in the area bounded by I-25 to Yale and Gibson to Coal.

Detail of a ShotSpotter sensor

KOB quotes APD PIO Gallegos saying "We don't know, technically, where all the sensors are." Well, I suppose they do now; the leak has been widely reported on. APD received about 14,000 ShotSpotter reports last year. The accuracy of these reports, in terms of their correctly identifying gunfire, is contested. SoundThinking claims impressive statistics, but has actively resisted independent evaluation. A Chicago report found that only 11.3% of ShotSpotter reports could be confirmed as gunfire. APD, for its part, reports a few hundred suspects or victims identified as a result of ShotSpotter reports.

APD has used a local firearms training business, Calibers, to fire blanks around the city to verify detection. They say the system performed well.

But, if asked, they provide a form letter written by ShotSpotter. Their contract prohibits the disclosure of any actual data.

--------------------------------------------------------------------------------

>>> 2024-02-25 a history of the tty

It's one of those anachronisms that is deeply embedded in modern technology. From cloud operator servers to embedded controllers in appliances, there must be uncountable devices that think they are connected to a TTY.

I will omit the many interesting details of the Linux terminal infrastructure here, as it could easily fill its own article. But most Linux users are at least peripherally aware that the kernel tends to identify both serial devices and terminals as TTYs, assigning them filesystem names in the form of /dev/tty*. Probably a lot of those people remember that this stands for teletype or perhaps teletypewriter, although in practice the term teleprinter is more common.

Indeed, from about the 1950s (the genesis of electronic computers) to the 1970s (the rise of video display terminals/VDTs), teleprinters were the most common form of interactive human-machine interface. The "interactive" distinction here is important; early computers were built primarily around noninteractive input and output, often using punched paper tape. Interactive operation was a more advanced form of computing, one that took almost until the widespread use of VDTs to mature. Look into the computers of the 1960s especially, the early days of interactive operation, and you will be amazed at how bizarre and unfriendly the command interface is. It wasn't really intended for people to use; it was for the Computer Operator (who had attended a lengthy training course on the topic) to troubleshoot problems in the noninteractive workload.

But interactive computing is yet another topic I will one day take on. Right now, I want to talk about the heritage of these input/output mechanisms. Why is it that punched paper tape and the teleprinter were the most obvious way to interact with the first electronic computers? As you might suspect, the arrangement was one of convenience. Paper tape punches and readers were already being manufactured, as were teleprinters. They were both used for communications.

Most people who hear about the telegraph think of Morse code keys and rhythmic beeping. Indeed, Samuel Morse is an important figure in the history of telegraphy. The form of "morse code" that we tend to imagine, though, a continuous wave "beep," is mostly an artifact of radio. For telegraphs, no carrier wave or radio modulation was required. You can transmit a message simply by interrupting the current on a wire.

This idea is rather simple to conceive and even to implement, so it's no surprise that telegraphy has a long history. By the end of the 18th century inventors in Europe and Great Britain were devising simple electrical telegraphs. These early telegraphs had limited ranges and even more limited speeds, though, a result mostly of the lack of a good way to indicate to the operator whether or not a current was present. It is an intriguing aspect of technical history that the first decades of experimentation with electricity were done with only the clumsiest means of measuring or even detecting it.

In 1820, three physicists or inventors (these were vague titles at the time) almost simultaneously worked out that electrical current induced a magnetic field. They invented various ways of demonstrating the effect, usually by deflecting a magnetic needle. This innovation quickly led to the "electromagnetic telegraph," in which a telegrapher operates a key to switch current, which causes a needle or flag to deflect at the other end of the circuit. This was tremendously simpler than previous means of indicating current and was applied almost immediately to build the first practical telegraphs. During the 1830s, the invention of the relay allowed telegraph signals to be repeated or amplified as the potential weakened (the origin of the term "relay"). Edward Davy, one of the inventors of the relay, also invented the telegraph recorder.

From 1830 to 1850, so many people invented so many telegraph systems that it is difficult to succinctly describe how an early practical telegraph worked. There were certain themes: for non-recording systems, a needle was often deflected one way or the other by the presence or absence of current, or perhaps by polarity reversal. Sometimes the receiver would strike a bell or sound a buzzer with each change. In recording systems, a telegraph printer or telegraph recorder embossed a hole or left a small mark on a paper tape that advanced through the device. In the first case, the receiving operator would watch the needle, interpreting messages as they came. In the second case, the operator could examine the paper tape at their leisure, interpreting the message based on the distances between the dots.

Recording systems tended to be used for less time-sensitive operations like passing telegrams between cities, while non-recording telegraphs were used for more real-time applications like railroad dispatch and signaling. Regardless, it is important to understand that the teleprinter is about as old as the telegraph. Many early telegraphs recorded received signals onto paper.

The interpretation of telegraph signals was as varied as the equipment that carried them. Samuel Morse popularized the telegraph in the United States based in part on his alphabetic code, but it was not the first. Gauss famously devised a binary encoding for alphabetic characters a few years earlier, which resembles modern character encodings more than Morse's scheme. In many telegraph applications, though, there was no alphabetic code at all. Railroad signal telegraphs, for example, often used application-specific schemes that encoded types of trains and routes instead of letters.

Morse's telegraph system was very successful in the United States, and in 1861 a Morse telegraph line connected the coasts. It surprises some that a transcontinental telegraph line was completed some fifty years before the transcontinental telephone line. Telegraphy is older, though, because it is simpler. There is no analog signaling involved; simple on/off or polarity signals can be amplified using simple mechanical relays. The tendency to view text as more complex than voice (SMS came after the first cellphones, for one) has more to do with the last 50 years than the 50 years before.

The Morse telegraph system was practical enough to spawn a large industry, but suffered a key limitation: the level of experience required to key and copy Morse quickly and reliably is fairly high. Telegraphers were skilled and, thus, fairly well paid and sometimes in short supply [1]. To drive down the cost of telegraphy, there would need to be more automation.

Many of the earliest telegraph designs had employed parallel signaling. A common scheme was to provide one wire for each letter, and a common return. These were impractical to build over any meaningful distance, and Morse's one-wire design (along with one-wire designs by others) won out for obvious reasons. The idea of parallel signaling stayed around, though, and was reintroduced during the 1840s with a simple form of multiplexing: one "logical channel" for each letter could be combined onto one wire using time division muxing, for example by using a transmitter and receiver with synchronized spinning wheels. Letters would be represented by positions on the wheel, and a pulse sent at the appropriate point in the revolution to cause the teleprinter to produce that letter. With this alphabetic teleprinter, an experienced operator was no longer required to receive messages. They appeared as text on a strip of paper, ready for an unskilled clerk to read or paste onto a message card.

This system proved expensive but still practical to operate, and a network of such alphabetic teleprinters was built in the United States during the mid 19th century. A set of smaller telegraph companies operating one such system, called the Hughes system after its inventor, joined together to become the Western Union Telegraph Company. In a precedent that would be followed even more closely by the telephone system, practical commercial telegraphy was intertwined with a monopoly.

The Hughes system was functional but costly. The basic idea of multiplexing across 30 channels was difficult to achieve with mechanical technology. Émile Baudot was employed by the French telegraph service to find a way to better utilize telegraph lines. He first developed a proper form of multiplexing, using synchronized switches to combine five Hughes system messages onto one wire and separate them again at the other end. Likely inspired by his close inspection of the Hughes system and its limitations, Baudot went on to develop a more efficient scheme for the transmission of alphabetic messages: the Baudot code.

Baudot's system was similar to the Hughes system in that it relied on a transmitter and receiver kept in synchronization to interpret pulses as belonging to the correct logical channel. He simplified the design, though, by allowing for only five logical channels. Instead of each pulse representing a letter, the combination of all five channels would be used to form one symbol. The Baudot code was a five-bit binary alphabetic encoding, and most computer alphabetic encodings to the present day are at least partially derived from it.
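
The arithmetic behind five bits: 2^5 gives only 32 combinations, not enough for letters, digits, and punctuation all at once, so Baudot-descended codes reserve a couple of combinations as "letter shift" and "figure shift" and let most codes mean two things. The mapping below is an illustrative toy, not the real ITA2 table.

    # Five bits give 32 combinations; shift codes let most of them do double
    # duty. The codes below are a made-up illustration, not actual Baudot/ITA2.

    letters = {format(i, "05b"): chr(ord("A") + i) for i in range(26)}
    figures = {format(i, "05b"): "0123456789-,.:()?!'&+=/"[i] for i in range(23)}

    def decode(stream):
        shift, out = letters, []
        for code in stream:
            if code == "11111":           # toy "letter shift" control code
                shift = letters
            elif code == "11110":         # toy "figure shift" control code
                shift = figures
            else:
                out.append(shift.get(code, "?"))
        return "".join(out)

    #              H        I       FIGS     5
    print(decode(["00111", "01000", "11110", "00101"]))   # prints "HI5"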

One of the downsides of Baudot's design is that it was not quite as easy to operate as telegraphy companies would have hoped. Baudot equipment could sustain 30 words per minute with a skilled operator who could work the five-key piano-style keyboard in good synchronization with the mechanical armature that read it out. This took a great deal of practice, though, and pressing keys out of synchronization with the transmitter could easily cause incorrect letters to be sent.

In 1901, during the early days of the telephone, Donald Murray developed an important enhancement to the Baudot system. He was likely informed by an older practice that had been developed for Morse telegraphs, of having an operator punch a Morse message into paper tape to be transmitted by a simple tape reader later. He did the same for Baudot code: he designed a device with an easy to use typewriter-like keyboard that punched Baudot code onto a strip of paper tape with five rows, one for each bit. The tape punch had no need to be synchronized with the other end, and the operator could type at whatever pace they were comfortable.

The invention of Murray's tape punch brought about the low-cost telegram networks that we are familiar with from the early 20th century. A clerk would take down a message and then punch it onto paper tape. Later, the paper tape would be inserted into a reader that transmitted the Baudot message in perfect synchronization with the receiver, a teleprinter that typed it onto tape as text once again. The process of encoding and decoding messages for the telegraph was now fully automated.

The total operation of the system, though, was not. For one, the output was paper tape, which had to be cut and pasted to compose a paragraph of text. For another, the transmitting and receiving equipment operated continuously, requiring operators to coordinate on the scheduling of sending messages (or they would tie up the line and waste a lot of paper tape).

In a wonderful time capsule of early 20th century industrialism, the next major evolution would come about with considerable help from the Morton Salt Company. Joy Morton, its founder, agreed to fund Frank Pearne's efforts to develop an even more practical printing telegraph. This device would use a typewriter mechanism to produce the output as normal text on a page, saving considerable effort by clerks. Even better, it would use a system of control codes to indicate the beginning and end of messages, allowing a teleprinter to operate largely unattended. This was more complex than it sounded, as it required finding a way for the two ends to establish clock synchronization before the message.

There were, it turned out, others working on the same concept. After a series of patent disputes, mergers, and negotiations, the Morkrum-Kleinschmidt Company would market this new technology: a fully automated teleprinter, lurching into life when the other end had a message to send, producing pages of text like a typewriter with an invisible typist.

In 1928, Morkrum-Kleinschmidt adopted a rather more memorable name: the Teletype Corporation. During the development of the Teletype system, the telephone network had grown into a nationwide enterprise and one of the United States' largest industrial ventures (at many points in time, the country's single largest employer). AT&T had already entered the telegraph business by leasing its lines for telegraph use, and work had already begun on telegraphs that could operate over switched telephone lines, transmitting text as if it were a phone call. The telephone was born of the telegraph but came to consume it. In 1930, the Teletype Corporation was purchased by AT&T and became part of Western Electric.

That same year, Western Electric introduced the Teletype Model 15. Receiving Baudot at 45 baud [2] with an optional tape punch and tape reader, the Model 15 became a workhorse of American communications. By some accounts, the Model 15 was instrumental in the prosecution of World War II. The War Department made extensive use of AT&T-furnished teletype networks and Model 15 teleprinters as the core of the military logistics enterprise. The Model 15 was still being manufactured as late as 1963, a production record rivaled by few other electrical devices.

It is difficult to summarize the history of the networks that teleprinters enabled. The concept of switching connections between teleprinters, as was done on the phone network, was an obvious one. The dominant switched teleprinter network was Telex, not really an organization but actually a set of standards promulgated by the ITU. The most prominent US implementation of Telex was an AT&T service called TWX, short for Teletypewriter Exchange Service. TWX used Teletype teleprinters on phone lines (in a special class of service), and was a very popular service for business use from the '40s to the '70s.

Incidentally, TWX was assigned the special purpose area codes 510, 610, 710, 810, and 910, which contained only teleprinters. These area codes would eventually be assigned to other uses, but for a long time ranked among the "unusual" NPAs.

Western Union continued to develop their telegraph network during the era of TWX, acting in many ways as a sibling or shadow of AT&T. Like AT&T, Western Union developed multiplexing schemes to make better use of their long-distance telegraph lines. Like AT&T, Western Union developed automatic switching systems to decrease operator expenses. Like AT&T, Western Union built out a microwave network to increase the capacity of their long-haul network. Telegraphy is one of the areas where AT&T struggled despite their vast network, and Western Union kept ahead of them, purchasing the TWX service from AT&T. Western Union would continue to operate the switched teleprinter network, under the Telex name, into the '80s when it largely died out in favor of the newly developed fax machine.

During the era of TWX, encoding schemes changed several times as AT&T and Western Union developed better and faster equipment (Western Union continued to make use of Western Electric-built Teletype machines among other equipment). ASCII came to replace Baudot, and so a number of ASCII teleprinters existed. There were also hybrids. For some time Western Union operated teleprinters on an ASCII variant that provided only upper case letters and some punctuation, with the benefit of requiring fewer bits. The encoding and decoding of this reduced ASCII set was implemented by the Bell 101 telephone modem, designed in 1958 to allow SAGE computers to communicate with one another and then widely included in TWX and Telex teleprinters. The Bell 101's descendants would bring about remote access to time-sharing computer systems and, ultimately, one of the major forms of long-distance computer networking.

You can see, then, that the history of teleprinters and the history of computers are naturally interleaved. From an early stage, computers operated primarily on streams of characters. This basic concept is still the core of many modern computer systems and, not coincidentally, also describes the operation of teleprinters.

When electronic computers were under development in the 1950s and 1960s, teleprinters were near the apex of their popularity as a medium for business communications. Most people working on computers probably had experience with teleprinters; most organizations working on computers already had a number of teleprinters installed. It was quite natural that teleprinter technology would be repurposed as a means of input and output for computers.

Some of the very earliest computers, for example those of Konrad Zuse, employed punched tape as an input medium. These were almost invariably repurposed or modified telegraphic punched tape systems, often in five-bit Baudot. Particularly in retrospect, as more materials have become available to historians, it is clear that much of the groundwork for digital computing was laid by WWII cryptological efforts.

Newly devised cryptographic machines like the Lorenz ciphers were essentially teleprinters with added digital logic. The machines built to attack these codes, like Colossus, are now generally recognized as the first programmable computers. The line between teleprinter and computer was not always clear. As more encoding and control logic was added, teleprinters came to resemble simple computers.

The Manchester Mark I, a pioneer of stored-program computing built in 1949, used a 5-bit code adopted from Baudot by none other than Alan Turing. The major advantage of this 5-bit encoding was, of course, that programs could be read and written using Baudot tape and standard telegraph equipment. The addition of a teleprinter allowed operators to "interactively" enter instructions into the computer and read the output, although the concept of a shell (or any other designed user interface) had not yet been developed. EDSAC, a contemporary of the Mark I and precursor to a powerful tea logistics system that would set off the development of business computing, also used a teleprinter for input and output.

Many early commercial computers limited input and output to paper tape, often 5-bit for Baudot or 8-bit for ASCII with parity, as in the early days of computing preparation of a program was an exacting process that would not typically be done "on the fly" at a keyboard. It was, of course, convenient that teleprinters with tape punches could be used to prepare programs for entry into the computer.

Business computing is most obviously associated with IBM, a company that had large divisions building both computers and typewriters. The marriage of the two was inevitable considering the existing precedent. Beginning around 1960 it was standard for IBM computers to furnish a teleprinter as the operator interface, but IBM had a distinct heritage from the telecommunications industry and, for several reasons, was intent on maintaining that distinction. IBM's teleprinter-like devices were variously called Data Communications Systems, Printer-Keyboards, Consoles, and eventually Terminals. They generally operated over proprietary serial channels.

Other computer manufacturers didn't have typewriter divisions, and typewriters and teleprinters were actually rather complex mechanical devices and not all that easy to build. As a result, they tended to buy teleprinters from established manufacturers, often IBM or Western Electric. Consider the case of a rather famous non-IBM computer, the DEC PDP-1 of 1960. It came with a CRT graphics display as standard, and many sources will act as if this was the primary operator interface, but it is important to understand that early CRT graphics displays had a hard time with text. Text is rather complex to render when you are writing point-by-point to a CRT vector display from a rather slow machine. You would be surprised how many vertices a sentence has in it.

So despite the ready availability of CRTs in the 1960s (they were, of course, well established in the television industry), few computers used them for primary text input/output. Instead, the PDP-1 was furnished with a modified IBM typewriter as its console. This scheme of paying a third-party company (Soroban Engineering) to modify IBM typewriters for teleprinter control was apparently not very practical, and later DEC PDP models tended to use Western Electric Teletypes as user terminals. These had the considerable advantage that they were already designed to operate over long telephone circuits, making it easy to install multiple terminals throughout a building for time sharing use.

Indeed, time sharing was a natural fit for teleprinter terminals. With a teleprinter and a suitable modem, you could "call in" to a time sharing computer over the telephone from a remote office. Most of the first practical "computer networks" (term used broadly) were not actually networks of computers, but a single computer with many remote terminals. This architecture evolved into the BBS and early Internet-like services such as CompuServe. The idea was surprisingly easy to implement once time sharing operating systems were developed; the necessary hardware was already available from Western Electric.

While I cannot swear to the accuracy of this attribution, many sources suggest that the term "tty" as a generic reference to a user terminal or serial I/O channel originated with DEC. It seems reasonable; DEC's software was very influential on the broader computer industry, particularly outside of IBM. UNIX was first written for a PDP-7 but soon targeted the PDP-11, with teleprinters as its terminals. While I can't prove it, it seems quite believable that the tty terminology was adopted directly from RT-11 or another operating system that Bell Labs staff might have used on the PDP-11.

Computers were born of the teleprinter and would inevitably come to consume them. After all, what is a computer but a complex teleprinter? Today, displaying text and accepting it from a keyboard is among the most basic functions of computers, and computers continue to perform this task using an architecture that would be familiar to engineers in the 1970s. They would likely be more surprised by what hasn't changed than what has: many of us still spend a lot of time in graphical software pretending to be a video display terminal built for compatibility with teleprinters.

And we're still using that 7-bit ASCII code a lot, aren't we. At least Baudot died out and we get to enjoy lower case letters.

[1] Actor, singer, etc. Gene Autry had worked as a telegrapher before he began his career in entertainment. This resulted in no small number of stories of a celebrity stand-in at the telegraph office. Yes, this is about to be a local history anecdote. It is fairly reliably reported that Gene Autry once volunteered to stand in for the telegrapher and station manager at the small Santa Fe Railroad station in Socorro, New Mexico, as the telegrapher had been temporarily overwhelmed by the simultaneous arrival of a packed train and a series of telegrams. There are enough of these stories about Gene that I think he really did keep his Morse sharp well into his acting career.

[2] Baud is a somewhat confusing unit derived from Baudot. Baud refers to the number of symbols per second on the underlying communication medium. For simple binary systems (and thus many computer communications systems we encounter daily), baud rate is equivalent to bit rate (bps). For systems that employ multi-level signaling, the bit rate will be higher than the baud rate, as multiple bits are represented per symbol on the wire. Methods like QAM are useful because they yield bit rates that are many multiples of the baud rate, squeezing more bits into the same bandwidth on the wire.
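
To make the arithmetic concrete, here is a trivial Python sketch of that relationship (my own illustration, with made-up example rates):

    import math

    def bit_rate(baud, symbols):
        """Bits per second carried by `baud` symbols/second when the line
        can distinguish `symbols` different symbol values."""
        return baud * math.log2(symbols)

    print(bit_rate(2400, 2))    # plain binary signaling: 2400.0 bps
    print(bit_rate(2400, 16))   # 16 levels = 4 bits per symbol: 9600.0 bps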

--------------------------------------------------------------------------------

>>> 2024-02-11 the top of the DNS hierarchy

In the past (in fact two years ago, proof I have been doing this for a while now!) I wrote about the "inconvenient truth" that structural aspects of the Internet make truly decentralized systems infeasible, due to the lack of a means to perform broadcast discovery. As a result, most distributed systems rely on a set of central, semi-static nodes to perform initial introductions.

For example, Bitcoin relies on a small list of volunteer-operated domain names that resolve to known-good full nodes. Tor similarly uses a small set of central "directory servers" that provide initial node lists. Both systems have these lists hardcoded into their clients; coincidentally, both have nine trusted, central hostnames.

This sort of problem exists in basically all distributed systems that operate in environments where it is not possible to shout into the void and hope for a response. The internet, for good historic reasons, does not permit this kind of behavior. Here we should differentiate between distributed and decentralized, two terms I do not tend to select very carefully. Not all distributed systems are decentralized; indeed, many are not. One of the easiest and most practical ways to organize a distributed system is according to a hierarchy. This is a useful technique, so there are many examples, but a prominent and old one happens to also be part of the drivetrain mechanics of the internet: DNS, the domain name system.

My reader base is expanding and so I will provide a very brief bit of background. Many know that DNS is responsible for translating human-readable names like "computer.rip" into the actual numerical addresses used by the internet protocol. Perhaps a bit fewer know that DNS, as a system, is fundamentally organized around the hierarchy of these names. To examine the process of resolving a DNS name, it is sometimes more intuitive to reverse the name, and instead of "computer.rip", discuss "rip.computer" [1].

This name is hierarchical: it indicates that the record "computer" is within the zone "rip". "computer" is itself a zone and can contain yet more records, which we tend to call subdomains. But the term "subdomain" can be confusing as everything is a subdomain of something, even "rip" itself, which in a certain sense is a subdomain of the DNS root "." (which is why, of course, a stricter writing of the domain name computer.rip would be computer.rip., but as a culture we have rejected the trailing root dot).
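
If it helps, you can picture resolution as walking that reversed name from the root down. A tiny Python sketch of the idea, nothing but string handling on an example name:

    def zones_from_root(name):
        """List the zones passed through when resolving a name, root first:
        'computer.rip' -> ['.', 'rip.', 'computer.rip.']"""
        labels = name.rstrip(".").split(".")
        zones = ["."]
        for i in range(len(labels) - 1, -1, -1):
            zones.append(".".join(labels[i:]) + ".")
        return zones

    print(zones_from_root("computer.rip"))
    # ['.', 'rip.', 'computer.rip.']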

Many of us probably know that each level of the DNS hierarchy has authoritative nameservers, operated typically by whoever controls the name (or their third-party DNS vendor). "rip" has authoritative DNS servers provided by a company called Rightside Group, a subsidiary of the operator of websites like eHow that went headfirst into the great DNS land grab and snapped up "rip" as a bit of land speculation, alongside such attractive properties as "lawyer" and "navy" and "republican" and "democrat", all of which I would like to own the "computer" subdomain of, but alas such dictionary words are usually already taken.

"computer.rip", of course, has authoritative nameservers operated by myself or my delegate. Unlike some people I know, I do not have any nostalgia for BIND, and so I pay a modest fee to a commercial DNS operator to do it for me. Some would be surprised that I pay for this; DNS is actually rather inexpensive to operate and authoritative name servers are almost universally available as a free perk from domain registrars and others. I just like to pay for this on the general feeling that companies that charge for a given service are probably more committed to its quality, and it really costs very little and changing it would take work.

To the observant reader, this might leave an interesting question. If even the top-level domains are subdomains of a secret, seldom-seen root domain ".", who operates the authoritative name servers for that zone?

And here we return to the matter of even distributed systems requiring central nodes. Bitcoin uses nine hardcoded domain names for initial discovery of decentralized peers. DNS uses thirteen hardcoded root servers to establish the top level of the hierarchy.

These root servers are commonly referred to as a.root-servers.net through m.root-servers.net, and indeed those are their domain names, but remember that when we need to use the root servers we have no entrypoint into the DNS hierarchy and so cannot resolve names yet. The root servers are much more meaningfully identified by their IP addresses, which are "semi-hardcoded" into recursive resolvers in the form of what's often called a root hints file. You can download a copy; it's a simple file in BIND zone format that BIND basically uses to bootstrap its cache.
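
To see what that bootstrap buys you, here is a rough sketch using the third-party dnspython package (it needs network access, and the exact records you get back will vary): it asks root server F, at its well-known address from the hints file, about computer.rip, and the root answers not with an address but with a referral to the "rip" servers.

    import dns.message, dns.query

    # The root doesn't know computer.rip, but it knows who is authoritative
    # for "rip", so the response is a referral: NS records for rip. in the
    # authority section, with glue addresses in the additional section.
    query = dns.message.make_query("computer.rip.", "A")
    response = dns.query.udp(query, "192.5.5.241", timeout=5)

    for rrset in response.authority:
        print(rrset)
    for rrset in response.additional:
        print(rrset)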

And yes, there are other DNS implementations too, a surprising number of them, even in wide use. But when talking about DNS history we can mostly stick to BIND. BIND used to stand for Berkeley Internet Name Domain, and it is an apt rule of thumb in computer history that anything with a reference to UC Berkeley in the name is probably structurally important to the modern technology industry.

One of the things I wanted to get at, when I originally talked about central nodes in distributed systems, is the impact it has on trust and reliability. The Tor project is aware that the nine directory servers are an appealing target for attack or compromise, and technical measures have been taken to mitigate the possibility of malicious behavior. The Bitcoin project seems to mostly ignore that the DNS seeds exist, but of course the design of the Bitcoin system limits their compromise to certain types of attacks. In the case of DNS, much like most decentralized systems, there is a layer of long-lived caching for top-level domains that mitigates the impact of unavailability of the root servers, but still, in every one of these systems, there is the possibility of compromise or unavailability if the central nodes are attacked.

And so there is always a layer of policy. A trusted operator can never guarantee the trustworthiness of a central node (the node could be compromised, or the trusted operator could turn out to be the FBI), but it sure does help. Tor's directory servers are operated by the Tor project. Bitcoin's DNS seeds are operated by individuals with a long history of involvement in the project. DNS's root nodes are operated by a hodgepodge of companies and institutions that were important to the early internet.

Verisign operates two, of course. A California university operates one, of course, but amusingly not Berkeley. Three are operated by arms of the US federal government, two of them military. Internet industry organizations, the RIPE NCC, another university, and ICANN itself account for the rest. It's pretty random, though, and just reflects a set of organizations prominently involved in the early internet.

Some people, even some journalists I've come across, hear that there are 13 name servers and picture 13 4U boxes with a lot of blinking lights in heavily fortified data centers. Admittedly this description was more or less accurate in the early days, and a couple of the smaller root server operators did have single machines until surprisingly recently. But today, all thirteen root server IP addresses are anycast groups.

Anycast is not a concept you run into every day, because it's not really useful on local networks where multicast can be used. But it's very important to the modern internet. The idea is this: an IP address (really a subnetwork) is advertised into BGP from multiple sites. Other BGP routers select whichever advertisement they like best, typically the one with the shortest AS path. As a user, you connect to a single IP address, but based on the BGP-informed routing tables of internet service providers your traffic could be directed to any one of those sites. You can think of it as a form of load balancing at the IP layer, but it also has the performance benefit of users mostly connecting to nearby nodes, so it's widely used by CDNs for multiple reasons.

For DNS, though, where we often have a bootstrapping problem to solve, anycast is extremely useful as a way to handle "special" IP addresses that are used directly. For authoritative DNS servers like 192.5.5.241 [2001:500:2f::f] [2] (root server F) or recursive resolvers like 8.8.8.8 [2001:4860:4860::8888] (Google public DNS), anycast is the secret that allows a "single" address to correspond to a distributed system of nodes.
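
If you are curious which of the many instances behind one of these addresses your packets actually reach, there is a convention for asking (RFC 4892): a TXT query for "id.server" in the CHAOS class, which many (not all) operators answer with the name of the specific anycast site. A dnspython sketch, again assuming network access:

    import dns.message, dns.query, dns.rdataclass, dns.rdatatype

    # Ask root server F which anycast instance actually answered.
    # "id.server"/CH TXT is a convention, not a guarantee; some operators
    # use "hostname.bind" instead, and some don't answer at all.
    query = dns.message.make_query("id.server.", dns.rdatatype.TXT,
                                   rdclass=dns.rdataclass.CH)
    response = dns.query.udp(query, "192.5.5.241", timeout=5)
    for rrset in response.answer:
        print(rrset)
    # Run this from different networks and you will likely see different sites.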

So there are thirteen DNS root servers in the sense that there are thirteen independently administered clusters of root servers (with the partial exception of A and J, both operated by Verisign, due to their acquisition of former A operator Network Solutions). Each of the thirteen root servers is, in practice, a fairly large number of anycast sites, sometimes over 100. The root server operators don't share much information about their internal implementation, but one can assume that in most cases the anycast sites consist of multiple servers as well, fronted by some sort of redundant network appliance. There may only be thirteen of them, but each of the thirteen is quite robust. For example, the root servers typically place their anycast sites in major internet exchanges distributed across both geography and provider networks. This makes it unlikely that any small number of failures would seriously affect the number of available sites. Even if a root server were to experience a major failure due to some sort of administration problem, there are twelve more.

Why thirteen, you might ask? No good reason. The number of root servers basically grew until the answer to an NS request for "." hit the 512 byte limit on UDP DNS responses. Optimizations over time allowed this number to grow (actually using single letters to identify the servers was one of these optimizations, allowing the basic compression used in DNS responses to collapse the matching root-servers.net part). Of course IPv6 addresses blew the priming response size completely out of the water; the EDNS extension now allows for much larger UDP responses.
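
For a sense of the sizes involved, here is a dnspython sketch of the priming query a resolver sends at startup, with EDNS enabled so the whole answer fits in a single UDP response (network access required; byte counts will vary):

    import dns.message, dns.query

    # Priming query: ask a root server for the NS records of "." itself.
    # use_edns=0 enables EDNS(0) and advertises a 4096-byte buffer,
    # comfortably past the classic 512-byte limit.
    query = dns.message.make_query(".", "NS", use_edns=0, payload=4096)
    response = dns.query.udp(query, "192.5.5.241", timeout=5)
    print(len(response.answer[0]), "NS records for the root")
    print(len(response.to_wire()), "bytes on the wire")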

13 is no longer the practical limit, but with how large some of the 13 are, no one sees a pressing need to add more. Besides, can you imagine the political considerations in our modern internet environment? The proposed operator would probably be Cloudflare or Google or Amazon or something and their motives would never be trusted. Incidentally, many of the anycast sites for root server F (operated by ISC) are Cloudflare data centers used under agreement.

We are, of course, currently trusting the motives of Verisign. You should never do this! But it's been that way for a long time, we're already committed. At least it isn't Network Solutions any more. I kind of miss when SRI was running DNS and military remote viewing.

But still, there's something a little uncomfortable about the situation. Billions of internet hosts depend on thirteen "servers" to have any functional access to the internet.

What if someone attacked them? Could they take the internet down? Wouldn't this cause a global crisis of a type seldom before seen? Should I be stockpiling DNS records alongside my canned water and iodine pills?

Wikipedia contains a great piece of comedic encyclopedia writing. In its article on the history of attacks on DNS root servers, it mentions the time, in 2012, that some-pastebin-user-claiming-to-be-Anonymous (one of the great internet security threats of that era) threatened to "shut the Internet down". "It may only last one hour, maybe more, maybe even a few days," the statement continues. "No matter what, it will be global. It will be known."

That's the end of the section. Some Wikipedia editor, no doubt familiar with the activities of Anonymous in 2012, apparently considered it self-evident that the attack never happened.

Anonymous may not have put in the effort, but others have. There have been several apparent DDoS attacks on the root DNS servers. One, in 2007, was significant enough that four of the root servers suffered---but there were nine more, and no serious impact was felt by internet users. This attack, like most meaningful DDoS, originated with a botnet. It had its footprint primarily in Korea, but C2 in the United States. The motivation for the attack, and who launched it, remains unknown.

There is a surprisingly large industry of "booters," commercial services that, for a fee, will DDoS a target of your choice. These tend to be operated by criminal groups with access to large botnets; the botnets are sometimes bought and sold and get their tasking from a network of resellers. It's a competitive industry. In the past, booters and botnet operators have sometimes been observed announcing a somewhat random target and taking it offline as, essentially, a sales demonstration. Since these demonstrations are a known behavior, any time a botnet targets something important for no discernible reason, analysts have a tendency to attribute it to a "show of force." I have little doubt that this is sometimes true, but as with the tendency to attribute monumental architecture to deity worship, it might be an overgeneralization of the motivations of botnet operators. Sometimes I wonder if they made a mistake, or maybe they were just a little drunk and a lot bored, who is to say?

The problem with this kind of attribution is evident in the case of the other significant attack on the DNS root servers, in 2015. Once again, some root servers were impacted badly enough that they became unreliable, but other root servers held on and there was little or even no impact to the public. This attack, though, had some interesting properties.

In the 2007 incident, the abnormal traffic to the root servers consisted of large, mostly-random DNS requests. This is basically the expected behavior of a DNS attack; using randomly generated hostnames in requests ensures that the responses won't be cached, making the DNS server exert more effort. Several major botnet clients have this "random subdomain request" functionality built in, normally used for attacks on specific authoritative DNS servers as a way to take the operator's website offline. Chinese security firm Qihoo 360, based on a large botnet honeypot they operate, reports that this type of DNS attack was very popular at the time.

The 2015 attack was different, though! Wikipedia, like many other websites, describes the attack as "valid queries for a single undisclosed domain name and then a different domain the next day." In fact, the domain names were disclosed, by at least 2016. The attack happened on two days. On the first day, all requests were for 336901.com. The second day, all requests were for 916yy.com.

Contemporaneous reporting is remarkably confused on the topic of these domain names, perhaps because they were not widely known, perhaps because few reporters bothered to check up on them thoroughly. Many sources make it sound like they were random domain names perhaps operated by the attacker; one goes so far as to say that they were registered with fake identities.

Well, my Mandarin isn't great, and I think the language barrier is a big part of the confusion. No doubt another part is a Western lack of familiarity with Chinese internet culture. To an American in the security industry, 336901.com would probably look at first like the result of a DGA or domain generation algorithm. A randomly-generated domain used specifically to be evasive. In China, though, numeric names like this are quite popular. Qihoo 360 is, after all, domestically branded as just 360---360.cn.

As far as I can tell, both domains were pretty normal Chinese websites related to mobile games. It's difficult or maybe impossible to tell now, but it seems reasonable to speculate that they were operated by the same company. I would assume they were something of a gray market operation, as there's a huge intersection between "mobile games," "gambling," and "target of DDoS attacks." For a long time, perhaps still today in the right corners of the industry, it was pretty routine for gray-market gambling websites to pay booters to DDoS each other.

In a 2016 presentation, security researchers from Verisign (Weinberg and Wessels) reported on their analysis of the attack based on traffic observed at Verisign root servers. They conclude that the traffic likely originated from multiple botnets or at least botnet clients with different configurations, since the attack traffic can be categorized into several apparently different types [3]. Based on command and control traffic from a source they don't disclose (perhaps from a Verisign honeynet?), they link the attack to the common "BillGates" [4] botnet. Most interestingly, they conclude that it was probably not intended as an attack on the DNS root: the choice of fixed domain names just doesn't make sense, and the traffic wasn't targeted at all root servers.

Instead, they suspect it was just what it looks like: an attack on the two websites the packets queried for, that for some reason was directed at the root servers instead of the authoritative servers for that second-level domain. This isn't a good strategy; the root servers are a far harder target than your average web hosting company's authoritative servers. But perhaps it was a mistake? An experiment to see if the root server operators might mitigate the DDoS by dropping requests for those two domains, incidentally taking the websites offline?

Remember that Qihoo 360 operates a large honeynet and was kind enough to publish a presentation on their analysis of root server attacks. Matching Verisign's conclusions, they link the attack to the BillGates botnet, and also note that they often observe multiple separate botnet C2 servers send tasks targeting the same domain names. This probably reflects the commercialized nature of modern botnets, with booters "subcontracting" operations to multiple botnet operators. It also handily explains Verisign's observation that the 2015 attack traffic seems to have come from more than one implementation of a DNS DDoS tool.

360 reports that, on the first day, five different C2 servers tasked bots with attacking 336901.com. On the second day, three C2 servers tasked bots with attacking 916yy.com. But they also have a much bigger revelation: throughout the time period of the attacks, they observed multiple tasks to attack 916yy.com using several different methods.

360 concludes that the 2015 DNS attack was most likely the result of a commodity DDoS operation that decided to experiment, directing traffic at the DNS roots instead of the authoritative server for the target to see what would happen. I doubt they thought they'd take down the root servers, but it seems totally reasonable that they might have wondered if the root server operators would filter DDoS traffic based on the domain name appearing in the requests.

Intriguingly, they note that some of the traffic originated with a DNS attack tool that had significant similarities to BillGates but didn't produce quite the same packets. We will probably never know, but a plausible explanation is that some group modified the BillGates DNS attack module or implemented a new one based on the same method.

Tracking botnets gets very confusing very fast, there are just so many different variants of any major botnet client! BillGates originated, for example, as a Linux botnet. It was distributed to servers, not only through SSH but through vulnerabilities in MySQL and ElasticSearch. It was unusual, for a time, in being a major botnet that skipped over the most common desktop operating system. But ports of BillGates to Windows were later observed, distributed through an Internet Explorer vulnerability---classic Windows. Why someone chose to port a Linux botnet to Windows instead of using one of the several popular Windows botnets (Conficker, for example) is a mystery. Perhaps they had spent a lot of time building out BillGates C2 infrastructure and, like any good IT operation, wanted to simplify their cloud footprint.

High in the wizard's tower of the internet, thirteen elders are responsible for starting every recursive resolver on its own path to truth. There's a whole Neal Stephenson for Wired article in there. But in practice it's a large and robust system. The extent of anycast routing used for the root DNS servers, to say nothing of CDNs, is one of those things that challenges our typical stacked view of the internet. Geographic load balancing is something we think of at high layers of the system; it's surprising to encounter it as a core part of a very low-level process.

That's why we need to keep our thinking flexible: computers are towers of abstraction, and complexity can be added at nearly any level, as needed or convenient. Seldom is this more apparent than it is in any process called "bootstrapping." Some seemingly simpler parts of the internet, like DNS, rely on a great deal of complexity within other parts of the system, like BGP.

Now I'm just complaining about pedagogical use of the OSI model again.

[1] The fact that the DNS hierarchy is written from right-to-left while it's routinely used in URIs that are otherwise read left-to-right is one of those quirks of computer history. Basically an endianness inconsistency. Like American date order, to strictly interpret a URI you have to stop and reverse your analysis part way through. There's no particular reason that DNS is like that; there was just less consistency over most-significant-first/least-significant-first hierarchical ordering at the time, and contemporaneous network protocols (consider the OSI stack) actually had a tendency towards least significant first.

[2] The IPv4 addresses of the root servers are ages old and mostly just a matter of chance, but the IPv6 addresses were assigned more recently and allowed an opportunity for something more meaningful. Reflecting the long tradition of identifying the root servers by their letter, many root server operators use IPv6 addresses where the host part can be written as the single letter of the server (i.e. root server C at [2001:500:2::c]). Others chose a host part of "53," a gesture at the port number used for DNS (i.e. root server J, [2001:7fe::53]). Others seem more random, Verisign uses 2:30 for both of their root servers (i.e. root server A, [2001:503:ba3e::2:30]), so maybe that means something to them, or maybe it was just convenient. Amusingly, the only operator that went for what I would call an address pun is the Defense Information Systems Agency, which put root server G at [2001:500:12::d0d].

[3] It really dates this story that there was some controversy around the source IPs of the attack, originating with none other than deceased security industry personality John McAfee. He angrily insisted that it was not plausible that the source IPs were spoofed. Of course botnets conducting DDoS attacks via DNS virtually always spoof the source IP, as there are few protections in place (at the time almost none at all) to prevent it. But John McAfee has always had a way of ginning up controversy where none was needed.

[4] Botnets are often bought, modified, and sold. They tend to go by various names from different security researchers and different variants. I'm calling this one "BillGates" because that's the funniest of the several names used for it.

--------------------------------------------------------------------------------