CR4 - The Engineer's Place for News and Discussion ®

Notes & Lines

Notes & Lines discusses the intersection of math, science, and technology with performing and visual arts. Topics include bizarre instruments, technically-minded musicians, and cross-pollination of science and art.

Snow in June, Frost in July, Ice in August

Posted May 09, 2016 10:32 AM by Jonathan Fuller

This coming summer marks the 200th anniversary of one of the most severe weather anomalies in modern history. The Year Without a Summer, as it's now commonly known, wreaked havoc on much of the Northern Hemisphere. In upstate New York and New England, in the vicinity of CR4 headquarters, snow fell in June, frosts were common from May through August, and temperatures sometimes swung violently between normal summer highs of 90° F or more and near-freezing in a matter of hours.

The climatic conditions of 1816 also resulted in unseasonably low temperatures and heavy rains as far east as China. In Europe famine was widespread, and riots, looting, arson, and demonstrations were common occurrences. Throughout the hemisphere, farming became nearly impossible and grain prices soared. In an age when subsistence farming was the norm and commoners worked their hands to the bone to feed their families, crop failures often meant the real possibility of starvation.

Contemporary observers were almost completely perplexed by the disappearance of the summer of 1816, but scientists now believe it was the result of a few interrelated factors. The most significant of these was the April 1815 eruption of Mount Tambora on the Indonesian island of Sumbawa. Tambora was likely the most powerful volcanic eruption in recorded history, with a column height of over 26 miles and a tephra volume of over 38 cubic miles. Over 70,000 Indonesians were killed following the blast. The enormous amount of volcanic ash that spewed into the atmosphere reflected large quantities of sunlight and lowered Northern Hemisphere temperatures.

To compound the effects of the ash, modern scientists also believe that solar magnetic activity was at a historic low in 1816, the midpoint of a 25-year solar period known as the Dalton Minimum. By studying the presence of carbon-14 in tree rings, solar astronomers have concluded that sunspot activity was abnormally low, reducing the transmission of solar radiation to Earth. Ironically, the Tambora eruption often caused a dry fog to settle over the Northern Hemisphere, producing a reddened and dimmed Sun and causing sunspots to become visible to the naked eye. With little knowledge of the eruption, 19th-century Americans and Europeans often blamed the red, spotty Sun alone for the abnormal weather conditions, while in reality Tambora's ash played a much more significant role.

A third less-studied factor is the possibility of a solar inertial shift. These shifts, occurring every 180 years or so due to the gravitational pull of the largest planets in the Solar System, cause the Sun to wobble on its axis and possibly affect Earth's climate. Scientists point to three of these shifts--in 1632, 1811, and 1990--that correspond to major climatic events: the solar Maunder Minimum from 1645-1715, the Dalton Minimum discussed above, and the eruption of Mount Pinatubo with corresponding global cooling in 1991. This association remains largely hypothetical, however.

The Year Without a Summer produced some interesting and long-lasting cultural effects. Thousands left the American Northeast and settled in the Midwest to escape the frigid summer; Latter-day Saints founder Joseph Smith was forced to move from Vermont and settle in Western New York, the first in a series of events that culminated in his writing The Book of Mormon. German inventor Karl Drais may have invented the Laufmaschine, the predecessor of the bicycle, in 1817 in response to the shortage of horses caused by the 1816 crop failure.

That summer may have influenced contemporary art as well. The high concentrations of tephra in the atmosphere led to spectacular yellow and red sunsets, which were captured in J.M.W. Turner's paintings of the 1820s. (If you've ever wondered about the vivid red sky in the more widely known painting The Scream, some modern scholars believe Edvard Munch may have viewed a similarly vivid sunset as a result of the 1883 eruption of Krakatoa.) Trapped inside their Swiss villa due to the excessive rains in June 1816, a group of English writers on holiday passed the time by seeing who could write the most frightening ghost story. Mary Shelley came up with the now-famous Frankenstein, which she would finish and publish in 1818, while Lord Byron's unfinished fragment The Burial inspired John William Polidori to write The Vampyre in 1819, effectively launching the still-healthy field of romantic vampire fiction.

The advancement of agricultural technology more or less ensures that we'll never have a comparable subsistence crisis to that of 1816, despite any further severe weather anomalies. Even so, it's chilling to examine that year's events and attitudes toward them, as expressed by surviving journals and works of art.

Image credits: NOAA | Public domain

7 comments; last comment on 05/13/2016

Earth Day, And Problematic Predictions

Posted May 02, 2016 12:00 AM by Jonathan Fuller

There are few more polarizing issues than environmental ones: climate change, the feasibility of alternative energy, and, in recent news, Earth Day. My 2016 Earth Week was spent scrounging for recyclables for a school project in which my son built a robot statue out of milk jugs, cereal boxes, and empty yogurt cups. From where I sit Earth Day is a good thing, a message of "don't be so anthropocentric that you think you can just throw your crap anywhere and let Mother Nature take care of it."

The original 1970 Earth Day was by contrast marked by apocalyptic predictions and fear-based thinking. Americans were surrounded by grim reminders of industrial pollution, such as the 1969 Cuyahoga River fire, the labeling of Lake Erie as a "gigantic cesspool," and heavy smog in urban areas. The original Earth Day saw the prediction of mass starvations, worldwide famines, the extinction of 80% of all living animals, the reduction of ambient sunlight by 50%, 45-year lifespans, and the elimination of all crude oil, all by the year 2000.

All of these predictions missed the mark, of course. The 1970 prognosticators were concerned about pollution and fossil fuels but were mostly anxious about overpopulation, a topic that arouses little fear 46 years later. If anything, our world is looking to be in better shape. We're seeing significantly higher crop yields using the same amount of land, lower staple food prices, and no more DDT; we have at least as much fossil fuel to last us for about another century, maybe; and population looks to level off within the next few decades. The Earth Day doomsters, particularly Paul Ehrlich and John Holdren, relied on faulty and outdated logic in assuming that Negative Impact = (Population)(Affluence)(Technology). What they didn't plan for is that technology has the power to boost positive outputs like food production and medicine, and we've learned effective strategies to reduce pollution along the way.
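The I = PAT identity lends itself to a quick numeric sketch. The index values below are purely hypothetical, chosen only to illustrate the counterargument: a falling technology factor (impact per unit of consumption) can offset growth in the other two.

```python
# Illustrative sketch of the I = P * A * T identity the 1970 forecasters
# leaned on. The index numbers are made up, not real-world data.

def impact(population, affluence, technology):
    """Ehrlich/Holdren identity: environmental impact as a simple product."""
    return population * affluence * technology

# Hypothetical baseline vs. a future with more people and more wealth,
# but much cleaner technology (impact per unit of output falls).
baseline = impact(population=1.0, affluence=1.0, technology=1.0)
future = impact(population=1.5, affluence=2.0, technology=0.3)

print(future < baseline)  # technology gains can outweigh the growth terms
```

The identity itself is just an accounting decomposition; the doomsters' error was treating the T factor as fixed or rising.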

The problem with predictions is that for every correct one, there are countless more that are dead wrong, even those based on rigorous data and scientific extrapolation. So-called futurists predict like it's their job, often assigning a target date to the year, and are typically wrong. Consider, for example, Arthur C. Clarke's predictions for the 21st century. The occurrence of a technological singularity is still a hot topic in AI communities, and most thinkers--building on the assumption that Moore's Law will continue unabated--agree that self-improving artificial general intelligence will occur within the next 50 years or so. At the 2012 Singularity Summit, Stuart Armstrong acknowledged the uncertainty in predicting advanced AI by stating that his "current 80% estimate [for the singularity] is something like 5 to 100 years." Now that's how to make a prediction...

This isn't to say that long-term thinking isn't valuable or honorable. In grad school I became intrigued by the Long Now Foundation, a non-profit working to foster slower/better thinking rather than the prevailing faster/cheaper activities of modern times. Aside from hosting seminars and advocating a five-digit date format (i.e., 02016 for 2016) to anticipate the Year 10,000 problem, the group keeps a record of long-term predictions and bets made by its members for fun and accountability. They're also constructing a clock designed to run for 10,000 years with minimal maintenance using simple tools, and are working on a publicly accessible digital library of all known human languages for posterity.

While the Long Now's activities may seem radical, most examples of true long-term thinking do, given our blindly rushing existence. Shortly before his death, Kurt Vonnegut proposed a presidentially appointed US Secretary of the Future, whose sole duty would be to determine an activity's impact on future generations. The Great Law of the Iroquois famously mandated that current generations make decisions that would benefit their descendants seven generations (about 140 years) into the future. The difficulty, of course, is satisfying ourselves in the now as well as in the future. As environmental skeptics point out, it's nearly impossible to plan out our fossil fuel dependency and use for the next decade, let alone for the next hundred years.

Perhaps our best bet is to view the future in terms of possibility. To paraphrase Clarke's first law of prediction: "When a distinguished but elderly scientist states that something is possible, he's almost certainly correct."

Image credit: / CC BY 2.0

6 comments; last comment on 05/10/2016

The Evolution of Tech Grammar, or: Language is Goofy

Posted April 18, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: ap style grammar words

From 2008 to 2009 I worked for HSBC Bank, one of the more interesting workplaces to be in during the Great Recession. A disgruntled non-customer, who I believe was teetering on the edge of financial oblivion like so many of us, once pointedly asked me what the hell HSBC stood for, anyway. I told him that it was just an acronym*...that the actual name of the business was HSBC Bank--nothing more, nothing less. "But you're like, a Chinese bank, right? Isn't the 'H' for Hong Kong?" I assured him it was not, making him even angrier at the situation.

While I was admittedly (and playfully) screwing with this man, HSBC is indeed a British company; it was originally based in Hong Kong, and the acronym once stood for Hongkong and Shanghai Banking Corporation. In 1991 it reorganized and from then on legally existed as HSBC Holdings plc. I can't say I know why the company disassociated from its Far Eastern roots, but the point is that language and acronyms change for various reasons: falling in and out of common usage, to avoid certain stereotypes or associations, or just because they become too verbose or antiquated.

[*As a CR4 editor pointed out to me after reading this post, HSBC might be more accurately termed an initialism before 1991, and a pseudo-initialism since. Acronyms are "pronounced," like NATO and JPEG, while initialisms are strings of initials.]

A few weeks ago the Associated Press made a major announcement: the 2016 AP Stylebook will lowercase both "internet" and "web." In line with past stylebook changes, it's safe to assume that the AP believes that these two terms are now generic enough to merit lowercased usage. Pro-lowercase activists look to the origin of the word to make their point: the "internet" of old was simply an internetwork of smaller networks using the same protocol. So when we speak about the modern Internet--the one I'm using to research this blog post and connect remotely to my office computer--we're referring to the largest and best-known example of an internet. Also, they say lowercasing is more efficient, saving thousands of Shift-key strokes, and that capitalized nouns are a strain on the eyes, introducing roadblocks into neatly flowing text.

The other side of the battle, with which I sometimes side, takes issue with the word "the." Think about the star at the center of our solar system. A star at the center of some other distant solar system could be called its "sun," but we call the most local and best-known example to us on Earth the Sun, capitalized and all, for clarity. I know of no other significant internets other than THE Internet--if you know of one feel free to comment and enlighten me. And regarding the web, what if we're trying to describe researching spider webs online? Would we look up webs on the web? Isn't the Web clearer? Call me antiquated (my wife does on a daily basis, so I'm used to it), but I like my Internet and Web, even if I'm too lazy to click Shift and actually capitalize them most of the time.

These technologically related style changes happen pretty frequently. For example, AP changed their usage of Web site to website in 2010, and e-mail to email in 2011. These make more sense as generic terms, in my opinion: we surely no longer think of email as "electronic mail." With the slow demise of postal mail, perhaps email will one day be referred to as just "mail," and postal mail will become oldmail or cismail, maybe.

The fluidity of technical terminology is also easily seen in anacronyms, or words that were formerly acronyms but have fallen into common usage. Lasers were originally "light amplification by stimulated emission of radiation," for example. Treating "laser" as a common noun allowed us to back-form the verb "to lase," meaning to produce laser light. Ironically for me as a technical writer and editor, even the verb "to edit" was back-formed from "editor," the original term.

The possibility for confusing variation and evolution in the English language is endless. Who knows? Maybe in 50 years our descendants will just switch on their computers and internet.

Image credit: Stinging Eyes / CC BY-SA 2.0

21 comments; last comment on 05/17/2016

Combining Optics and Audio to Save Historical Recordings

Posted March 31, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: archive radio recordings

My sister-in-law works as an archivist, and from what I hear her daily work is pretty much what you'd expect of the job. She spends a lot of time in dark basements, has frequent attacks of dust-triggered sinusitis, sometimes wears white gloves, and most importantly preserves and catalogs old books and papers so they can be accessed by future researchers.

Preserving physically readable materials like books is relatively straightforward, but archivists have run into well-documented problems preserving system-dependent materials like computer files or sounds. In the case of the latter, the earliest examples of recorded sound are becoming more and more difficult to access and play back. Disc records are now generally limited to hi-fi enthusiasts, and maybe 0.5% of the population has ever seen a cylinder phonograph in person, so archivists have been concerned that early recordings may be lost forever.

The US Library of Congress is fighting against that tide thanks in part to IRENE, a device developed at Berkeley Lab by researchers recycling particle physics methodologies. IRENE uses high-res optical technologies to take millions of images of a grooved recording medium and converts the grooves into a sonic waveform. Using optical rather than audio technology has two primary advantages: avoiding further wear on 100+ year old grooves by limiting contact, and the ability to reconstruct sound from broken or unplayable discs or cylinders.
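IRENE's core idea--turn per-column groove measurements into audio samples--can be sketched in a few lines. This is my own toy illustration, not the Berkeley Lab software: it assumes we already have groove-center offsets extracted from the scan images, treats each image column's offset as one audio sample, and simply centers, normalizes, and writes the result out.

```python
import math
import wave
import array

# Toy sketch of IRENE's reconstruction step (not the actual Berkeley Lab
# code): treat the groove's measured lateral displacement in each image
# column as one audio sample, then center, normalize, and save as a WAV.

def groove_to_samples(displacements_um):
    """Convert groove-center offsets (micrometers) into 16-bit PCM samples."""
    mean = sum(displacements_um) / len(displacements_um)
    centered = [d - mean for d in displacements_um]    # remove DC offset
    peak = max(abs(c) for c in centered) or 1.0
    return [int(32767 * c / peak) for c in centered]   # scale to int16 range

# Stand-in "scan": a 440 Hz wiggle in the groove, one column per sample
# at a 44.1 kHz equivalent rate.
rate = 44100
disp = [5.0 * math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
samples = groove_to_samples(disp)

with wave.open("irene_sketch.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit
    w.setframerate(rate)
    w.writeframes(array.array("h", samples).tobytes())
```

The real system does far more--stitching millions of images, tracking groove geometry in 2D or 3D, and cleaning noise--but the displacement-to-waveform conversion is the heart of it.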

IRENE's name is derived from the first audio extraction performed, a Weavers recording of "Goodnight, Irene," but its name has since become a backronym for "Image, Reconstruct, Erase Noise, Etc." The machine made a splash in 2008 when it reconstructed audio from an 1860 phonautogram recording of the French folk song "Au Clair de la Lune." Prior to this discovery, researchers had considered Edison recordings from the 1870s to be the earliest surviving recorded sounds. (True to internet fashion, the entire experimental discography of Édouard-Léon Scott de Martinville, who invented the phonautograph, is on YouTube.)

IRENE has been successfully employed in extracting audio from a wide variety of media since 2008, including Alexander Graham Bell's Volta Labs experiments. The beauty of using optical technology shows in one notable Volta artifact: a wax disc still attached to a primitive recording machine. Researchers simply placed the scanner's beam over the disc and used an external drive to rotate the machine, preserving both the disc and the machine.

In a more recent sound preservation effort, the Library of Congress held a Radio Preservation Task Force symposium in late February, part of a larger collaborative effort to preserve early radio recordings. That conference was inspired by a 2013 LoC report that found that many important historical broadcasts were either untraceable or had been destroyed entirely, and that unlike other archival areas, "little is known of what still exists, where it is stored, and in what condition." Given that radio was once the dominant medium for real-time news broadcasts and discussion of niche topics, rediscovery of historic recordings, rare as it is, is a big deal.

Archivists face perhaps more pressing issues on the digital front as well. Although digital files take up significantly less physical space, they're prone to compatibility problems as hardware and file formats rapidly become obsolete. Whether it's wax cylinders, radio broadcasts, or digital files, sound archivists continue to dutifully perform important, and often thankless, preservation work.

Image credit: Library of Congress Blog

2 comments; last comment on 04/03/2016

Organ Pipes: An Enduring Application for Lead

Posted March 08, 2016 9:52 AM by Jonathan Fuller
Pathfinder Tags: lead organ Pipe spotted metal

The water crisis in Flint, Michigan has thrown lead contamination (as well as poor government oversight and possibly corruption) into the public spotlight. While lead was once common in numerous products and situations, its associated hazards are now universally well-known and it's rarely used except in specialized applications.

One of these applications is organ pipes. Pipes manufactured in J.S. Bach's time were (supposedly) pure lead, but premium modern ones are made of a lead-tin mixture known as "spotted metal." Pipe manufacturers use a tin/lead mixture for both tonal and practical reasons. Lead is pliable and vibrates readily as the air column passes through a pipe, resulting in a warm sound, but even a relatively short pure lead pipe--eight feet or less--will collapse under its own weight. Tin provides the pipe with mechanical stability and lends a balanced brightness to the tone as well. Because each pipe is handmade and hand-voiced, the tin-lead composition is also soft enough to be easily cut and manipulated.
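As an aside, that eight-foot figure isn't arbitrary: in organ terminology a pipe's speaking length sets its pitch. A rough back-of-the-envelope check (my own illustration, not from the post) uses the standard open-pipe approximation f = v / (2L), ignoring end corrections:

```python
# Rough physics sketch (illustrative, end effects ignored): an open
# pipe's fundamental frequency is approximately f = v / (2 * L).

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def open_pipe_fundamental(length_m):
    """Approximate fundamental frequency (Hz) of an open organ pipe."""
    return SPEED_OF_SOUND / (2 * length_m)

eight_feet = 8 * 0.3048  # 2.4384 m
print(round(open_pipe_fundamental(eight_feet), 1))  # ~70 Hz, near C2
```

That lands close to the C two octaves below middle C, which is why 8' ranks sound at "normal" written pitch.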

Pipes take on a spotted appearance when the tin content exceeds about 45%, due to the different melting temperatures of the two metals. As the liquid metal passes through its eutectic point, the metals separate and crystallize into small pools on the surface. (This video provides a nice basic overview of the manufacturing process.) These spots become more prominent as the amount of tin increases. While spotted metal is the Rolls-Royce of pipe metals in terms of tone and stability, organ builders use other ratios and metals as well. "Common metal" pipes are also made of tin and lead, but with tin concentrations of less than 45%, so spots do not form. These pipes are cheaper due to the lower tin content, but don't sound quite as pure as spotted metal ones.

Organ pipes are often made of pure metals as well. Pure tin is often used for audience-facing façade pipes because it boasts the best aesthetic appearance and a bright sound. However, tin pest, a deteriorative condition affecting tin at temperatures below around 13° C, can spoil pipes if proper climate conditions aren't maintained. Pure zinc is strong and cheap and is used for long, low-pitched pipes, which consume more material than higher-pitched ones. It's generally accepted that zinc sounds duller than other metals, but its physical characteristics and low cost have kept it useful to the present day.

As mentioned above, pure lead pipes were relatively common in many early organs, but even large ones have held up to this day. In the late 1970s organ builder John Brombaugh got his hands on some pure lead pipes from a Dutch organ built in 1539. Surprisingly, his shop's analysis found that 16th- and 17th-century European lead contained impurities of about 1% tin, 0.75% antimony, and trace amounts of copper and bismuth. These impurities provided enough stability to make the pipes feasible while enabling the rich, warm sound of almost-pure lead.

While they tend to get short shrift among some modern music lovers, pipe organs are marvels of engineering, most of them using antique technology with the vast majority of parts made and assembled by hand. Large organs contain thousands of pipes and a vast array of mechanical, pneumatic, and electrical control systems. Stay tuned for more organ discussion in future Notes & Lines posts.

Image credit:

10 comments; last comment on 03/12/2016

Guitar Tone, Part 3: Reverb, Delay, and Echo

Posted March 01, 2016 10:19 AM by Jonathan Fuller
Pathfinder Tags: delay echo guitar effects reverb

A continuing discussion about the technical aspects of guitar effects, like this post on distortion and this one on modulation effects, wouldn't be complete without touching on reverb and a few related phenomena. It's easy to assume that any musician--a pianist, wind player, guitarist, whatever--would prefer doing most of their playing in a space that sounds acoustically good, whether that's an auditorium, Boston's Symphony Hall, or the best natural reverberation chamber, a cave. Unfortunately most of us don't have access to these resources, so we're forced to make do with artificial reverb instead.

Physicist Wallace Clement Sabine, who helped design the glorious acoustics of Symphony Hall, gave the science of reverberation a major shove forward in the late 19th century. By lugging around a portable set of organ pipes and windchest, he tested different rooms and measured the time from when the audio source ceased to the inaudibility of all sound, a drop of about 60 dB. This figure is now known as reverberation time or RT60. Sabine found that RT is affected by the size of the room and the amount of total absorption present from certain fabrics, belongings, and people in the room.

Sabine used his observations to develop an empirical formula for a room's reverberation time: RT60 = 0.049·Vf / (S·ā), where Vf is the room's volume in cubic feet, S is the total surface area of the room in square feet, and ā is the average coefficient of absorption.

Electric and electronic instruments, like all others, only sound as good as the room they're played in, so it didn't take long after the invention of the amplifier for players to start recreating reverb effects. Spring reverb was one of the earliest techniques, and Laurens Hammond (a musical engineer covered in a previous Notes & Lines post) received a patent for a spring reverb unit in 1939. Spring units are small amplifiers separate from the main amp. The devices move a guitar signal through a tube circuit (later a transistor circuit) and a small output transformer, eventually reaching a long spring instead of a loudspeaker. The signal vibrates the spring, is picked up by a transducer at the spring's opposite end, and is then mixed back into the original signal by a user-controlled degree.

Plate reverb (image at right) was a more expensive technique that represents the other half of common analog reverb units. These systems use an electromechanical transducer to create a vibration in a large piece of sheet metal. A nearby pickup captures the vibration and outputs it as an audio signal. A damping pad made of acoustical tiles controls the reverb time. Because plate reverb units weighed hundreds of pounds, they were only feasible in the studio but became a popular effect. These hulking cabinets are no longer produced and are now highly sought after as collectors' items. (This video provides a nice overview of plate vs. spring reverb.)

Later reverb effects took a completely different approach, building reverb out of delayed copies of the original signal--first with analog bucket brigade device (BBD) circuits, and now with digital signal processing. Similar circuits and algorithms produce guitar echo, delay, chorus, phaser, and flanger effects--they're simply manipulated in different ways to approximate reverb. Plate and spring reverb units don't sound like playing in a spacious concert hall and have become desirable for their own unique sound, but modern 16- to 24-bit digital reverb units come much closer to the "natural room" sound.

A discussion about digital reverb wouldn't be complete without mentioning delay and echo. Both effects use the same delay-line approach to feed a reproduced guitar signal back into the amp at a slight time delay. "Echo" typically refers to a single delayed repeat, known as a "slap," while delay effects may include multiple intricately repeated signals.
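The feedback delay line behind these effects can be sketched in a few lines of code. This is a minimal illustration of the general technique, not any particular pedal's implementation; the parameter values are arbitrary:

```python
# Minimal sketch of a feedback delay line, the building block behind
# echo and delay effects. Parameter values are illustrative only.

def delay_effect(signal, delay_samples, feedback=0.5, mix=0.5):
    """Mix delayed, decaying copies of the input back into the output."""
    out = []
    for n, x in enumerate(signal):
        # Read back from the output buffer so each echo feeds the next one.
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    # Blend dry (original) and wet (delayed) signals per the mix control.
    return [(1 - mix) * x + mix * y for x, y in zip(signal, out)]

# A single impulse produces a train of decaying repeats ("slaps"):
impulse = [1.0] + [0.0] * 9
print(delay_effect(impulse, delay_samples=3, feedback=0.5, mix=1.0))
# [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

With one pass through the buffer you get a slap echo; with feedback below 1.0 the repeats decay geometrically, which is the characteristic delay-pedal sound. Summing many such lines with different delay times is one crude way to approximate reverb.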

Image credit: Grebe / CC BY-SA 3.0



