CR4 - The Engineer's Place for News and Discussion ®



Notes & Lines

Notes & Lines discusses the intersection of math, science, and technology with performing and visual arts. Topics include bizarre instruments, technically-minded musicians, and cross-pollination of science and art.

The Evolution of Tech Grammar, or: Language is Goofy

Posted April 18, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: ap style grammar words

From 2008 to 2009 I worked for HSBC Bank, one of the more interesting workplaces to be in during the Great Recession. A disgruntled non-customer, who I believe was teetering on the edge of financial oblivion like so many of us, once pointedly asked me what the hell HSBC stood for, anyway. I told him that it was just an acronym*...that the actual name of the business was HSBC Bank--nothing more, nothing less. "But you're like, a Chinese bank, right? Isn't the 'H' for Hong Kong?" I assured him it was not, making him even angrier at the situation.

While I was admittedly being coy and (playfully) screwing with this man, HSBC is in fact a British company; it was originally based in Hong Kong, and the acronym once stood for Hong Kong and Shanghai Banking Corporation. In 1991 it reorganized and has legally existed as HSBC Holdings plc ever since. I can't say I know why the company disassociated itself from its Far Eastern roots, but the point is that language and acronyms change for all sorts of reasons: they fall in and out of common usage, shed unwanted stereotypes or associations, or simply become too verbose or antiquated.

[*As a CR4 editor pointed out to me after reading this post, HSBC might be more accurately termed an initialism before 1991, and a pseudo-initialism since. Acronyms are pronounced as words, like NATO and JPEG, while initialisms are strings of initials read letter by letter.]

A few weeks ago the Associated Press made a major announcement: the 2016 AP Stylebook will lowercase both "internet" and "web." In line with past stylebook changes, it's safe to assume that the AP believes that these two terms are now generic enough to merit lowercased usage. Pro-lowercase activists look to the origin of the word to make their point: the "internet" of old was simply an internetwork of smaller networks using the same protocol. So when we speak about the modern Internet--the one I'm using to research this blog post and connect remotely to my office computer--we're referring to the largest and best-known example of an internet. Also, they say lowercasing is more efficient, saving thousands of Shift-key strokes, and that capitalized nouns are a strain on the eyes, introducing roadblocks into neatly flowing text.

The other side of the battle, with which I sometimes side, takes issue with the word "the." Think about the star at the center of our solar system. A star at the center of some other distant solar system could be called its "sun," but we call our own nearest and best-known example the Sun, capitalized and all, for clarity. I know of no other significant internets besides THE Internet--if you know of one, feel free to comment and enlighten me. And regarding the web, what if we're trying to describe researching spider webs online? Would we look up webs on the web? Isn't the Web clearer? Call me antiquated (my wife does on a daily basis, so I'm used to it), but I like my Internet and Web, even if I'm too lazy to hit Shift and actually capitalize them most of the time.

These technologically related style changes happen pretty frequently. The AP changed its usage of Web site to website in 2010, for example, and e-mail to email in 2011. These make more sense as generic terms, in my opinion: we surely no longer think of email as "electronic mail." With the slow demise of postal mail, perhaps email will one day be referred to as just "mail," and postal mail will become oldmail or cismail, maybe.

The fluidity of technical terminology is also easy to see in anacronyms--words that began as acronyms but have fallen into common usage. "Laser," for example, originally stood for "light amplification by stimulated emission of radiation." Treating "laser" as a common noun allowed us to back-form the verb "to lase," meaning to produce laser light. Ironically for me as a technical writer and editor, even the verb "to edit" was back-formed from "editor," the original term.

The possibility for confusing variation and evolution in the English language is endless. Who knows? Maybe in 50 years our descendants will just switch on their computers and internet.

Image credit: Stinging Eyes / CC BY-SA 2.0

20 comments; last comment on 04/25/2016

Combining Optics and Audio to Save Historical Recordings

Posted March 31, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: archive radio recordings

My sister-in-law works as an archivist, and from what I hear her daily work is pretty much what you'd expect of the job. She spends a lot of time in dark basements, has frequent attacks of dust-triggered sinusitis, sometimes wears white gloves, and most importantly preserves and catalogs old books and papers so they can be accessed by future researchers.

Preserving physically readable materials like books is relatively straightforward, but archivists have run into well-documented problems preserving system-dependent materials like computer files or sounds. In the case of the latter, the earliest examples of recorded sound are becoming more and more difficult to access and play back. Disc records are now generally limited to hi-fi enthusiasts, and maybe 0.5% of the population has ever seen a cylinder phonograph in person, so archivists have been concerned that early recordings may be lost forever.

The US Library of Congress is fighting that tide thanks in part to IRENE, a device developed at Berkeley Lab by researchers recycling particle physics methodologies. IRENE uses high-resolution optical technology to take millions of images of a grooved recording medium and convert the grooves into a sonic waveform. Using optical rather than mechanical playback has two primary advantages: it avoids further wear on 100-plus-year-old grooves by limiting contact, and it can reconstruct sound from broken or otherwise unplayable discs and cylinders.
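To make that last step concrete, here's a minimal sketch in Python of how a sequence of measured groove displacements might be turned into a playable audio file. This is purely illustrative and is not based on IRENE's actual software; the sample rate, the scaling, and the groove_to_wav helper are all assumptions.

import wave
import numpy as np

def groove_to_wav(displacements, out_path, sample_rate=44100):
    """Convert a 1-D array of lateral groove displacements (one value per
    optical measurement, ordered along the groove) into a 16-bit mono WAV."""
    x = np.asarray(displacements, dtype=float)
    x = x - x.mean()                      # remove any DC offset
    peak = np.abs(x).max()
    if peak > 0:
        x = x / peak                      # normalize to the range [-1, 1]
    pcm = (x * 32767).astype(np.int16)    # scale to 16-bit PCM

    with wave.open(out_path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)                # 2 bytes = 16 bits per sample
        wf.setframerate(sample_rate)
        wf.writeframes(pcm.tobytes())

# Quick test with a synthetic 440 Hz "groove," one second long
t = np.linspace(0, 1, 44100, endpoint=False)
groove_to_wav(np.sin(2 * np.pi * 440 * t), "groove_test.wav")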

IRENE's name is derived from the first audio extraction performed, a Weavers recording of "Goodnight, Irene," but it has since become a backronym for "Image, Reconstruct, Erase Noise, Etc." The machine made a splash in 2008 when it reconstructed audio from an 1860 phonautogram recording of the French folk song "Au Clair de la Lune." Prior to that discovery, researchers believed Edison recordings from the 1870s to be the earliest surviving recorded sounds. (True to internet fashion, the entire experimental discography of Édouard-Léon Scott de Martinville, who invented the phonautograph, is on YouTube.)

IRENE has been successfully employed in extracting audio from a wide variety of media since 2008, including Alexander Graham Bell's Volta Labs experiments. The beauty of using optical technology is especially clear in one of those Volta artifacts: a wax disc still attached to a primitive recording machine. Researchers simply positioned the scanner's beam over the disc and used an external drive to rotate the machine, preserving both the disc and the machine.

In a more recent sound preservation effort, the Library of Congress held a Radio Preservation Task Force symposium in late February, part of a larger collaborative effort to preserve early radio recordings. The conference was inspired by a 2013 LoC report that found that many important historical broadcasts were either untraceable or had been destroyed entirely, and that unlike other archival areas, "little is known of what still exists, where it is stored, and in what condition." Given that radio was once the dominant medium for real-time news broadcasts and niche-topic discussion, the rediscovery of a historic recording, rare as it is, is a big deal.

Archivists have perhaps even more pressing issues on the digital front. Although digital files take up far less physical space, they're prone to compatibility problems as computing hardware and file formats rapidly come and go. Whether it's wax cylinders, radio broadcasts, or digital files, sound archivists continue to dutifully perform important, and often thankless, preservation work.

Image credit: Library of Congress Blog

2 comments; last comment on 04/03/2016

Organ Pipes: An Enduring Application for Lead

Posted March 08, 2016 9:52 AM by Jonathan Fuller
Pathfinder Tags: lead organ Pipe spotted metal

The water crisis in Flint, Michigan has thrown lead contamination (as well as poor government oversight and possibly corruption) into the public spotlight. While lead was once common in numerous products and situations, its associated hazards are now universally well-known and it's rarely used except in specialized applications.

One of these applications is organ pipes. Pipes manufactured in J.S. Bach's time were (supposedly) pure lead, but premium modern ones are made of a tin-lead alloy known as "spotted metal." Pipe manufacturers use the mixture for both tonal and practical reasons. Lead is pliable and vibrates readily as the air column sounds inside the pipe, resulting in a warm tone, but even a pure lead pipe of modest length--eight feet or less--will collapse under its own weight. Tin gives the pipe mechanical stability and lends a balanced brightness to the tone as well. And because each pipe is handmade and hand-voiced, it helps that the tin-lead alloy is soft enough to be easily cut and manipulated.

Pipes take on a spotted appearance when the tin content exceeds roughly 45%, due to the different melting temperatures of the two metals. As the molten alloy cools through its eutectic point, the metals separate and crystallize into small pools on the surface. (This video provides a nice basic overview of the manufacturing process.) The spots become more prominent as the amount of tin increases. While spotted metal is the Rolls-Royce of pipe metals in terms of tone and stability, organ builders use other ratios and metals as well. "Common metal" pipes are also made of tin and lead, but with tin concentrations below 45%, so spots do not form. These pipes are cheaper thanks to the lower tin content, but they don't sound quite as pure as spotted metal ones.

Organ pipes are often made of pure metals as well. Pure tin is frequently used for audience-facing façade pipes, which boast the best aesthetic appearance and a bright sound. However, tin pest, a deteriorative condition affecting tin at temperatures below about 13 °C, can spoil pipes if proper climate conditions aren't maintained. Pure zinc is strong and cheap and is used for long, low-pitched pipes, which consume more material than higher-pitched ones. It's generally accepted that zinc sounds duller than other metals, but its physical characteristics and low cost have kept it useful to the present day.

As mentioned above, pure lead pipes were relatively common in many old organs, yet even large ones have held up to this day. In the late 1970s organ builder John Brombaugh got his hands on some pure lead pipes from a Dutch organ built in 1539. Surprisingly, his shop's analysis found that 16th and 17th century European lead contained impurities of about 1% tin, 0.75% antimony, and trace amounts of copper and bismuth. These impurities provided enough stability to make the pipes feasible while preserving the rich, warm sound of almost-pure lead.

While they tend to get short shrift among some modern music lovers, pipe organs are marvels of engineering, most of them using antique technology with the vast majority of parts made and assembled by hand. Large organs contain thousands of pipes and a vast array of mechanical, pneumatic, and electrical control systems. Stay tuned for more organ discussion in future Notes & Lines posts.

Image credit: Freefoto.com

10 comments; last comment on 03/12/2016

Guitar Tone, Part 3: Reverb, Delay, and Echo

Posted March 01, 2016 10:19 AM by Jonathan Fuller
Pathfinder Tags: delay echo guitar effects reverb

A continuing discussion about the technical aspects of guitar effects, like this post on distortion and this one on modulation effects, wouldn't be complete without touching on reverb and a few related phenomena. It's easy to assume that any musician--a pianist, wind player, guitarist, whatever--would prefer doing most of their playing in a space that sounds acoustically good, whether that's an auditorium, Boston's Symphony Hall, or the best natural reverberation chamber, a cave. Unfortunately most of us don't have access to these resources, so we're forced to make do with artificial reverb instead.

Physicist Wallace Clement Sabine, who helped design the glorious acoustics of Symphony Hall, gave the science of reverberation a major shove forward in the late 19th century. By lugging a portable windchest and set of organ pipes from room to room, he measured the time from the moment the source stopped sounding until the sound became inaudible--a drop of about 60 dB. This figure is now known as the reverberation time, or RT60. Sabine found that reverberation time depends on the size of the room and on the total absorption contributed by fabrics, furnishings, and people in the room.

Sabine used his observations to develop an empirical formula for a room's reverberation time: RT60 = 0.049 · Vf / (S · ā), where Vf is the room's volume in cubic feet, S is the total surface area of the room in square feet, and ā is the average coefficient of absorption of those surfaces.
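As a rough worked example (a sketch only--the room dimensions and the average absorption coefficient below are invented for illustration, and the 0.049 constant applies to imperial units):

def rt60(volume_ft3, surface_ft2, avg_absorption):
    """Sabine's empirical reverberation time (seconds), imperial units."""
    return 0.049 * volume_ft3 / (surface_ft2 * avg_absorption)

# A hypothetical 40 ft x 25 ft x 15 ft hall with fairly reflective surfaces
V = 40 * 25 * 15                         # 15,000 cubic feet
S = 2 * (40*25 + 40*15 + 25*15)          # 3,950 square feet of wall, floor, and ceiling
print(round(rt60(V, S, 0.15), 2), "seconds")   # ~1.24 s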

Electric and electronic instruments, like all others, only sound as good as the room they're played in, so it didn't take long after the invention of the amplifier for players to start recreating reverb effects. Spring reverb was one of the earliest techniques, and Laurens Hammond (a musical engineer covered in a previous Notes & Lines post) received a patent for a spring reverb unit in 1939. Spring units are small amplifiers separate from the main amp. The devices move a guitar signal through a tube circuit (later a transistor circuit) and a small output transformer, eventually reaching a long spring instead of a loudspeaker. The signal vibrates the spring, is picked up by a transducer at the spring's opposite end, and is then mixed back into the original signal by a user-controlled degree.

Plate reverb (image at right) was a more expensive technique that represents the other half of common analog reverb units. These systems use an electromechanical transducer to create a vibration in a large piece of sheet metal. A nearby pickup captures the vibration and outputs it as an audio signal. A damping pad made of acoustical tiles controls the reverb time. Because plate reverb units weighed hundreds of pounds, they were only feasible in the studio but became a popular effect. These hulking cabinets are no longer produced and are now highly sought after as collectors' items. (This video provides a nice overview of plate vs. spring reverb.)

Later reverb effects take a completely different approach: analog bucket brigade devices (BBDs) and, more recently, digital signal processors create delayed copies of the original signal. These circuits are similar to those used to produce guitar echo, delay, chorus, phaser, and flanger effects--they're simply manipulated in different ways to approximate reverb. Plate and spring units don't sound much like playing in a spacious concert hall and have become desirable for their own unique character, but modern 16- to 24-bit digital reverb units come much closer to the "natural room" sound.

A discussion about digital reverb wouldn't be complete without mentioning delay and echo. Both effects use similar delay-line technology to feed a copy of the guitar signal back to the amp after a short time lag. "Echo" typically refers to a single delayed repeat, known as a "slap," while delay effects may layer multiple, intricately repeated signals.
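As an illustration of the underlying idea (a simplified sketch, not any particular pedal's algorithm), a digital delay reduces to a feedback delay line: each output sample is the dry input plus an attenuated copy of the output from a fixed number of samples earlier. Low feedback gives a single slap-style echo; higher feedback gives the cascading repeats of a delay effect.

import numpy as np

def delay_effect(x, sample_rate, delay_ms=350.0, feedback=0.35, mix=0.5):
    """Feedback delay line: y[n] = x[n] + feedback * y[n - d]."""
    d = int(sample_rate * delay_ms / 1000.0)    # delay length in samples
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0   # feed back the wet signal
        y[n] = x[n] + feedback * delayed
    return (1 - mix) * x + mix * y              # blend dry and wet

# e.g. slap-back echo: delay_effect(dry, 44100, delay_ms=120, feedback=0.05)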

Image credit: Grebe / CC BY-SA 3.0


Guitar Tone, Part 2: Phasing, Flanging, and Chorus

Posted February 19, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: chorus effects flanging guitar phasing

Last week's Notes & Lines post dove into the somewhat accidental discovery of the most basic guitar tone alterations: distortion, overdrive, and fuzz. In this post, the second of the guitar tone series, we'll take a look at some common modulation effects that might be lurking in your music collection.

Modulation involves changing one or more parameters of a signal--typically amplitude, frequency, or phase. Techniques include modulating the original signal with a carrier or low-frequency oscillator, or splitting the signal into two parts, altering one or both, and mixing them back together. In guitar terms, modulation can add depth, wider dimensions, and subtle movement to the output tone.

Phasing, or phase-shifting, is a common modulation effect. Phasers split the signal into two parts and shift one of them out of phase with the original. When the signals are recombined, they cancel at the point where they're completely out of phase, creating a "notch" in the frequency response. By employing an oscillator, the phaser can sweep the frequency of this notch, creating a whooshing, Doppler-like effect. Most phasers contain more than one shifting stage, adding multiple notches and peaks across the spectrum at once and multiplying the swirling effect.
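To sketch the idea in code (a simplified model, not a schematic of any real pedal): a chain of first-order all-pass filters shifts the signal's phase without changing its amplitude, a low-frequency oscillator sweeps the all-pass coefficient, and mixing the shifted copy back with the dry signal carves out the moving notches. The stage count, sweep range, and parameter names here are assumptions chosen for illustration.

import numpy as np

def phaser(x, sample_rate, stages=4, rate_hz=0.5, depth=0.7, mix=0.5):
    """Toy phaser: cascaded first-order all-pass filters whose shared
    coefficient is swept by an LFO; the wet path is mixed with the dry
    signal, producing notches that sweep up and down the spectrum."""
    n = len(x)
    t = np.arange(n) / sample_rate
    a = 0.2 + 0.3 * depth * (1 + np.sin(2 * np.pi * rate_hz * t))  # swept coefficient, kept below 1
    xp = np.zeros(stages)   # previous input of each all-pass stage
    yp = np.zeros(stages)   # previous output of each all-pass stage
    out = np.zeros(n)
    for i in range(n):
        s = x[i]
        for k in range(stages):
            ap = -a[i] * s + xp[k] + a[i] * yp[k]   # y[n] = -a*x[n] + x[n-1] + a*y[n-1]
            xp[k], yp[k] = s, ap
            s = ap
        out[i] = (1 - mix) * x[i] + mix * s
    return out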

Much like guitar distortion and fuzz, phasing had a somewhat accidental birth. The Uni-Vibe, an early guitar footpedal effects unit, was introduced in the 1960s to allow guitarists to achieve a rotary speaker effect that was popular on electronic organs at the time. Its tone more or less failed to emulate a rotary speaker, but the unit was a pioneering four-stage phase shifter that popularized the phasing sound. The Uni-Vibe used four light bulb circuits to achieve the initial phase shifting and an LFO to shift the notches around. Later pedals replaced the light bulbs with FETs, with certain models employing opamps with variable resistors. Almost all phasers allow the player to control at least the speed of the phasing, while others allow greater control over the waveform shape and resonance.

Flanging is often viewed as a companion effect to phasing, but its origin story is quite different. Analog flanging requires recording a signal on two separate tape machines, then periodically slowing one of them down by pressing a finger on the edge (or "flange") of the reel. When the signals are mixed together, a comb-filter effect creates the characteristic "jet plane" or "spaceship" sound. While flanging was allegedly used as early as 1959, and was definitely used by The Ventures for a few brief moments in a 1962 recording, many believe that George Martin and John Lennon coined the verb "flange" as a nonsense term that later stuck to the technique.

Like the analog version, solid-state flanging relies on mixing a signal with an exact time-delayed copy of itself, creating peaks and troughs in the waveform. Flanging sounds a lot like a more severe version of phasing; guitar flangers use technology similar to phasers but require far more stages--hundreds of delay stages rather than a handful--so solid-state devices weren't possible until suitably dense ICs arrived in the '70s. The 1977 A/DA Flanger, probably the first commercially successful device, was made possible by the SAD-1024 chip, a bucket-brigade device (BBD) IC with 512 stages.

Finally, a third related modulation effect is chorusing. Chorus effects use the same double-signal technique as flanging, except that the delay is somewhat longer and only gently modulated, leaving the copies slightly out of tune with one another and producing a quivering, spacious version of the original signal. The effect is theoretically similar to (and designed to emulate) a violin section or a 12-string guitar: when a number of strings play together, a few of them are bound to be just slightly out of tune, producing a richer, more resonant sound with a natural shimmer. Like flangers, solid-state chorus devices didn't appear until the late '70s, when short-delay ICs became affordable. One of the first--and maybe best-known--songs to use solid-state chorus was The Police's 1979 hit "Message in a Bottle", and in later years Nirvana used chorus on their breakthrough album Nevermind, notably in the verses of "Smells Like Teen Spirit" and throughout most of "Come As You Are". Most modern chorus effects use digital technology, which simply adds delay and pitch modulation to a doubled signal. This results in greater capability and range, so much so that it can make listeners a little dizzy...
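Both flanging and chorus boil down to the same digital skeleton, sketched below: mix the dry signal with a copy read from a short delay line whose length is slowly swept by an LFO. The parameter values are illustrative rather than taken from any specific unit; roughly, flanger-style settings use a very short delay of a few milliseconds (often with feedback, omitted here for brevity), while chorus-style settings use a longer delay of around 20-30 ms.

import numpy as np

def modulated_delay(x, sample_rate, base_ms, depth_ms, rate_hz, mix=0.5):
    """Dry signal mixed with a copy read from a delay line whose length is
    swept sinusoidally around base_ms (feedforward only, for brevity)."""
    n = len(x)
    t = np.arange(n) / sample_rate
    delay = (base_ms + depth_ms * np.sin(2 * np.pi * rate_hz * t)) * sample_rate / 1000.0
    out = np.zeros(n)
    for i in range(n):
        j = i - delay[i]                     # fractional read position
        if j >= 0:
            j0 = int(j)
            frac = j - j0
            wet = (1 - frac) * x[j0] + frac * x[min(j0 + 1, n - 1)]  # linear interpolation
        else:
            wet = 0.0                        # delay line not yet filled
        out[i] = (1 - mix) * x[i] + mix * wet
    return out

# flanger-ish: modulated_delay(signal, 44100, base_ms=3, depth_ms=2, rate_hz=0.3)
# chorus-ish:  modulated_delay(signal, 44100, base_ms=25, depth_ms=5, rate_hz=1.0)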

Image credits: Public domain | P.B. Rage / CC BY SA 2.0

7 comments; last comment on 04/24/2016

The Happy Accident of Fuzz Guitar

Posted February 12, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: distortion effects fuzz guitar

Mark Twain supposedly said, "Name the greatest of all inventors: accident." This quote holds true for many scientific and technological inventions: pacemakers, microwaves, radioactivity, even the little blue pill that crops up so often here on CR4. In many ways guitar distortion, which has been omnipresent in many styles of pop music for decades, evolved in the same way. In this post, the first in a series on guitar tones, we'll dig into the history of distorted guitar and a few engineering attempts to recreate and control it.

Many an 11-year-old aspiring headbanger has rushed to their basement, plugged in their new Christmas present, and uttered "Hey, what gives? This doesn't sound like [Metallica/Nirvana/Led Zeppelin/any heavy-sounding band of your liking]!" The problem, of course, is that an unadulterated guitar run through an unadulterated amp produces a bell-like "clean" tone--the sound most electric guitars were originally designed to produce. Our young metalhead would need to somehow distort his guitar's tone to achieve the sound he's looking for.

In most signal processing scenarios, distortion is a bad thing: unruly gain and noise can corrupt an original signal to the point of being unrecognizable. The original electric guitars of the 1930s and '40s were in the same boat, and early tube amps were so lo-fi that cranking them up even a little past their ideal volume would "overdrive" their circuits, making the sound dirty and unclear. Most players tried to avoid this effect, but some early rock and blues artists--including Elmore James, Buddy Guy, and to a greater degree Chuck Berry--deliberately used overdriven amps to produce the raucous sound they were after. By the 1950s, rock guitarists had begun punching holes in speaker cones and partially dislodging vacuum tubes to distort their sound more reliably.

This now-classic "valve distortion" method relies on clipping, a form of waveform distortion. A clipped sine wave, for example, loses its positive and negative extremes and appears "squared off" on paper. Early guitarists found that the triode tubes in their amps soft-clipped the output signal when the input voltage became excessive, producing a desirable, warm sound. Conversely, a guitarist could theoretically produce a harsh, gritty sound by running a valve circuit at excessively low voltages, known as a "starved plate" effect.
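As a rough digital analogue of those two behaviors (a sketch, not a model of any particular tube circuit): hard clipping flattens everything beyond a threshold, producing the squared-off wave described above, while soft clipping rounds the extremes off gradually, which is closer to the warm overdrive of a pushed tube amp.

import numpy as np

def hard_clip(x, threshold=0.5):
    """Squared-off clipping: anything beyond the threshold is flattened."""
    return np.clip(x, -threshold, threshold)

def soft_clip(x, drive=3.0):
    """Gradual, tube-like clipping: tanh rounds off the wave's extremes."""
    return np.tanh(drive * x) / np.tanh(drive)

# A 220 Hz sine "guitar note" pushed through both
t = np.linspace(0, 0.05, 2205, endpoint=False)
note = np.sin(2 * np.pi * 220 * t)
buzzy = hard_clip(note)    # harsh, squared-off
warm = soft_clip(note)     # smoother onset of distortion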

The discovery of fuzz guitar, a companion effect to distortion and overdrive, was truly accidental. In 1961, Grady Martin attempted to lay down a six-string bass guitar track for the country-western Marty Robbins tune "Don't Worry" through a faulty preamplifier, resulting in an unfocused, fuzzy, trombone-like tone. Although Martin was horrified, Robbins liked it and released the tune, which became a #1 Billboard hit and spurred demand for fuzzy guitar tones from surf rockers and others.

Martin and Robbins' recording engineer, Glenn Snoddy, found that the fuzz had been caused by a faulty transformer with an open primary coil, which most likely developed a small arc during recording. Martin used the bad transformer to record a weird fuzz-based single before it promptly quit working altogether. Snoddy went on to develop a solid-state circuit that would clip the signal and reproduce the fuzz on demand. Gibson Guitar Corp., one of the most innovative manufacturers of the era, bought the design and in 1962 produced it as the Maestro FZ-1 Fuzz-Tone footswitch, the first guitar effects pedal. There are now hundreds of guitar pedal variants like those seen on this page, many of which induce preamp-like solid-state distortion.

The Fuzz-Tone failed to sell for several years--legend has it that Gibson only shipped three from 1963 to 1964. In 1965, a young English guitarist named Keith bought one and used it to emulate a trombone on a rough scratch track intended to guide a future horn section. His band's manager liked the guitar riff as it was and left it in. "Satisfaction" was released and went straight to #1 on the international charts, paving the way for the dirty, grungy guitar sound that many of us have come to desire.

Image credits:

Christian Feuersänger / CC BY 2.5 | Public domain

14 comments; last comment on 02/16/2016

