


Notes & Lines

Notes & Lines discusses the intersection of math, science, and technology with performing and visual arts. Topics include bizarre instruments, technically-minded musicians, and cross-pollination of science and art.

Is Digital Music Making You Sad?

Posted December 15, 2016 12:00 AM by Jonathan Fuller
Pathfinder Tags: audio compression

John Philip Sousa—composer of the well-known “Stars and Stripes Forever” march and namesake of the sousaphone—famously hated recorded sound. He almost always refused to conduct his band when recording equipment was present, and in 1906 he said this during a congressional hearing: “These talking machines are going to ruin the artistic development of music in this country. When I was a boy...in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left...”

Today, as Sousa predicted, recorded sound has rendered most forms of live music-making irrelevant, or at least not strictly necessary. And while digital music is worlds away from the wax cylinders he griped about, its sound is still very different from live performances.

An open-access article published this month in the Journal of the Audio Engineering Society attempted to examine the relationship between audio compression and emotion. A group of four researchers from the Hong Kong University of Science and Technology played subjects audio samples of eight different orchestral instruments at three different degrees of audio compression. Subjects sorted the samples into ten emotional categories, from happy and heroic to scary and angry. The study found that reducing a sample from 112 kbps to 32 kbps resulted in a reduction in “positive” emotions and an increase in negative ones. It also found that the trumpet samples were affected most and the horn least by far, probably because of the difference in overtones between the two instruments.

Several audiophile blogs and websites have already picked up on the study, claiming that it corroborates their belief that vinyl is still the ideal listening format. The research is problematic in a few ways, however. The group postulated that 32 kbps audio compression adds a background “growl” that causes respondents to classify their samples more negatively. As demonstrated in the video below, the difference between 112 kbps and 32 kbps bit rates is monumental, and I’d call the “growl” they're referring to straight-up distortion. The bit rates chosen by the team are surprisingly low in 2016 terms; even the inexpensive, 10-year-old digital recorder I use at home records at 320 kbps. Also, emotional responses are highly subjective and vary from participant to participant. To be fair, the researchers consistently describe the study as an introductory one that could lead to further, more substantial investigation.
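If you’d like to hear the difference for yourself, it’s straightforward to generate comparison files like the ones in the video. Below is a minimal sketch that assumes ffmpeg is installed and that you have an uncompressed recording named sample.wav; the file name, and the choice of MP3 as the lossy codec, are my own assumptions rather than details taken from the study.

# Sketch: transcode one source recording at the two bit rates contrasted in the
# study (112 kbps and 32 kbps). Assumes ffmpeg is on the PATH and a source file
# named "sample.wav" exists; both are illustrative assumptions.
import subprocess

SOURCE = "sample.wav"          # hypothetical uncompressed source recording
BIT_RATES = ["112k", "32k"]    # the two lossy bit rates compared in the study

for rate in BIT_RATES:
    out = f"sample_{rate}.mp3"
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-codec:a", "libmp3lame", "-b:a", rate, out],
        check=True,
    )
    print(f"wrote {out}")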

Lossy audio compression, the technology that makes it possible for hundreds or thousands of songs to fit onto a matchbook-sized MP3 player, has changed the sound of recorded music dramatically. Tech-savvy listeners who are more concerned with sound quality can convert their music with lossless codecs instead, but those tracks can be up to five times larger than their lossy-compressed counterparts. I find it astounding that, only 110 years after Sousa’s comments, one can access millions of hours of free, high-quality music on YouTube at any time of day in any part of the world (with Wi-Fi). But I’d be interested to see more studies on the relationship between compressed music and listener emotion; I suspect they would show that there’s still no convenient substitute for hearing it live.
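To put those size differences in rough perspective, here’s some back-of-the-envelope arithmetic for constant-bit-rate audio; the four-minute track length and the list of bit rates are just illustrative values.

# Back-of-the-envelope storage math for constant-bit-rate audio:
#   size in megabytes = bit rate (kbps) x duration (s) / 8 / 1000
def size_mb(bitrate_kbps: float, minutes: float) -> float:
    return bitrate_kbps * minutes * 60 / 8 / 1000

for kbps in (32, 112, 320, 1411):    # 1,411 kbps is uncompressed CD-quality audio
    print(f"{kbps:5d} kbps -> {size_mb(kbps, 4):5.1f} MB for a 4-minute track")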

28 comments; last comment on 01/05/2017

Why's It So Hard to Build A Decent Music Hall?

Posted November 21, 2016 12:00 AM by Jonathan Fuller

On October 31 construction work on the Hamburg Elbphilharmonie concert hall officially ended. The magnificent glass façade sits atop a former warehouse on the river Elbe, and the building is now the tallest inhabited structure in Hamburg. The 2,039-seat hall will officially open in early January.

Despite its grandiose appearance, the Elbphi (as it’s popularly called) was something of a budgetary disaster. It took over nine years to build, six more than planned. Construction was initially estimated in 2007 to cost €77 million; the final cost of €789 million was roughly ten times that figure. The project was subject to some blistering media criticism and public scorn as a result of these setbacks.

Anyone well-versed in modern music hall construction would expect nothing less from such a project. Building a concert hall is characteristically expensive and time-consuming, often beginning with reasonable estimates that quickly spiral out of control. For example, the famed Sydney Opera House broke ground in 1959 and could have been finished as early as 1964 at a cost of AU$7 million. Due to a variety of setbacks, including weather, site drainage and miscommunication, the hall opened in 1973 after construction costs ran to AU$102 million, nearly a billion Australian dollars in 2016 terms. In Los Angeles, the Frank Gehry-designed Walt Disney Concert Hall took 12 years to build because of the need for a $110 million underground parking garage and stalled funding in the mid-1990s.

Why are concert halls so difficult and expensive? Many critics point to the demands levied by famous conductors and classical music impresarios; indeed, all three halls discussed so far are iconic, breathtaking spaces worthy of hosting great music (and huge egos). Others point to the often faulty economics of selling classical music: a hall must have enough seats to justify its cost, and perhaps be architecturally grandiose enough to attract people to fill them.

Taking a hall’s acoustics into account creates an interesting interplay between architecture and seating capacity. Most of the concert halls judged to sound the best (including Vienna’s Musikverein, Amsterdam’s Concertgebouw, and Symphony Hall in Birmingham, UK) incorporate a “shoebox” design in which the auditorium is a simple rectangle with the performing group stationed at one end. In a shoebox hall, the first strong reflections a listener hears arrive from the side walls, so the sound reaching each ear is subtly different: a lateral reflection takes slightly longer to reach the far ear and is attenuated as it bends around the head. These small interaural differences give the listener the impression of being enveloped in sound. One perceived drawback of shoeboxes is that listeners in the very back, farthest from the performers, experience a less satisfying aural and visual performance.
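For a rough sense of the scale of those interaural differences, here’s a quick estimate of the extra delay to the far ear using the classic Woodworth spherical-head approximation. The head radius, speed of sound, and reflection angles below are assumed textbook values, not measurements from any particular hall.

# Rough interaural time difference (ITD) for a sound arriving from off-axis,
# using the Woodworth spherical-head approximation:
#   ITD ~= (a / c) * (theta + sin(theta))
# where a is head radius, c is the speed of sound, and theta is the azimuth.
import math

HEAD_RADIUS_M = 0.0875    # assumed average head radius, meters
SPEED_OF_SOUND = 343.0    # meters per second at room temperature

def itd_ms(azimuth_deg: float) -> float:
    theta = math.radians(azimuth_deg)
    return 1000.0 * HEAD_RADIUS_M / SPEED_OF_SOUND * (theta + math.sin(theta))

for angle in (15, 30, 60, 90):
    print(f"reflection arriving {angle:2d} degrees off-axis -> ITD of about {itd_ms(angle):.2f} ms")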

An alternative design, pioneered with the Berlin Philharmonie in 1963, is the “vineyard” hall, in which the orchestra is effectively surrounded by its audience. This design is something like a musical theater-in-the-round, with terraced blocks of listeners spread throughout the space, including behind the orchestra. Vineyard halls have a few cool advantages, including the ability to see the conductor’s face from a seat behind the orchestra, but acoustics are not among them. To begin with, many instruments (including trombones, trumpets, clarinets and oboes) project their sound straight out from a bell or pipe. Those seated behind the ensemble hear much less of these instruments but much more of the horns, whose bells face backward. Most vineyard halls have angled walls, so they don’t produce the same enveloping reflections that shoebox halls do. To make matters worse, vineyard halls were originally conceived to increase the number of seats (and tickets sold), and bodies and clothing act as natural sound absorbers: the larger the audience, the quieter and duller the sound.
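That last point is easy to quantify with Sabine’s classic reverberation formula, RT60 = 0.161 x V / A, where V is the hall volume in cubic meters and A is its total absorption in metric sabins. The hall volume, empty-hall absorption, and per-listener absorption below are illustrative assumptions, not measurements of any real hall.

# Sabine's formula: RT60 = 0.161 * V / A (metric units). The constants below are
# illustrative guesses, chosen only to show how audience size drags down
# reverberation time.
HALL_VOLUME_M3 = 20000.0        # on the order of a large concert hall
EMPTY_ABSORPTION = 1400.0       # metric sabins: walls, empty seats, stage
ABSORPTION_PER_PERSON = 0.45    # additional metric sabins per seated listener

def rt60(audience_size: int) -> float:
    total_absorption = EMPTY_ABSORPTION + ABSORPTION_PER_PERSON * audience_size
    return 0.161 * HALL_VOLUME_M3 / total_absorption

for audience in (0, 1000, 2000):
    print(f"{audience:4d} listeners -> RT60 of about {rt60(audience):.1f} s")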

A glance at London’s famous halls shows that designing an economically successful hall that sounds “good” has always been a difficult proposition. The famed Royal Albert Hall, which hosts the popular BBC Proms, is typically judged an acoustical failure due to its enormous, 5,000-seat size. (Granted, it was built in 1871, before acoustic design was a true science.) London’s Royal Festival Hall was built within a reasonable time and on budget (18 months and £2 million) using the state-of-the-art acoustical science of 1951, but the designers failed to take audience absorption fully into account, and it’s now judged one of the driest, and worst-sounding, halls in Europe.

Sir Simon Rattle will become conductor of the London Symphony Orchestra next year, and it’s no surprise that he’s already lobbying for a brand new £100-200 million concert hall to replace the vineyard-style Barbican Centre. If he gets his wish it’s likely that the new building will be an architectural behemoth, hopefully with just enough seats to satisfy the orchestra and the audience.

Image credits: a as architecture | Santa Fe University of Art and Design

18 comments; last comment on 12/14/2016

Musical AI Takes Off

Posted July 28, 2016 12:00 AM by Jonathan Fuller

Music is a no-brainer when it comes to AI research. It has a finite set of rules, a relatively limited scale, and fairly strict limits on what “sounds good,” at least within the bounds of a researcher’s subjective taste. The concept of musical AI is also at least 50 years old: well-known futurist Ray Kurzweil appeared with a music-writing computer of his own invention on I’ve Got a Secret in 1965. However outlandish a music-writing computer might have seemed in the 1960s, Kurzweil unfortunately stumped only one panelist.

This summer has seen a bumper crop of headlines about musical AI. The most notable example is probably IBM’s Watson, the Jeopardy-winning supercomputer. IBM is leveraging Watson to create Watson Beat, a new app designed to boost a musician’s creativity. A user feeds the app a musical snippet and a list of desired instrument sounds, and Watson more or less remixes and alters the sample, choosing its own tempo and doing its own orchestration. Richard Daskas, a composer working on the Beat project, says the app could be helpful for a DJ or composer experiencing “musician’s block” in that it “generate[s] ideas to create something new and different.” An IBM researcher working on the project says Watson Beat should be available commercially by the end of the year.

If there’s a developing tech-related area, you can expect Google to be in the ring. A few months ago the tech giant released a 90-second melody created by its Magenta program. Google is using Magenta, a project it first announced in May, to apply its machine learning systems to creating art and music. Similar to Watson Beat, a user feeds Magenta a series of notes and the program expands them into a longer, more complex sample. Magenta relies on a trained neural network, an AI technique inspired by the brain, to remix its inputs. Google’s neural network efforts have already tackled visual art: its DeepDream algorithm was the basis for a gallery show early in 2016.
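Neither company spells out its model in enough detail to reproduce here, but the basic workflow (seed melody in, longer melody out) is easy to illustrate. The sketch below is a toy first-order Markov chain over MIDI note numbers, a deliberately crude stand-in for the neural networks that Watson Beat and Magenta actually use; the seed tune and every parameter are my own choices.

# Toy illustration of "seed melody in, longer melody out": a first-order Markov
# chain over MIDI note numbers. This is NOT Magenta's or Watson Beat's method,
# just a minimal stand-in for the general idea.
import random
from collections import defaultdict

def continue_melody(seed, length=32, rng=random.Random(0)):
    # Learn which note tends to follow which within the seed itself.
    transitions = defaultdict(list)
    for current, following in zip(seed, seed[1:]):
        transitions[current].append(following)
    melody = list(seed)
    while len(melody) < length:
        choices = transitions.get(melody[-1]) or seed    # fall back to the seed
        melody.append(rng.choice(choices))
    return melody

# The opening of "Twinkle, Twinkle" as MIDI note numbers (middle C = 60).
seed = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]
print(continue_melody(seed))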

Recent research from Baidu, the Chinese tech giant, takes a different tack and combines AI, visual art, and music. The company’s experimental machine learning algorithms analyze patterns in visual artworks and map them to musical patterns, creating a kind of “soundtrack.” (Check out this video, in which the AI Composer tackles Van Gogh’s Starry Night and other images.) Baidu says the program first attempts to identify known objects such as animals or human faces within the image, and analyzes colors for perceived moods (red=passion, yellow=warmth, etc.). AI Composer contains a large musical library categorized by “feel,” and it draws upon these musical samples to piece together an original composition in the mood of the image.
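Baidu hasn’t published the algorithm itself, but the color-to-mood step is easy to caricature. The sketch below shrinks an image to its average color and looks up a mood by hue; the mood table, hue ranges, and file name are entirely my own inventions, not Baidu’s mappings (beyond the red=passion, yellow=warmth hints above).

# Crude caricature of mapping an image's overall color to a "mood," loosely in
# the spirit of Baidu's description. The mood table and hue ranges are invented.
# Requires the Pillow imaging library.
import colorsys
from PIL import Image

MOOD_BY_HUE = [
    (0.00, 0.08, "passion"),      # reds
    (0.08, 0.17, "warmth"),       # oranges and yellows
    (0.17, 0.45, "calm"),         # greens
    (0.45, 0.70, "melancholy"),   # blues
    (0.70, 1.00, "mystery"),      # purples and magentas
]

def image_mood(path: str) -> str:
    # Shrinking to a single pixel gives an approximate average color.
    r, g, b = Image.open(path).convert("RGB").resize((1, 1)).getpixel((0, 0))
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    for low, high, mood in MOOD_BY_HUE:
        if low <= hue < high:
            return mood
    return "passion"    # a hue of exactly 1.0 wraps back around to red

print(image_mood("starry_night.jpg"))    # hypothetical file name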

A large grain of salt is necessary when evaluating AI developments, at least in my opinion. It’s exciting to see artificial neural networks doing their thing, but even considering the subjective nature of art and music, it’s hard to see how Watson, Magenta, or AI Composer have produced anything “good” or worth listening to. Granted, they’re all in the early stages, so who knows? Maybe we’ll see a day when composers come up with basic ideas and let computers do the rest. I for one hope that day’s far off over the horizon.


A Portable Pipe Organ: the International Touring Organ

Posted July 05, 2016 1:25 PM by BestInShow

Jonathan Fuller describes how the digital organ grew from the desire of a father to give his son a decent instrument to play at home. The desire of a 21st century organ virtuoso for a virtual pipe organ, a personal instrument that would travel with him, led to the development of the International Touring Organ, perhaps the most sophisticated application of digital organ technology – and organ-building ingenuity – to come to concert venues yet. The ITO story, like that of the Allen digital organ, combines the vision of a performer with pioneering technology to produce a stupendous, if sometimes controversial, musical instrument.

Traditional pipe organ. Image credit: Wikimedia Commons.

The artist

Cameron Carpenter, a prodigiously talented, Juilliard-educated organist, is out to change the conception that pipe organs are just for sacred music (with exceptions for baseball parks and movie theaters). His repertoire ranges from Bach to Bernstein to Leonard Cohen. Carpenter began campaigning for a portable pipe organ ten years or so ago. “Unlike a violinist who plays at 28 the instrument they played at eight, I have to go every night and play an instrument I don't know,” he said. “What one doesn't see is the countless hours of rehearsals which are spent, not on the music, they're spent on [the organ].” Pipe organs aren’t portable. And they are intricately complex; no two are alike, and a performer typically needs hours to get acquainted with each instrument prior to a performance. Moreover, an instrument fixed in place limits concerts to locations with organs. Carpenter understood that applying the digital technology powering smaller organs to instruments of a grander scale would give him the portable instrument he craved.

The organ builders

Marshall and Ogletree Organ Builders (M and O) is the brainchild of two highly respected concert organists, Douglas Marshall and David Ogletree. Founded in 2002, the company builds digital organs intended to match, or surpass, the sound of the greatest pipe organs in the world. Since its inception the company has built a dozen organs. Each organ’s sounds are built from samples of real pipe organs, using M and O’s patented process, which builds on the one described in Fuller’s article. The company works in close cooperation with each instrument’s commissioners. Marshall and Ogletree undertook a daring proof of concept: the company built an electronic organ based solely on the sound of one pipe organ, an 1897 George Hutchings organ at Boston's Basilica of Our Lady of Perpetual Help. “Blind” comparisons of the original organ’s sound with that of the M and O instrument demonstrated the new organ’s fidelity to the original. Follow the link to do your own comparison.
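M and O’s sampling process is patented and not described in enough detail to reproduce, so the sketch below is only a loose illustration of generating an organ-like tone in software. It uses simple additive synthesis (summing a handful of harmonics), a different and far cruder technique than sample playback; the pitch, harmonic weights, and envelope times are all invented for illustration.

# Minimal additive-synthesis sketch of a pipe-like tone: sum a few harmonics of
# a fundamental, apply a gentle attack/release envelope, and write a WAV file.
# The harmonic weights and envelope times are invented; this is not Marshall and
# Ogletree's sample-based process.
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION_S = 2.0
FUNDAMENTAL_HZ = 220.0                          # A below middle C
HARMONIC_WEIGHTS = [1.0, 0.5, 0.3, 0.2, 0.1]    # assumed, principal-like spectrum

def sample(t: float) -> float:
    tone = sum(w * math.sin(2 * math.pi * FUNDAMENTAL_HZ * (i + 1) * t)
               for i, w in enumerate(HARMONIC_WEIGHTS))
    attack = min(t / 0.05, 1.0)                     # 50 ms attack
    release = min((DURATION_S - t) / 0.2, 1.0)      # 200 ms release
    return tone / sum(HARMONIC_WEIGHTS) * attack * release

frames = b"".join(
    struct.pack("<h", int(32767 * 0.8 * sample(n / SAMPLE_RATE)))
    for n in range(int(SAMPLE_RATE * DURATION_S))
)
with wave.open("organ_tone.wav", "wb") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(frames)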

The collaboration

In 2013, Carpenter commissioned M and O to build his touring organ: an instrument equal in sound to the finest pipe organs in the world, yet transportable to concert venues. To create the sounds the artist wanted, M and O synthesized sounds from Carpenter’s favorite organs worldwide, including at least one Wurlitzer and organs he knew from his childhood.

The complexity of this commission goes beyond creating the organ sound; M and O developed the sound system and all of the internal components that make this organ work. Eighteen cases of components fill the truck that accompanies Carpenter to his concerts:

  • The six-manual console
  • Ten cases of speakers
  • Eight cases of subwoofers
  • A supercomputer/amplifier system

Three supercomputers with extremely fast processing speeds manage the conversation between the organ’s manuals and the stored sounds. The flexible sound system can operate on anything from two channels, for a set of headphones, up to 48 channels to fill a large concert venue.
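Some quick arithmetic shows why the audio side alone demands serious processing. The 48-channel figure comes from M and O; the sample rate and bit depth below are my own assumptions, included only to give a sense of scale.

# Raw audio data rate for the ITO's sound system. The channel counts come from
# the post; the 48 kHz sample rate and 24-bit depth are assumed for illustration.
def data_rate_mbps(channels: int, sample_rate_hz: int = 48000, bit_depth: int = 24) -> float:
    return channels * sample_rate_hz * bit_depth / 1e6

for channels in (2, 48):
    print(f"{channels:2d} channels -> roughly {data_rate_mbps(channels):.1f} Mbit/s of uncompressed audio")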

The ingenuity involved in making this whole system portable is just about as impressive as engineering the digital organ sounds. The console breaks down into six pieces. The speakers, subwoofers, and console pieces fit into purpose-designed cases that protect them from the rigors of travel. According to M and O, “the entire organ, including console, racks and 48 channel audio system can be set up from dock to operational in about two hours, and broken down in about the same.” This video shows a greatly sped-up version of the organ setup.

The result

Carpenter debuted his new International Touring Organ (ITO) March 9, 2014, in New York City’s Alice Tully Hall. The verdict of New York Times music critic Anthony Tommasini on the organ’s sound: “quite terrific.” Later in 2014, Mark Swed in the Los Angeles Times dubbed the ITO “… a genuine dream organ, a fantastically versatile electronic instrument with magnificent sounds…” A more recent Boston Classical review (July 11, 2015) characterizes the ITO as “an orchestra unto itself, capable of shimmering wah-wah effects.” Writing when the ITO made its debut, the artist himself pronounced that “… [The ITO] dares me to make good on my talent …”

Cameron Carpenter and the ITO. Image credit: Marshall and Ogletree

Perhaps the most significant technical breakthrough for M and O organ technology is the firm’s ability to build instruments finely tuned to the commissioner’s liking, to fit both the venue(s) and the performer’s repertoire. The ITO’s portability, while real, is still limited to locations that its truck can access, and that have enough power to pump into its circuits. I’m personally thankful that this somewhat lumpy portability brought Carpenter and the ITO to Tanglewood, the Boston Symphony Orchestra’s summer home, last July. I can confirm that Carpenter and the ITO do indeed sound quite terrific.

References

http://www.cameroncarpenter.com/

http://www.marshallandogletree.com/

https://en.wikipedia.org/wiki/Cameron_Carpenter

http://www.theverge.com/2014/5/22/5741570/cameron-carpenter-international-touring-organ

http://www.metroweekly.com/2013/10/pipe-dreams/


Snow in June, Frost in July, Ice in August

Posted May 09, 2016 10:32 AM by Jonathan Fuller

This coming summer marks the 200th anniversary of one of the most severe weather anomalies in modern history. The Year Without a Summer, as it's now commonly known, wreaked havoc on much of the Northern Hemisphere. In upstate New York and New England, in the vicinity of CR4 headquarters, snow fell in June, frosts were common from May through August, and temperatures sometimes swung violently from normal summer highs of 90° F or more to near-freezing in a matter of hours.

The climatic conditions of 1816 also brought unseasonably low temperatures and heavy rains as far away as China. In Europe famine was widespread, and riots, looting, arson and demonstrations were common occurrences. Throughout the hemisphere, farming became nearly impossible and grain prices soared. In an age when subsistence farming was the norm and commoners worked their hands to the bone to feed their families, crop failures often meant the possibility of starvation.

Contemporary observers were almost completely perplexed by the disappearance of the summer of 1816, but scientists now believe it was the result of a few interrelated factors. The most significant of these was the April 1815 eruption of Mount Tambora on the Indonesian island of Sumbawa. Tambora was likely the most powerful volcanic eruption in recorded history, with a column height of over 26 miles and a tephra volume of over 38 cubic miles. Over 70,000 Indonesians were killed following the blast. The enormous amount of volcanic ash that spewed into the atmosphere reflected large quantities of sunlight and lowered Northern Hemisphere temperatures.

To compound the effects of the ash, modern scientists also believe that solar magnetic activity was at a historic low in 1816, the midpoint of a 25-year solar period known as the Dalton Minimum. By studying the presence of carbon-14 in tree rings, solar astronomers have concluded that sunspot activity was abnormally low, reducing the amount of solar radiation reaching Earth. The Tambora eruption also caused a persistent dry fog to settle over much of the Northern Hemisphere, producing a reddened and dimmed Sun and, ironically, making sunspots visible to the naked eye. With little knowledge of the eruption, 19th-century Americans and Europeans often blamed the red, spotty Sun alone for the abnormal weather, while in reality Tambora's ash played a much more significant role.

A third, less-studied factor is the possibility of a solar inertial shift. These shifts, occurring every 180 years or so due to the gravitational pull of the largest planets in the Solar System, cause the Sun to wobble about the Solar System's center of mass and possibly affect Earth's climate. Scientists point to three of these shifts (in 1632, 1811, and 1990) that correspond to major climatic events: the solar Maunder Minimum of 1645-1715, the Dalton Minimum discussed above, and the eruption of Mount Pinatubo with corresponding global cooling in 1991. This association remains largely hypothetical, however.

The Year Without a Summer produced some interesting and long-lasting cultural effects. Thousands left the American Northeast and settled in the Midwest to escape the frigid summer; Latter-day Saints founder Joseph Smith was forced to move from Vermont and settle in western New York, the first in a series of events that culminated in his writing The Book of Mormon. German inventor Karl Drais may have invented the Laufmaschine, the predecessor of the bicycle, in 1817 in response to the shortage of horses caused by the 1816 crop failure.

That summer may have influenced contemporary art as well. The high concentrations of tephra in the atmosphere led to spectacular yellow and red sunsets, which were captured in J.M.W. Turner's paintings of the 1820s. (If you've ever wondered about the vivid red sky in the more widely known painting The Scream, some modern scholars believe Edvard Munch may have viewed a similarly vivid sunset produced by the 1883 eruption of Krakatoa.) Trapped inside their Swiss villa by the excessive rains of June 1816, a group of English writers on holiday passed the time by seeing who could write the most frightening ghost story. Mary Shelley came up with the now-famous Frankenstein, which she would finish and publish in 1818, while Lord Byron's unfinished fragment The Burial inspired John William Polidori to write The Vampyre in 1819, effectively launching the still-healthy genre of romantic vampire fiction.

The advancement of agricultural technology more or less ensures that we'll never face a subsistence crisis comparable to that of 1816, whatever severe weather anomalies may come. Even so, it's chilling to examine that year's events, and attitudes toward them, as expressed in surviving journals and works of art.

Image credits: NOAA | Public domain

7 comments; last comment on 05/13/2016

