


Left2MyOwnDevices

The new stories of social computing are shared here. We're exploring mobile devices, embedded computing, wireless sensor networks, and social business from the perspectives of technology, business, and societal changes.

About Don Dingee

An experienced strategic marketer and editorial professional, and an engineer by education, Don is currently a blogger, speaker, and author on social computing topics, and a marketing strategy consultant. He's had previous gigs at Embedded Computing Design magazine, Motorola, and General Dynamics.

Trillions and Trillions of Devices Everywhere

Posted November 25, 2013 12:00 AM by dondingee

Somewhere in my font of near-useless TV trivia sticks a moment that perfectly defines marketing the Internet of Things. Searching IMDb quickly revealed the source of the memory as "Shadow Chasers", a short-lived 1985 TV series about a team searching for the paranormal.

In the opening minutes of the first episode, we meet tabloid reporter Edgar 'Benny' Benedek (portrayed by Dennis Dugan of "Happy Gilmore" fame) who has a high-concept pitch for a story: Elvis Presley as extraterrestrial. His editor says he needs some kind of proof to run with it, and he needs it fast to make deadline. Benny thinks for a few seconds, and then picks up the phone.

He calls Carl Sagan, astronomer, author, and host of "Cosmos", during the 1980s the most widely watched series in the history of public television. We hear only Benny's side of the conversation as he asks Sagan point-blank about the possibility that Elvis was an alien, then repeats back Sagan's response: he was astonished that anyone would call and ask such a ludicrous question. The resulting headline, emblazoned across the next edition:

Elvis Was An Alien, Sagan Astonished

I'm reminded of that every time I see the beloved "hockey stick". Hyperbolic technology growth projections extrapolated from a shred of fact and voiced with conviction by industry leaders draw in more participants. Take this headline, inspired by the recent TSensors Summit:

Millions are boring, billions are interesting, but trillions get people all worked up if the story is good enough. This assertion is completely plausible even if actual growth doesn't exactly match the hyperbole, as charted below in various projections of worldwide sensor population.
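To get a feel for how quickly a confident extrapolation turns into a hockey stick, here is a minimal sketch in Python. The 10-billion starting base and 60% annual growth rate are illustrative assumptions, not figures from the Summit or from any published projection:

```python
# Minimal sketch: compounding an assumed sensor count forward at an assumed rate.
# Base and growth rate are made up for illustration only.
base_sensors = 10e9      # assumed installed base in the starting year
annual_growth = 0.60     # assumed 60% year-over-year growth

count = base_sensors
for year in range(2013, 2024):
    print(f"{year}: {count:,.0f} sensors")
    count *= 1 + annual_growth
```

Run it and billions become roughly a trillion within a decade; nudge the growth assumption a few points and the crossover year moves dramatically, which is exactly why these charts make such compelling headlines.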

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

13 comments; last comment on 03/13/2014

With a Bluetooth Beacon, and a Coupon, and a Mom in the Aisle

Posted November 07, 2013 12:00 AM by dondingee

In a recent conversation about the Internet of Things (IoT), we got on the topic of monetizing data from devices. Someone asserted there is little value in a single social stream - unless it happens to belong to Ashton Kutcher or someone similar - and that the value instead lies in millions of streams. I strongly disagreed; when one person armed with a mobile device and a social app meets a few of the right embedded devices, a lot can happen.

I know, social is a fad, or for the young, or just a time waster, at least according to many of the old guard I hear from. It's time to expand the vision. If there is an expert on networking as a business, it is John Chambers, CEO of Cisco. In his recent keynote at Interop, Chambers characterized the Internet of Everything as the fourth phase of development of the Internet, following connectivity, e-commerce, and mobile plus social technology.

Most technologists see the value of the IoT in industrial settings laden with sensors, a big-data connectivity problem. Mobile devices are a natural fit for the IoT, because they offer connections to sensors via Wi-Fi and Bluetooth and a gateway to the Internet, allowing personal clusters to be created. But how does social technology fit in? The clue lies in how the combination of technology in the progression Chambers outlined creates possibilities.
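On the device side, very little is needed to make that combination work. As a minimal sketch, here is how an iBeacon-style Bluetooth LE advertisement payload can be decoded: 16-byte proximity UUID, 2-byte major, 2-byte minor, and a calibrated TX power byte. The payload bytes, UUID, and the store/aisle mapping below are hypothetical; a real app would receive the raw bytes from the platform's BLE scanning API.

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Decode iBeacon manufacturer data: company ID, type/length, UUID, major, minor, TX power."""
    company_id, beacon_type, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company_id != 0x004C or beacon_type != 0x02 or length != 0x15:
        return None  # not an iBeacon frame
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack_from(">HH", mfg_data, 20)
    tx_power = struct.unpack_from("b", mfg_data, 24)[0]   # signed dBm measured at 1 m
    return proximity_uuid, major, minor, tx_power

# Hypothetical payload: one retailer UUID, major = store number, minor = aisle.
payload = (bytes.fromhex("4c000215")
           + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
           + struct.pack(">HHb", 1017, 12, -59))
print(parse_ibeacon(payload))
```

A phone that hears this frame knows which store and which aisle its owner is standing in, which is all a coupon-serving social app needs to act.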

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

1 comment; last comment on 11/08/2013

Interrupting the Disruption and Getting Back to Innovation

Posted August 22, 2013 12:00 AM by dondingee

One of the quintessential reads for technology strategists is "The Innovator's Dilemma" by Clayton Christensen. It portrays the concept of disruptive innovation, an unexpected change opening up new markets and ecosystems, unlocking value difficult for most to access in an old market.

Technologists (me included) became enamored with the concept. Christensen captured what we saw occurring around us for decades, starting with the birth of the transistor and everything it spawned. The arrival of the microprocessor and the innovation of the personal computer defined a generation, and made heroes and fortunes.

But somewhere along the path, we - not Christensen, but the rest of us - confused disruptive innovation with plain old disruption.

Disruption is a familiar mechanism in storytelling. Take Star Trek II: The Wrath of Khan. After driving into the nebula to equalize the odds by rendering sensors mostly useless, Kirk predictably becomes impatient with the game of hide and seek. Wanting to regain tactical advantage, he wonders where the next attack will come from. Spock offers a tactical assessment of the opponent.

He is intelligent, but inexperienced.
His pattern indicates … two dimensional thinking.

The next command is Z minus 10,000 meters, engineering-speak for a three-dimensional move. Disruption for the win: a harmless flyover, a move to six o'clock position, fire all weapons, listen as villain quotes Moby Dick, flee from weapon of mass destruction, realize the costs, bury a hero, live to play again.

Clichés for disruption have sprung up all over business literature: "change the game", "think out of the box", "break the mold", "leapfrog the competition" and other rallying cries have been heard in every conference room.

A prime example is Amazon, who surveyed the landscape and aggregated choices into a massive disruption that eventually wiped out bookstore chains, damaged electronics retailers, shifted a market from "web hosting" to "cloud computing", and messed up publishing to the point where the disrupter can now afford to buy the disruptee. Some would say this is disruptive innovation, but it fails the Christensen test: it didn't improve market access in most cases, with the exception of Amazon Web Services and EC2. It just redirected the ecosystem and money towards Amazon.

It paid to be disruptive, for a while, until everyone was trying to do it. If everyone is constantly attempting to disrupt each other in a confused melee, the narrative becomes less like Star Trek with a decisive victory and more like Sons of Anarchy: a lot of activity, but few lasting outcomes - just more instability.

Jeff Bezos may have come to exactly the same conclusion I have: we can't afford to disrupt everything anymore. A newspaper, even one in the seat of political power, seems an odd purchase for an online magnate. But if there is anyone who understands this disruption, it should be the guy who created it. Reviving an institution like the Washington Post, and solving the dilemma of how print and online news can work in harmony, is an immense challenge. I hope he succeeds in the effort, and in the process finds a way to stabilize the publishing industry.

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

1 comment; last comment on 08/22/2013

That’s What I Want, Big Fat Data

Posted August 08, 2013 12:00 AM by dondingee
Pathfinder Tags: Big Data

For generations, analysts have struggled with the term "statistical significance." Traditionally, this meant we didn't have enough data to make a useful inference. Most data starts to take on a discernible shape of predictability - what statisticians refer to as a distribution - after 30 samples. We know small sample size can lead to bad conclusions, an easy trap to be lured into.
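As a rough illustration of that 30-sample rule of thumb (a sketch, not a formal proof), here is a small Python experiment. The distribution parameters are arbitrary; the point is how much sample means wander at different sample sizes:

```python
import random
import statistics

random.seed(1)

def spread_of_sample_means(n, trials=1000):
    """Draw `trials` samples of size n from the same distribution and
    report how much their means vary (standard deviation of the sample means)."""
    means = [statistics.mean(random.gauss(100, 15) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (5, 30, 1000):
    print(f"n={n:>4}: spread of sample means ~ {spread_of_sample_means(n):.2f}")
# Tiny samples scatter widely; by n=30 the estimate is already far more stable,
# and enormous samples buy diminishing returns per additional point.
```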

Big data lies ahead with 30 million or 30 billion samples, and we now have the opposite problem: too much data to make sense out of. Why did we want big data, again? Pop culture heroes like Nate Silver and the NSA will produce headlines with it. For the rest of us, the answer lies in the search for significance.

Data has always been out there, coming and going somewhere, but the connection, storage, and retrieval mechanisms were limited. Our personal, direct encounters and recollection, plus what we could painstakingly store and retrieve on paper or celluloid, helped. The huge breakthrough came in real-time data broadcast from across the world, primarily from newspapers, radio, and TV. We anxiously awaited new data daily to expand, alert, and entertain. The content wheel was spinning.

News was at first significant, but entertainment turned out to be easiest to monetize. With the real-time channels opened by the media outlets, bigger pipes were needed for more content. Networking, telecom, search engine, and storage firms prospered as technology found ways to store, access, transport, and deliver everything people could ever want.

The idea of significance shifted again. Real-time turned into on-demand, and people view the stream of incoming data differently. Breaking news most of the time isn't, and grabbing and holding our attention with so-called news is getting more and more difficult. In the background, everything is captured and stored somewhere, searchable and ready for playback. Data becomes significant when we are ready to consume it, not necessarily when the producer provides it.

An interesting example comes from baseball, which has shifted from a game of skill and intuition to a business of analysis and prediction. The 43rd SABR Convention, the annual gathering of practitioners of sabermetrics, this week produced a fascinating observation:

What does a 50x increase in available data do for baseball? For fantasy baseball, all that data is lifeblood. For those tasked with evaluating and acquiring talent, more data can help. The game itself is still played between the lines, with the unpredictability of human behavior, and the element of chance - there are no guarantees, but data points to the likely possibilities.

The funny part about big data is if someone else has it, and we don't, we could be at a disadvantage. Fear propels us to try to gather data and define uncertainty, even if the attempt opens a giant can of jumping beans. Borrowing the lost words of Sammy Hagar in Van Halen's "Big Fat Money", updated to today's situation where data is money:

"Where's it gonna come from? Who's it gonna go to?

I ain't beatin', but I'm bein' eaten by data, oh yeah, big, big data.

Now, gimme, gimme, gimme, gimme, gimme some of that big data."

The whole point of statistics is to improve our chances of making the right choice at the right moment. Winning means blending our experience with the correct and timely analysis of all that stored data combined with a rapid and accurate evaluation of new data coming in, much of which is increasingly routine and thereby insignificant.

We are now building the Internet of Things, generating even bigger data - so much data, we won't ever be able to look at it all. Billions of devices will produce a non-stop stream of stuff, much of it whisked off to some disk drive, and data scientists will be bringing bigger and bigger data analytics tools and algorithms to assess it. Significance will come from exception handling, spotting new samples that don't fit an understood trend of goodness, and alerting us to do something.
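That exception-handling idea can be surprisingly simple in code. Here is a minimal sketch that keeps a rolling baseline for a sensor stream and flags samples that fall outside the understood trend; the window, threshold, and temperature readings are illustrative assumptions, not any particular product's algorithm:

```python
from collections import deque
from statistics import mean, stdev

def watch(stream, window=30, threshold=3.0):
    """Yield (sample, flagged) pairs; flag samples more than `threshold`
    standard deviations away from the rolling baseline."""
    history = deque(maxlen=window)
    for sample in stream:
        flagged = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            flagged = sigma > 0 and abs(sample - mu) > threshold * sigma
        history.append(sample)
        yield sample, flagged

# Hypothetical temperature feed: steady around 21 C, then one bad reading.
readings = [21.0 + 0.1 * (i % 5) for i in range(60)] + [35.0]
for value, alarm in watch(readings):
    if alarm:
        print(f"exception: {value} does not fit the expected trend")
```

The routine readings flow past unremarked; only the sample that breaks the trend surfaces as something worth our attention.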

In the age of zettabytes and beyond, big data isn't what we're looking for. It's mostly "telef***in' teletrash" as Hagar put it, if no analysis is applied. We are looking for small, somewhere in the big being saved and statistically studied. We are searching for one piece of information that could tell us our health is changing and we should seek help, or we should change speed or direction to avoid a collision, or when to prevent a brownout by dialing back a few thousand air conditioners a couple of degrees, or how to spot some malfeasants based on who they have been calling or texting.

The Internet of Things will cause another shift, back to little bits of otherwise big data being significant when they say there is a problem developing - whether we notice it, or not. Big data, applied across billions of devices and a timeline of similar stored experiences, will bring little data back to us when and where we need it.

Editor's Note: CR4 would like to thank Don Dingee for sharing this blog entry. You can see the original version of this post here.


Smooth Seas Do Not Make Skillful Automatons

Posted July 18, 2013 12:00 AM by dondingee

We usually associate technological mishaps with extenuating circumstances: bad weather, mechanical or electronic failure, poor decision making by software or humans. We tend to seek identification of a single, overriding root cause, thinking if that were isolated and dealt with, system failure would be avoided.

It has been demonstrated time and time again that major accidents are typically the end result of a sequence of smaller incidents. Individually, these incidents are often handled without consequence, but when strung together in rapid-fire fashion they accumulate and amplify into catastrophic trouble. The difficulty is this: humans generally trust the machine until the unthinkable worst-case scenario is joined, already in progress.

If a process is well understood and follows a fixed decision tree, it can be described by mathematics and thereby controlled. In the domain of traditional industrial automation and process control, automatons - simple control mechanisms, intelligent machines, robots, and the like - excel because, as the short sketch after this list suggests, they are good at:

  • Repeating a programmed sequence;
  • Ignoring or compensating for variations in inputs;
  • Maintaining a steady-state process;
  • Deciding quickly and following a defined course of action.
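A minimal sketch of the kind of steady-state loop those four bullets describe, assuming a made-up tank-heating process; the setpoint, gain, and fake process model are illustrative, not from any real controller:

```python
# Minimal steady-state control loop: read a sensor, compare to a setpoint,
# apply a proportional correction, repeat. All values are assumed for illustration.
SETPOINT = 75.0   # desired temperature, degrees C
GAIN = 0.2        # proportional gain

def read_temperature(state):
    return state["temp"]

def apply_heat(state, output):
    # Fake process: heat raises temperature, a fixed ambient loss pulls it down.
    state["temp"] += output - 0.5

state = {"temp": 60.0}
for step in range(50):
    error = SETPOINT - read_temperature(state)   # compensate for input variation
    output = max(0.0, GAIN * error)              # defined course of action
    apply_heat(state, output)
print(f"after 50 steps: {state['temp']:.1f} C")  # settles just below the setpoint
                                                 # (classic proportional droop)
```

Nothing in that loop recalls prior experience or improvises; it simply repeats the programmed sequence and holds the process steady, which is precisely where automatons shine.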

Smooth seas present little challenge, but automatons don't do as well when they encounter dark seas, rough roads, or violent skies. As science fiction teaches, there are several methods to increase the stress level and defeat most automatons:

  • Introduce exceptions simultaneously;
  • Remove external references of measurement;
  • Deprive the machine of communication or power;
  • Change the rules of the game entirely.

When too many things go wrong, automatons are lost. In these conditions, humans excel because they can recall prior experience, adapt, infer, extrapolate, and operate on "best guess" information to fill gaps. This is why automating fluid situations like healthcare, combat, emergency response, and others generally falls short.

Transportation is a gray area for automation, rife with outside perturbations and variable sequences that can push automatons over the edge, leaving a mess for the humans who intervene to try and correct. Systems with a notion of "traffic control", external resources available to aid both the machines and humans, have achieved remarkable rates of safe operation overall, spectacular failures notwithstanding.

Short-haul rail systems have seen success in automation by eliminating unpredictability as much as possible. A good example is elevating a train and allowing only pre-programmed stops, such as the PHX Sky Train at Phoenix Sky Harbor Airport. This works well for short routes with specific tasks on a clock: move from station A to station B, open the doors for N seconds, proceed to the next stop. With minimal risk from things like people, cars, or opposing train traffic presenting a hazard, the automaton can do what it does best: keep the schedule.
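The control logic for that kind of route can be almost embarrassingly small. A minimal sketch, with generic station names and made-up travel and dwell times standing in for a real schedule:

```python
import time

# Hypothetical pre-programmed route: (station, door-open dwell in seconds).
ROUTE = [("Station B", 30), ("Station C", 30), ("Station D", 45)]

def run_route(route, travel_time=90):
    """Repeat the fixed sequence: travel, stop, open doors for N seconds, continue."""
    for station, dwell in route:
        print(f"departing for {station}")
        time.sleep(travel_time)      # move from one station to the next
        print(f"arrived at {station}; doors open")
        time.sleep(dwell)            # hold doors for the scheduled dwell
        print("doors closed")

# run_route(ROUTE)   # keep the schedule, trip after trip
```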

Longer rail routes have a rule-breaker: high-speed trains do not stop easily. Even if brakes are applied and work properly, trains often continue rolling for up to a mile. Some say we have the technology for automated trains, but recognizing problems and reacting quickly enough can be challenging even for human operators. With hundreds or thousands of miles of track, often in remote areas, monitoring is difficult and expensive. Trains are also prone to run away due to failure or improper procedure, which this weekend's rail disaster in Lac-Megantic, Quebec illustrates.
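A back-of-the-envelope check on that "up to a mile" figure, using the standard stopping-distance relation d = v^2 / (2a) and assuming a loaded freight train at 60 mph with an emergency deceleration around 0.2 m/s^2 (both assumed values, for illustration only):

```python
# Rough stopping-distance estimate for a heavy train; speed and deceleration assumed.
speed_mph = 60
decel = 0.2                       # m/s^2, assumed emergency braking deceleration
v = speed_mph * 0.44704           # convert mph to m/s
distance_m = v ** 2 / (2 * decel)
print(f"{distance_m:.0f} m, about {distance_m / 1609.34:.1f} miles to stop")
# => roughly 1800 m, on the order of a mile, before the train comes to rest
```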

Air travel has become highly automated, with an extensive system of traffic control, monitoring, and communication. We also have the technology for remotely piloted aircraft, appropriate for situations where putting humans in harm's way is risky and unnecessary. But the sky is an unforgiving place, and the dynamics of larger aircraft mean they don't always do what they are asked to do immediately by pilots or automated control systems. The theory of "big sky, little plane" usually holds up in level flight out in airspace, but eventually air traffic comes together at an airport where problems occur quickly and tend to accelerate. A great post describing airliner instrument approaches illustrates both the complexity of the tasks a pilot faces and the incredible range of things that must be accounted for in automation and traffic control.

Glide slope antenna array, Runway 09R, Hannover, GER - courtesy Wikimedia Commons

In another tragic example from this weekend, we have the "hard landing" of Asiana Airlines flight 214. One fact that has emerged is that the instrument landing system (ILS) glide slope transmitter serving runway 28L at SFO was out of service for several weeks during construction. This suggests the pilot was on a manual approach without an external reference to provide warnings, which should not have been a problem given pristine weather. Evidence is already coming in suggesting the pilot recognized several issues in progress - low airspeed, not enough throttle, bad angle of attack, excessive descent rate - too late to save the aircraft, but may have averted a larger disaster by taking action seconds before impact and barely clearing the seawall. One wonders if an automated approach would have had different results.
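For a sense of what "excessive descent rate" means against a normal approach, a standard 3-degree glide path ties descent rate directly to ground speed. A quick worked calculation; the 137-knot reference speed is an assumption for illustration, roughly typical for a large airliner on final:

```python
import math

# Descent rate on a standard 3-degree glide path: rate = ground speed * tan(3 deg).
ground_speed_kt = 137                      # assumed final-approach ground speed
fpm_per_knot = 6076.12 / 60                # feet per minute per knot
rate_fpm = ground_speed_kt * fpm_per_knot * math.tan(math.radians(3))
print(f"~{rate_fpm:.0f} ft/min")           # around 725 ft/min; the familiar
                                           # pilot rule of thumb is ground speed x 5
```

Sink rates well above that number, close to the ground and with decaying airspeed, leave very little time for either a human or an automated system to recover.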

The quest for the autonomous car is gathering steam, with tech companies like Google in the mix - but the reality differs from the headlines. Contests like the DARPA Grand Challenge have shown that the basic elements of technology are possible, if not yet repeatable or affordable. Conceivably, with technology like navigation, collision avoidance and intelligent spacing, freeway traffic may benefit from automation, even if the ride is unnerving.

Relatively wide-open interstate traffic is very different from rush hour, and completely different from congested neighborhood traffic. Distractions ranging from smartphones to pedestrians, dogs, and cats - not to mention other drivers - abound, and predictability is near zero. This is likely to be an important battleground for technology in coming years. As Google points out, the navigation issues are solvable with the same technology behind Google Maps, which is the reason they are investing so heavily. Navigation is only a small portion of the technology challenge, however. We are already seeing a significant increase in warning and avoidance systems, powered by advances in embedded vision that reduce size and cost.
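Much of that warning-and-avoidance logic starts from a very simple quantity: time to collision, the gap to the vehicle ahead divided by the closing speed. A minimal sketch with assumed numbers; real systems fuse radar, camera, and vehicle data, and the 2.5-second warning threshold here is illustrative:

```python
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Seconds until impact if neither vehicle changes speed; None if not closing."""
    closing = own_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else None

# Hypothetical freeway scenario: 15 m gap, 30 m/s own car, 22 m/s car ahead.
ttc = time_to_collision(gap_m=15, own_speed_mps=30, lead_speed_mps=22)
if ttc is not None and ttc < 2.5:          # assumed warning threshold
    print(f"forward collision warning: {ttc:.1f} s to impact")
```

The arithmetic is trivial; the hard part is sensing the gap and the speeds reliably, in rain and glare and clutter, fast enough for the warning to matter.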

In a stark contrast to the other modes of transportation, the highways and byways lack any notion of "traffic control" beyond simplistic stop lights, passive video, and police patrols. There is no overseeing agency that controls and routes capacity, similar to the air or rail traffic control system - the roads are pretty much a free-for-all with a few toll-collecting exceptions. Services like Inrix are making headway in traffic measurement, but there will have to be major advances in infrastructure allowing cars and roadways to communicate directly, pervasively, and instantaneously. Speed, acceleration, direction, road conditions, visibility, and many other variables will factor into even a straightforward scenario. This is a huge leap from the infotainment and navigation systems currently available. In fact, those toll roads may be the first places we see some autonomous infrastructure introduced, since cars can be identified and tracked using technology already in place.

All in all, we have amazingly safe transportation systems today given the volume of traffic that moves daily. We have the technology to do some incredible things in controlled conditions, but the leap in autonomous transportation and other areas will take a much deeper understanding of how to recognize and react quickly and safely to hazards and sudden changes, not leaving out-of-control situations for humans to save themselves.

Editor's Note: CR4 would like to thank Don Dingee for sharing this blog entry. You can see the original version of this post here.

3 comments; last comment on 07/22/2013


