The new stories of social computing are shared here. We're exploring mobile devices, embedded computing, wireless sensor networks, and social business from the perspectives of technology, business, and societal changes.

About Don Dingee

An experienced strategic marketer and editorial professional, and an engineer by education, Don is currently a blogger, speaker, and author on social computing topics, and a marketing strategy consultant. He's had previous gigs at Embedded Computing Design magazine, Motorola, and General Dynamics.

Trillions and Trillions of Devices Everywhere

Posted November 25, 2013 12:00 AM by dondingee

Somewhere in my font of near-useless TV trivia sticks a moment that perfectly defines marketing the Internet of Things. Searching IMDb quickly revealed the source of the memory as "Shadow Chasers", a short-lived 1985 TV series about a team searching for the paranormal.

In the opening minutes of the first episode, we meet tabloid reporter Edgar 'Benny' Benedek (portrayed by Dennis Dugan of "Happy Gilmore" fame) who has a high-concept pitch for a story: Elvis Presley as extraterrestrial. His editor says he needs some kind of proof to run with it, and he needs it fast to make deadline. Benny thinks for a few seconds, and then picks up the phone.

He calls Carl Sagan, astronomer, author, and host of "Cosmos", during the 1980s the most widely watched series in the history of public television. We only hear Benny's side of the conversation, asking Sagan point-blank about the possibility that Elvis was an alien and verifying Sagan's response that he was astonished anyone would call and ask such a ludicrous question. The resulting headline emblazoned across the next edition:

Elvis Was An Alien, Sagan Astonished

I'm reminded of that every time I see the beloved "hockey stick". Hyperbolic technology growth projections extrapolated from a shred of fact and voiced with conviction by industry leaders draw in more participants. Take this headline, inspired by the recent TSensors Summit:

Millions are boring, billions are interesting, but trillions get people all worked up if the story is good enough. This assertion is completely plausible even if actual growth doesn't exactly match the hyperbole, as charted below in various projections of worldwide sensor population.

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

13 comments; last comment on 03/13/2014

With a Bluetooth Beacon, and a Coupon, and a Mom in the Aisle

Posted November 07, 2013 12:00 AM by dondingee

In a recent conversation about the Internet of Things (IoT), we got on the topic of monetizing data from devices. Someone asserted there is little value in one social stream - unless it happens to be that of Ashton Kutcher or similar - but instead the value is in millions of streams. I strongly disagreed with that; when one person armed with a mobile device and a social app meets a few of the right embedded devices, a lot can happen.

I know, social is a fad, or for the young, or just a time waster, at least according to many of the old guard I hear from. It's time to expand the vision. If there is an expert on networking as a business, it is John Chambers, CEO of Cisco. In his recent keynote at Interop, Chambers characterized the Internet of Everything as the fourth phase of development of the Internet, following connectivity, e-commerce, and mobile plus social technology.

Most technologists see the value of the IoT in industrial settings laden with sensors, a big-data connectivity problem. Mobile devices are a natural fit for the IoT, because they offer connections to sensors via Wi-Fi and Bluetooth and a gateway to the Internet, allowing personal clusters to be created. But how does social technology fit in? The clue lies in how the combination of technology in the progression Chambers outlined creates possibilities.

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

1 comment; last comment on 11/08/2013

Interrupting the Disruption and Getting Back to Innovation

Posted August 22, 2013 12:00 AM by dondingee

One of the quintessential reads for technology strategists is "The Innovator's Dilemma" by Clayton Christensen. It portrays the concept of disruptive innovation, an unexpected change opening up new markets and ecosystems, unlocking value difficult for most to access in an old market.

Technologists (me included) became enamored with the concept. Christensen captured what we saw occurring around us for decades, starting with the birth of the transistor and everything it spawned. The arrival of the microprocessor and the innovation of the personal computer defined a generation, and made heroes and fortunes.

But somewhere along the path, we - not Christensen, but the rest of us - confused disruptive innovation with plain old disruption.

Disruption is a familiar mechanism in storytelling. Take Star Trek II: The Wrath of Khan. After driving into the nebula to equalize the odds by rendering sensors mostly useless, Kirk predictably becomes impatient with the game of hide and seek. Wanting to regain tactical advantage, he wonders where the next attack will come from. Spock offers a tactical assessment of the opponent.

He is intelligent, but inexperienced.
His pattern indicates … two dimensional thinking.

The next command is Z minus 10,000 meters, engineering speak for a three dimensional move. Disruption for the win: a harmless flyover, a move to six o'clock position, fire all weapons, listen as villain quotes Moby Dick, flee from weapon of mass destruction, realize the costs, bury a hero, live to play again.

Clichés for disruption have sprung up all over business literature: "change the game", "think out of the box", "break the mold", "leapfrog the competition" and other rallying cries have been heard in every conference room.

A prime example is Amazon, who surveyed the landscape and aggregated choices into a massive disruption that eventually wiped out bookstore chains, damaged electronics retailers, shifted a market from "web hosting" to "cloud computing", and messed up publishing to the point where the disrupter can now afford to buy the disruptee. Some would say this is disruptive innovation, but it fails the Christensen test: it didn't improve market access in most cases, with the exception of Amazon Web Services and EC2. It just redirected the ecosystem and money towards Amazon.

It paid to be disruptive, for a while, until everyone was trying to do it. If everyone is constantly attempting to disrupt each other in a confused melee, the narrative becomes less like Star Trek with a decisive victory and more like Sons of Anarchy: a lot of activity, but few lasting outcomes - just more instability.

Jeff Bezos may have come to exactly the same conclusion I have: we can't afford to disrupt everything anymore. A newspaper, even one in the seat of political power, seems an odd purchase for an online magnate. But if there is anyone who understands this disruption, it should be the guy who created it. Reviving an institution like the Washington Post, and solving the dilemma of how print and online news can work in harmony, is an immense challenge. I hope he succeeds in the effort, and in the process finds a way to stabilize the publishing industry.

Continue reading this post.

Editor's Note: CR4 would like to thank Don Dingee for sharing this post. You can read the original on his blog.

1 comment; last comment on 08/22/2013

Smooth Seas Do Not Make Skillful Automatons

Posted July 18, 2013 12:00 AM by dondingee

We usually associate technological mishaps with extenuating circumstances: bad weather, mechanical or electronic failure, poor decision making by software or humans. We tend to seek identification of a single, overriding root cause, thinking if that were isolated and dealt with, system failure would be avoided.

It has been demonstrated time and time again that major accidents are typically the end result of a sequence of smaller incidents. Individually, these incidents are often handled without consequence, but when strung together in rapid-fire fashion they accumulate and amplify into catastrophic trouble. The difficulty is this: humans generally trust the machine until the unthinkable worst-case scenario is joined, already in progress.

If a process is well understood and follows a fixed decision tree, it can be described by mathematics and thereby controlled. In the domain of traditional industrial automation and process control, automatons - simple control mechanisms, intelligent machines, robots, and the like - excel because they are good at:

  • Repeating a programmed sequence;
  • Ignoring or compensating for variations in inputs;
  • Maintaining a steady-state process;
  • Deciding quickly and following a defined course of action.
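The strengths above can be sketched as a minimal fixed-decision-tree controller. Here is a hypothetical thermostat in Python; the setpoint, deadband, and readings are all illustrative, not drawn from any real system:

```python
# Minimal sketch of a fixed decision tree controlling a steady-state
# process: a hypothetical thermostat with a setpoint and a deadband.
# All names and values are illustrative.

SETPOINT = 21.0   # target temperature, degrees C
DEADBAND = 0.5    # tolerated deviation before acting

def decide(temperature):
    """Fixed decision tree: the same input always yields the same action."""
    if temperature < SETPOINT - DEADBAND:
        return "heat"
    if temperature > SETPOINT + DEADBAND:
        return "cool"
    return "hold"   # within tolerance: maintain the steady state

# The automaton repeats its programmed sequence, compensating only for
# variations large enough to leave the deadband.
readings = [20.9, 21.2, 20.3, 22.1, 21.0]
actions = [decide(t) for t in readings]
print(actions)
```

The point of the sketch is the limitation it makes visible: every input falls into one of three pre-decided branches, and any situation outside those branches simply does not exist for the automaton.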

Smooth seas present little challenge, but automatons don't do as well when they encounter dark seas, rough roads, or violent skies. As science fiction teaches, there are several methods to increase the stress level and defeat most automatons:

  • Introduce exceptions simultaneously;
  • Remove external references of measurement;
  • Deprive the machine of communication or power;
  • Change the rules of the game entirely.

When too many things go wrong, automatons are lost. In these conditions, humans excel because they can recall prior experience, adapt, infer, extrapolate, and operate on "best guess" information to fill gaps. This is why automating fluid situations like healthcare, combat, emergency response, and others generally falls short.

Transportation is a gray area for automation, rife with outside perturbations and variable sequences that can push automatons over the edge, leaving a mess for the humans who intervene to try to correct it. Systems with a notion of "traffic control" - external resources available to aid both the machines and humans - have achieved remarkable rates of safe operation overall, spectacular failures notwithstanding.

Short-haul rail systems have seen success in automation by eliminating unpredictability as much as possible. A good example is elevating a train and allowing only pre-programmed stops, such as the PHX Sky Train at Phoenix Sky Harbor Airport. This works well for short routes with specific tasks on a clock: move from station A to station B, open the doors for N seconds, proceed to the next stop. With minimal risk of hazards like people, cars, or opposing train traffic, the automaton can do what it does best: keep the schedule.
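That fixed schedule amounts to a simple loop. A minimal sketch in Python, with hypothetical station names and dwell times (not the PHX Sky Train's actual route or timings):

```python
# Sketch of a pre-programmed short-haul rail schedule: move from station
# to station, hold the doors open for a fixed dwell, and repeat. Station
# names and timings are illustrative.

ROUTE = [
    ("Station A", 40),        # (station, door-open dwell in seconds)
    ("Station B", 40),
    ("Economy Parking", 30),
]

def run_loop(route, laps=1):
    """Return the controller's action log for a fixed number of laps."""
    log = []
    for _ in range(laps):
        for station, dwell in route:
            log.append(f"move to {station}")
            log.append(f"open doors {dwell}s")
            log.append("close doors")
    return log

log = run_loop(ROUTE)
print(log[0], "->", log[-1])
```

Every hazard the elevated guideway designs out is one branch this loop never has to contain, which is exactly why the approach works only on short, isolated routes.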

Longer rail routes have a rule-breaker: high-speed trains do not stop easily. Even if brakes are applied and work properly, trains often continue rolling for up to a mile. Some say we have the technology for automated trains, but recognizing problems and reacting quickly enough can be challenging even for human operators. With hundreds or thousands of miles of track, often in remote areas, monitoring is difficult and expensive. Trains are also prone to run away due to failure or improper procedure, as this weekend's rail disaster in Lac-Mégantic, Quebec illustrates.

Air travel has become highly automated, with an extensive system of traffic control, monitoring, and communication. We also have the technology for remotely piloted aircraft, appropriate for situations where putting humans in harm's way is risky and unnecessary. But the sky is an unforgiving place, and the dynamics of larger aircraft mean they don't always do what they are asked to do immediately by pilots or automated control systems. The theory of "big sky, little plane" usually holds up in level flight out in airspace, but eventually air traffic comes together at an airport where problems occur quickly and tend to accelerate. A great post describing airliner instrument approaches illustrates both the complexity of the tasks a pilot faces and the incredible range of things that must be accounted for in automation and traffic control.

Glide slope antenna array, Runway 09R, Hannover, GER - courtesy Wikimedia Commons

In another tragic example from this weekend, we have the "hard landing" of Asiana Airlines flight 214. One fact that has emerged is that the instrument landing system (ILS) glide slope transmitter serving runway 28L at SFO was out of service for several weeks during construction. This suggests the pilot was on a manual approach without an external reference to provide warnings, which should not have been a problem given pristine weather. Evidence already coming in suggests the pilot may have realized several issues in progress - low airspeed, not enough throttle, bad angle of attack, excessive descent rate - too late to save the aircraft, but may have averted a larger disaster by taking action seconds before impact and barely clearing the seawall. One wonders if an automated approach would have had different results.

The quest for the autonomous car is gathering steam, with tech companies like Google in the mix - but the reality differs from the headlines. Contests like the DARPA Grand Challenge have shown that the basic elements of technology are possible, if not yet repeatable or affordable. Conceivably, with technology like navigation, collision avoidance and intelligent spacing, freeway traffic may benefit from automation, even if the ride is unnerving.

Relatively wide-open interstate traffic is very different from rush hour, and completely different from congested neighborhood traffic. Distractions ranging from smartphones to pedestrians, dogs, and cats - not to mention other drivers - abound, and predictability is near zero. This is likely to be an important battleground for technology in coming years. As Google points out, navigation issues are solvable with the same technology behind Google Maps, which is one reason they are investing so heavily. Navigation is only a small portion of the technology challenge, however. We are already seeing a significant increase in warning and avoidance systems, powered by advances in embedded vision that reduce size and cost.

In stark contrast to the other modes of transportation, the highways and byways lack any notion of "traffic control" beyond simplistic stop lights, passive video, and police patrols. There is no overseeing agency that controls and routes capacity, similar to the air or rail traffic control systems - the roads are pretty much a free-for-all with a few toll-collecting exceptions. Services like Inrix are making headway in traffic measurement, but there will have to be major advances in infrastructure allowing cars and roadways to communicate directly, pervasively, and instantaneously. Speed, acceleration, direction, road conditions, visibility, and many other variables will factor into even a straightforward scenario. This is a huge leap from the infotainment and navigation systems currently available. In fact, those toll roads may be the first places we see some autonomous infrastructure introduced, since cars can be identified and tracked using technology already in place.

All in all, we have amazingly safe transportation systems today given the volume of traffic that moves daily. We have the technology to do some incredible things in controlled conditions, but the leap to autonomous transportation and other areas will take a much deeper understanding of how to recognize and react quickly and safely to hazards and sudden changes, rather than leaving out-of-control situations for humans to save themselves.

Editor's Note: CR4 would like to thank Don Dingee for sharing this blog entry. You can see the original version of this post here.

3 comments; last comment on 07/22/2013

When the Open API Changes, Look Out

Posted May 17, 2013 12:00 AM by dondingee
Pathfinder Tags: api Apps social media Twitter

"We're opening our API." Four words guaranteed to get developers really excited and get instant press coverage. By allowing programmers to freely access the application programming interface for something, a whole wide world of applications and data sharing opens up when a vendor opens their API.

I don't think it's overstated to say the entire social media revolution is powered by open APIs. The openness of Facebook, Google, Salesforce, Twitter and other platforms has enabled those services to become ingrained into many creative applications.

Embedded devices have the same opportunity. The entire Android movement means many devices, not just smartphones, have access to the operating system and applications developed for it. Libraries like OpenCV allow vision applications to be developed quickly on a variety of platforms. Vendors can also choose to open up a specific device for developers - for instance, the folks at Jawbone are in the process of opening their Up API, allowing their data to be cross-pollinated with other applications.

With popularity comes a price, however. An open API is an easy target for attackers, and security issues begin to arise as use widens. Innovation begins to suffer as well, because while both the vendor and the developer community would like to do the next cool thing, breaking the API and existing applications can be a problem. Managing the change forward so everyone doesn't freak out is a delicate exercise.
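One common way to manage that change forward is to version the API, so old clients keep their original contract while new ones opt in to the new shape. A minimal sketch in Python, with entirely hypothetical endpoints and response fields (this is not Twitter's or any vendor's actual API):

```python
# Sketch of versioned API handlers: v1 keeps its original response shape
# frozen so existing applications don't break, while v2 introduces a new
# shape for clients that opt in. All endpoints and fields are hypothetical.

def handle_v1(user_id):
    # Original contract: flat response, frozen for existing clients.
    return {"id": user_id, "name": "example"}

def handle_v2(user_id):
    # New contract: nested response plus a version marker.
    return {"user": {"id": user_id, "name": "example"}, "api_version": 2}

HANDLERS = {"v1": handle_v1, "v2": handle_v2}

def dispatch(version, user_id):
    """Route a request to the handler for its declared API version."""
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError(f"unsupported API version: {version}")
    return handler(user_id)

print(dispatch("v1", 42))  # old clients still get the v1 shape
print(dispatch("v2", 42))
```

The delicate part is not the dispatch itself but the policy around it: how long v1 stays supported, and how loudly its retirement is announced, is exactly where vendors tend to upset their developer communities.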

Freaking out is exactly what Twitter users are doing right now.

Editor's Note: CR4 would like to thank Don Dingee for sharing this blog entry. You can finish reading this post here.

3 comments; last comment on 05/19/2013
