Wearable technology is in its infancy. But don’t be fooled: the advent of wearables will fundamentally change the job of the application developer. Here’s how.

There’s no doubt about it: wearable technology is picking up steam. But as wearables gain traction with consumers and businesses, application developers will need to tackle a huge new challenge: context.

What do I mean by ‘context’? It’s the notion – unique to wearable technology – that applications need to be aware of, and respond to, the situation of the wearer. Just received a new email message? Great. But do you want to splash an alert in front of your user if she’s hurtling down a crowded city street on her bicycle? Text message? Great – but do you want to buzz your user’s watch if the heart rate monitor suggests that he’s asleep?

These kinds of conundrums are a new consideration for application developers accustomed to writing for devices – ‘endpoints’ that are presumed to be objects that are distinct from their owner and, often, stationary.

Google has already called attention to this in its developer previews of Android Wear – that company’s attempt to extend its Android mobile phone OS to wearables. Google has encouraged wearable developers to be “good citizens.” “With great power comes great responsibility,” Google’s Justin Koh reminds would-be developers in a Google video.

“It’s extremely important that you be considerate of when and how you notify a user….” Developers are strongly encouraged to make notifications and other interactions between the wearable device and its wearer as ‘contextually relevant as possible.’ Google has provided APIs (application programming interfaces) to help with this. For example, Koh notes that developers can use APIs in Google Play Services to set up a geofence that will make sure the wearer is in a specific location (e.g. “home”) before displaying certain information.
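The geofence idea can be sketched in plain Java, independent of the actual Google Play Services Geofencing API. This is a minimal illustration of the underlying check – a great-circle distance test against a “home” point and radius – and the class name, method names, and coordinates are all assumptions for the sake of the example, not part of any Google SDK:

```java
public class GeofenceCheck {
    static final double EARTH_RADIUS_M = 6371000.0;

    // Great-circle distance between two lat/lon points (haversine formula).
    static double distanceMeters(double lat1, double lon1,
                                 double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Only surface a notification if the wearer is inside the "home" fence.
    static boolean shouldShowAtHome(double userLat, double userLon,
                                    double homeLat, double homeLon,
                                    double radiusMeters) {
        return distanceMeters(userLat, userLon, homeLat, homeLon) <= radiusMeters;
    }
}
```

In a real Android Wear app this decision would be delegated to the platform’s geofencing service, which handles location updates and battery efficiency for you; the point here is only the shape of the contextual gate that sits in front of a notification.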

Or, motion detection APIs for Wear can be used to surface or hide notifications when the wearer is performing certain actions, like bicycling. Google is having fun with that: a promotional video shows a watch prompting its dancing wearer to look up the name of the song she’s dancing to. But it’s likely that the activity detection APIs will be just as important as a safety feature of Android Wear devices.
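The kind of decision logic this implies can be sketched in a few lines of plain Java. To be clear, this is a hypothetical illustration, not Google’s API: the activity values, the heart-rate threshold, and the method names below are invented assumptions standing in for whatever signals an activity-recognition service actually reports:

```java
public class ContextGate {
    enum DetectedActivity { STILL, WALKING, CYCLING, DRIVING }

    // Decide whether to interrupt the wearer, given a detected activity
    // and a heart-rate reading. Thresholds are illustrative only.
    static boolean shouldNotify(DetectedActivity activity, int heartRateBpm) {
        // Never splash alerts while the wearer is moving at speed.
        if (activity == DetectedActivity.CYCLING
                || activity == DetectedActivity.DRIVING) {
            return false;
        }
        // A low resting heart rate may suggest sleep; hold the buzz.
        if (heartRateBpm > 0 && heartRateBpm < 50) {
            return false;
        }
        return true;
    }
}
```

The interesting design question is not the code – it’s choosing the defaults. Should a missed notification queue up for later, or drop silently? That is exactly the sort of contextual judgment the rest of this piece argues developers now have to make.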

The problem, of course, is that considerations like these require a much deeper understanding of how humans behave in a much wider range of contexts than just ‘sitting at a desk.’ Anyone who has pulled up behind a car whose driver is engaged in a cell phone conversation or (God forbid) texting appreciates the dangers posed by portable devices whose design doesn’t take context into consideration.

In the very near future, application design decisions will need to do a much better job of balancing feature development against an almost limitless range of use contexts as well as considerations of personal safety. Sensors will no longer be simply an excuse for adding features – they’ll be the developer’s lifeline to the wearer: a source of real-time information about the context the user is in. That data will (or should) affect the behavior of the wearable application.

It’s also likely that wearable device makers will need to give some thought to fields such as cognitive science and even sociology in designing their products. Google Glass is a hugely important development: the first commercially available consumer technology that attempts to break down the wall between the device and the wearer. But recent stories about Glass wearers (derisively referred to as “Glassholes”) being harassed and even attacked by irate, privacy-minded crowds suggest that maybe the public isn’t ready to embrace the ‘everyone is filming everyone all the time’ model of human social interaction. That matters.

Or, consider that most wearable devices have settled on dings and vibrations to notify users of events (new email, calendar appointment, etc.). But that’s an artifact of the technology that can be miniaturized and implanted in a small device, not of wearers’ preferences about how best to be informed of something. Shouldn’t we at least start with an idea of what customers want – even if that’s different from what has come before? We’ve learned a lot since the days of Clippy, Microsoft’s hateful talking paperclip. But we haven’t learned everything.

To be clear: wearable tech is still in its infancy. For all the hype, Android Wear is just a platform for relaying alerts and other data from your Android phone to a compatible Android watch. That’s cool – but hardly earth-shattering. Still, it’s a mistake to discount the movement toward wearable tech as a fad, or wearable devices as the mobile phone’s poor cousin. The migration to wearables will change the way we live, work, and play. But it’s a change that requires some thought and planning by the software development community to get right. It’s far from clear that will happen.

Paul Roberts is an experienced technology writer and editor who has spent the last decade covering hacking, cyber threats, and information technology security, including senior positions as a writer, editor and industry analyst. His work has appeared on NPR’s Marketplace Tech Report, The Boston Globe, Salon.com, Fortune Small Business, as well as ZDNet, Computerworld, InfoWorld, eWeek, CIO, CSO and ITWorld.com. He was, yes, a guest on The Oprah Show — but that’s a long story. You can follow Paul on Twitter or visit his website The Security Ledger.
