The Responsive Future

I recently attended Canvas Conference 2013, an event for developers, designers and digital marketers which took place here in Birmingham. The event featured a wide variety of speakers covering a huge range of topics: everything from physical product development to new and emerging tools, as well as your slightly more “traditional” subjects related to web development and marketing.

After a few days spent absorbing the huge amount of information I was exposed to, I noticed a common thread running through quite a few of the talks: that of the future of the responsive web.

Beyond the Breakpoint

My takeaway was that what we refer to as “responsive” is quickly moving beyond creating layouts that reflow across different breakpoints, supporting touch versus non-touch input, or designing for mobile, tablet and desktop.

Websites and apps are increasingly becoming responsive towards all sorts of additional factors: connection speed and availability, location, characteristics of the user’s environment (lighting, ambient audio and so on) – as well as responding, content-wise, to the tastes and personality of the user. New methods of input are becoming more common: speech recognition, motion control and video input. Perhaps the next advance will be thought control.

These factors are becoming increasingly important as we march towards a future of wearable devices, smart cities and the oft-foreseen Internet of Things. Your next project could be running on a wristwatch, a smart television or a car dashboard, and each of these environments imposes its own requirements.

Inspiration

Quite a few talks touched upon this responsive future:

  • The Guardian’s Matt Andrews led the audience through the development of their new responsive website, which uses feature and connection sniffing, among other techniques, to deliver content to more than 300 browsers across 6,000+ devices (a crude illustration of this kind of sniffing follows this list). This is the current state of responsive taken to the extreme: almost any visitor can access its content quickly and easily – important in parts of the world where people may not have the luxury of fast connections and expensive devices. (Matt has a written version of his talk here – a recommended read.)
  • Alice Bartlett of BERG described in great detail the development process behind Little Printer, a tiny web-connected printer which pulls a variety of user-sourced content via BERG Cloud, their custom-built data platform. This cute product is a lovely example of an Internet Thing: a connected, independent data-driven device which delivers content personalised to its user.
  • Microsoft’s Martin Beeby gave a whistle-stop tour of present and future methods of interaction between humans and devices, highlighting topics such as touch, motion control and speech recognition. He showed refinements to current technologies – such as altering the sizes of touch targets depending on how likely they are to be touched next – as well as talking about more offbeat ideas, such as a motion control system which can detect your hands by the sound they make moving through the air. Most importantly, he introduced me to the Google Web Speech API – more on this in just a minute…
  • Harry Hirst of Qubit talked about personalisation: enabling websites – in this case, retailers – to track and build detailed profiles of their visitors. This lets them analyse and predict the tastes and movements of individuals, and display particular content, products or pop-ups which might be more likely to lead to a conversion. So sophisticated is their technology that it can apparently predict exit patterns, throwing up a special offer or other such distraction to prevent a potential customer from bouncing away.
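
As an aside, here’s a rough sketch of the sort of feature and connection sniffing Matt described – my own illustration, not the Guardian’s actual code. The test asset URL and the loadEnhancements function are placeholders:

    // Illustration only – not the Guardian's code. Core content loads
    // for everyone; enhancements load conditionally.

    // Feature sniff: does this browser support the APIs the enhanced
    // experience relies on?
    var cutsTheMustard = 'querySelector' in document &&
                         'addEventListener' in window &&
                         'localStorage' in window;

    // Connection sniff: time the download of a small test asset.
    // ("/test-asset.gif" is a placeholder URL.)
    function testConnection(callback) {
      var start = new Date().getTime();
      var img = new Image();
      img.onload = function () {
        var duration = new Date().getTime() - start;
        callback(duration < 500); // treat under 500ms as "fast enough"
      };
      img.onerror = function () {
        callback(false); // failed to load: assume a poor connection
      };
      img.src = '/test-asset.gif?cachebust=' + start;
    }

    if (cutsTheMustard) {
      testConnection(function (isFast) {
        if (isFast) {
          loadEnhancements(); // placeholder: fetch heavier scripts/images
        }
      });
    }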

It was Harry’s talk in particular which planted the seed of an idea in my mind.

Swearing at Modal Boxes

I agree with the idea of personalisation – there are many good use cases, such as a reference site suggesting which links might be most useful to look at next. However, when it comes to retail, I find personalisation intrusive and irritating.

I understand that a lot of people probably appreciate retail personalisation, whether or not they’re aware it’s happening. But surely it’s also possible to detect the behavioural patterns of people who don’t want to be personalised? A retailer could then make more money from that group, because their experience would be cleaner and easier than a competitor’s. It may be that someone does this already – I just haven’t realised.

My pet hate is sites which display an overlaid modal box presenting an offer, an advert or some such – particularly halfway through reading – and I often find myself swearing at them in frustration. To this end, I’ve produced a simple demo which uses the Google Web Speech API to detect when you’re swearing at a modal box, and to close it if you are. Find it here (and the code here, on GitHub).
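
The core of the approach looks something like the sketch below. To be clear, this is a simplified illustration rather than the demo’s exact code – closeModal and the word list are stand-ins:

    // Simplified sketch – not the demo's exact code. Listen via the
    // Web Speech API and close the modal when a swear word is heard.
    var recognition = new webkitSpeechRecognition(); // Chrome 25+ only
    recognition.continuous = true;     // keep listening across utterances
    recognition.interimResults = true; // react before results are final

    var swearWords = ['darn', 'blast']; // stand-ins for the real list

    recognition.onresult = function (event) {
      // Walk the results added since the last event fired
      for (var i = event.resultIndex; i < event.results.length; i++) {
        var transcript = event.results[i][0].transcript.toLowerCase();
        var heardSwearing = swearWords.some(function (word) {
          return transcript.indexOf(word) !== -1;
        });
        if (heardSwearing) {
          closeModal();       // placeholder: hide the offending overlay
          recognition.stop();
          break;
        }
      }
    };

    recognition.start(); // Chrome prompts for microphone access here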

There are some caveats to the current implementation:

  • At the time of writing, it only works in Chrome 25+.
  • At the beginning of every session you have to enable the microphone – though this also ensures websites aren’t silently recording everything you say. Perhaps in future users will keep whitelists of sites trusted to respond to voice (note: according to this article, pages served over HTTPS only have to request use of the microphone once).
  • Each session lasts for 60 seconds at most, and if no speech is detected within 8–10 seconds the session terminates (one possible workaround is sketched after this list).
  • The speech recognition is currently a bit poor.
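
On the session limits: one workaround – my own suggestion, not something the demo currently does – is to restart recognition whenever a session ends, via the onend handler:

    // Workaround sketch: restart recognition when a session ends, so
    // listening survives the 60-second and silence timeouts. Note that
    // on plain HTTP pages Chrome may re-prompt for the microphone on
    // each restart.
    var keepListening = true;

    recognition.onend = function () {
      if (keepListening) {
        recognition.start(); // begins a fresh session
      }
    };

    // Set keepListening = false (e.g. once the modal has been closed)
    // before calling recognition.stop() to end things for good.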

Though this demo is neither practical nor usable in a real-world sense, it does give a glimpse into a future where a website or app could respond to a user’s ambient behaviour, tailoring both content and UX to the tastes of that user in an entirely transparent manner.

Also, messing about with new technologies like the Web Speech API is awesome fun. If you’d like to have a play yourself then I’d recommend this and this as good starting points.

tweet me your thoughts