From the Front Lines of Multi-Device Web Design by Luke Wroblewski, #wcsf14

Notes from Luke Wroblewski’s talk on “From the Front Lines of Multi-Device Web Design” at the 2014 WordCamp conference in San Francisco. Official description:

It’s hard enough to design a great mobile or Web site, but what about experiences that span these devices and more? Join Luke for a set of lessons learned designing Web products that attempt to embrace simultaneous and sequential multi-device use. What worked and, more importantly, what didn’t?

My notes:

  • He started at NCSA in 1996, an exciting time: lots of PC sales, and people were coming online then.
  • We built sites because people could access them through their PCs
  • Now PC sales are in decline, a bigger drop than the dotcom bust
  • Smartphone sales go way up
  • Out of 7B people, 4.5B are literate. Most of them have a mobile device; within the next three years most of them will have smartphones
  • It is no longer a PC-driven world.
  • Screen sizes now span from very small to very big: eye and palm up to TV
  • Surveys (a product Luke built) can look good at any smartphone screen size
  • You can watch results in real time
  • Multi-device design
  • They deal with input (not just output) and posture: how people relate to their devices
  • Input: how people interact with their devices doesn’t just depend on screen size
  • Small screen, single hand: designing for one-thumb interaction is important. People hold the phone with one hand; fewer use the two-thumb “Crackberry prayer” grip (results from 1,300 participants)
  • 72% of use involves the thumb: the primary interaction paradigm
  • Tapping video example: if you cannot tap on it, you will give up
  • The dual-panel UI Luke built works for the thumb. Swiping, help, gestures, pull-down… all work
  • Tablet: scrolling panel on the left; horizontal use is 65%, vertical 35%
  • Touch-based laptops: allow for keyboard activity; pop up an explanation for those who use keyboards
  • You can’t really tell who is using a touch laptop with the keyboard.
  • Support touch everywhere. Above a certain size, pop up tips to help people understand how to use the interface.
  • Center empty, content on the sides for tablets: unusual, just the opposite of common design.
  • People complained about scrolling
  • People use devices while watching TV: voice-command the smartphone to “put a webpage on my Xbox screen”. Audio commands bring content/programming to the TV screen. Use each device for what it is good at.
  • Screen size alone is not enough data to make assumptions on. A TV and a smartphone may have the same resolution.
  • You shouldn’t serve a mobile-optimized site to a big TV
  • What’s being used to watch Netflix? There are 500 different SKUs.
  • Netflix uses human ergonomics (viewing distance): then it is not 500 devices, but only 4 different distances.
  • Same UI element: mobile (12 inches away) a 1-inch box; tablet (18 inches away) a 1.5-inch box; desktop (24 inches away) 2-3 inches; TV (10 feet away) a 5¼-inch UI element
  • Designing for Google Glass: sizing down to 960px or 640px, but usual responsive design wouldn’t work; it gets too crowded.
  • Posture: how does the screen fit into its environment?
  • He built a prototype for Glass. No keyboard, have to use audio commands. It’s more complicated.
  • Inverting contrast to match the environment. Often a low-light environment: make the background dark.
  • A navigation app does it via the ambient light sensor.
  • Software is responsive to your environment.
  • Media Queries Level 4: we can query a lot of things, including light level; not yet available in browsers.
  • Summary:
    • 1. Output: mobile first, responsive web design
    • 2. Input: support all input types (keyboard, voice, thumb…); communicate what’s possible.
    • 3. Posture: viewing distance, environment and more
  • Q&A:
  • See this talk as a blogpost here.
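On the input point (you can’t reliably tell whether a touch-laptop user has a keyboard), the `pointer` and `hover` features that later landed in Media Queries Level 4 are relevant; this is a sketch of those features, not something shown in the talk:

```css
/* Media Queries Level 4 interaction features (a later addition, not from
   the talk). These are hints about the PRIMARY input, not a guarantee:
   a touch laptop may still report a fine pointer when the trackpad is
   primary, so "support touch everywhere" remains the safe default. */
@media (pointer: coarse) {
  /* primary input is imprecise (a finger): enlarge tap targets */
  button { min-height: 44px; min-width: 44px; }
}
@media (hover: none) {
  /* no reliable hover: don't hide actions behind hover states */
  .actions { opacity: 1; }
}
```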
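The viewing-distance sizing above is roughly a constant-visual-angle rule: an element scales linearly with distance. A sketch of the arithmetic, based on my reading of the numbers in the notes rather than a formula from the talk:

```latex
% Constant visual angle: element size s scales linearly with distance d
s_2 = s_1 \cdot \frac{d_2}{d_1}
% From the 1-inch element at 12 inches:
%   tablet  (18 in):  1 \times 18/12  = 1.5 \text{ in}
%   desktop (24 in):  1 \times 24/12  = 2 \text{ in}
%   TV     (120 in):  1 \times 120/12 = 10 \text{ in}
% The quoted TV size (5.25 in) is about half the strictly proportional
% value, so the TV UI is evidently sized below pure proportionality.
```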
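The light-level query mentioned above appeared in the Media Queries Level 4 draft with the values `dim`, `normal`, and `washed`; since it never shipped in browsers, this is an illustrative sketch of the draft syntax only:

```css
/* Media Queries Level 4 draft: adapt contrast to ambient light */
@media (light-level: dim) {
  /* low-light environment: invert to a dark background */
  body { background: #111; color: #eee; }
}
@media (light-level: washed) {
  /* bright sunlight washing out the screen: maximize contrast */
  body { background: #fff; color: #000; }
}
```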
