The Old World Display

Maybe a decade ago, a web designer friend of mine told me a classic “client from hell” story. The details have since become fuzzy, but the crux of it revolved around a particular design the client wouldn’t approve. One detail was off: a particular element just wouldn’t center properly in the layout, or so the client insisted. Thankfully the client had come up with a seemingly simple fix: just draw half a pixel! Who would’ve guessed that just a decade later, “The Apple Retina Display” would herald the arrival of just that: the halfpixel?

While the term “retina” is mainly marketing chatter meant to imply that you can’t see the pixels on the screen, it’s not just about making the display arbitrarily high-resolution. In fact, it’s pixel-doubling. The 1024×768 iPad doubled to 2048×1536 when going retina, and while the implicit goal of this was clarity and crispness, the exact doubling of screen dimensions means UI elements scale perfectly: 1 pixel simply becomes 4 pixels. It’s quite brilliant, and there’s only one pitfall.

For a designer it’s all too easy to get carried away with quadruple the canvas. In the plebeian resolution of yore, tiny UI elements had little room for error: icons had to be perfectly aligned to whole pixels, and ideally their stem weight would be a whole pixel value. The atom of the display — the pixel — was the tiniest unit of information possible, and if it wasn’t mindfully placed it would blur and artifact under heavy antialiasing. In The Old World we had a term for this: pixel-perfect.

However, inside the retina display lives the halfpixel siren, and her song is alluring. She’ll make you forget about The Old World. She’ll draw tiny, tiny pixels for you and she’ll make your designs feel pixel-perfect, even when they’re not. It’s an irresistible New World.

David Pierce reviewing the new iMac 5K for The Verge:

I drove an Audi and never looked at my Saturn the same way again. Remember the first time you used a capacitive touchscreen, threw your 56k modem out the window and switched to broadband, or switched from standard-def TV to 1080p?

It only took about ten minutes of using Apple’s new iMac with Retina display to make me wonder how I’m ever supposed to go back. Back to a world where pixels are visible on any screen, even one this big.

It’s a good life in The New World. It’s a good life here in the first world. It’s so true: no one should have to endure the pin-pricking misery of looking at old-world 1x screens! (Actually, my 35-year-old eyes aren’t good enough to see pixels on my daughter’s Etch A Sketch, but I can still totally empathize with how completely unbearable the pain must be.)

It gets worse — are you sitting down? Here it comes: most screens aren’t retina. One wild guess puts the population of non-retina desktop users (or old-worlders as I call them) at 98.9%.

That’s not good enough, and it’s time to fight for the halfpixel. We’re at a fateful crucible in history, and I can see only two possible paths forward. Either we start a “retina displays for oil” program to bring proper high resolutions to the overwhelming majority of people who even have computers, or we just have to live with ourselves knowing that these old-worlders will have to endure not only disgustingly low resolutions, but also all of the extra blur and artifacts that will result from future computer graphics being designed on retina screens and not even tested for crispness on 1x screens1.

Oh well, I suppose there’s a third option. I suppose we can all wake up from this retina-induced bong haze and maybe just once in a while take one damn look at our graphics on an old world display.

  1. Hey Medium… We need to talk.  

The One Platform Is Dead

I used to strongly believe the future of apps would be rooted in web technologies such as HTML5. Born cross-platform, they’d be really easy to build, and bold new possibilities were just around the corner. I still believe webapps will be part of the future, but recently I’ve started to think it’s going to be a bit more muddled than that. If you’ll indulge me, the explanation will be somewhat roundabout.

The mobile era in computing, more than anything, helped propel interface design patterns forward faster than decades of desktop operating systems did. We used to debate whether your app should use native interface widgets or whether it was okay to style them. While using native widgets is often still a good idea, dwelling on it would be navel-gazing, as it’s no longer the day-and-night indicator of whether an app is good or not. In fact, we’re starting to see per-app design languages that cross not only platforms, but codebases too. Most interestingly, these apps don’t suck! You see it with Google rolling out Material Design across Android and web apps. Microsoft under Satya Nadella is rolling out their flatter-than-flat visual language across not only their own Windows platforms, but iOS and Android as well. Apple just redesigned OS X to look like iOS.

It feels like we’re at a point where traditional usability guidelines should be digested and analyzed for their intent, rather than taken at dogmatic face value. If it looks like a button, acts like a button, or both, it’s probably a button. What we’re left with is a far simpler arbiter for success: there are good designs and there are bad designs. It’s as liberatingly simple as not wearing pants.

dogma (noun)

a principle or set of principles laid down by an authority as incontrovertibly true

The dogma of interface design has been left by the wayside. Hired to take its place is a sense of good taste. Build what works for you and keep testing, iterating and responding to feedback. Remembering good design patterns will help you take shortcuts, but once in a while we have to invent something. It either works or it doesn’t, and then you can fix it.

It’s a bold new frontier, and we already have multiple tools to build amazing things. No single technology or platform will ever “win”, because there is no winning the platform game. The operating system is increasingly taking a back seat to the success of ecosystems that live in the cloud. Platform install numbers will soon become a mostly useless metric for divining who’s #winning this made-up war of black vs. white. The ecosystem is the new platform, and because of that it’s easier than ever to switch from Android to iOS.

It’s a good time to build apps. Come up with a great idea, then pick an ecosystem. You’ll be better equipped to decide what kind of code you want to write: does your app need just one platform, several, or should it be cross-platform? It’s only going to get easier: in a war of ecosystems, the one that’s the most open and spans the most platforms will be the most successful. It’ll be in the interest of platform vendors to run as many apps as possible, whether through multiple runtimes or just simplified porting. It won’t matter whether you wrote your app in HTML5, Java, or C#: on a good platform it’ll just work. Walled gardens will stick around, of course, but they’ll be a strategy that fewer and fewer companies can support.

No, dear reader, I have not forgotten about Jobs’ Thoughts on Flash. Jobs was right: apps built on Flash were bad. That’s why today is such an exciting time. People don’t care about the code behind the curtain.

If it’s good, it’s good.

Mobile

The future of computing is mobile, they say, and they’re not referring to that lovely city in Alabama. Being a fan of smartphones and their UI design, I’ve considered myself mostly in the loop with where things were going, but recently it’s dawned on me I might as well have been lodged in a cave for a couple of years, only to just emerge and see the light. The future of computing is more mobile than I thought, and it’s not actually the future. (It’s now — I hate cliffhangers).

I had a baby a few years ago. That does things to you. As my friend put it, parenthood is not inaccurately emulated by sitting down and getting up again immediately. It’s a mostly constant state of activity. Either you’re busy playing with the child, caring for it, planning for how you’re going to care for the child next, or you’re worrying. There’s very little downtime, and so that becomes a valuable commodity in itself.

One thing parents do — or at least this parent — is use the smartphone. I put things in my calendar and to-do list because if it’s not in my calendar or to-do list I won’t remember it. I don’t email very much, because we don’t do that, but I keep Slack in my pocket. I take notes constantly. I listen to podcasts and music on the device, I control what’s on my television with it, and now I’m also doing my grocery shopping online. (I’d argue that last part isn’t laziness if you have your priorities straight, and having kids comes with an overwhelming sense that you do — it’s powerful chemistry, man.)

So what do I need my laptop for? Well, I’m an interface designer, so I need a laptop for work. But when I’m not working I barely use it at all. To put it differently, when I’m not working I don’t need a laptop at all, and if I were in a different business I’d probably never pick one up. There’s almost nothing important I can’t do on my phone instead, and oftentimes the mobile experience is better, faster, simpler. By virtue of there being less real estate, there’s just no room for clutter. It requires the use of design patterns and a focus on usability like never before. Like a sculptor chipping away every little piece that doesn’t resemble the figure within, a good mobile experience has to simplify until only what’s important remains.

It’s only in the past couple of years that the scope of this shift has become clear to me, and it’s not just about making sure your website works well on a small screen. Computers have always done what they were told, but the interaction has been shackled by a lack of portability and obtuse interfacing methods. Not only can mobile devices physically unshackle us from desks, but their limitations in size and input have required the industry to think of new ways to divine intent and translate your thoughts into bits. Speaking, waving, swiping, tapping, throwing and winking all save us from repetitive injuries, all the while being available to us on our own terms.

I’m in. The desktop will be an afterthought for me from now on — and the result will probably be better for it. I joked on Twitter the other day about watch-first design. I’ve now slept on it, and I’m rescinding the joke part. My approach from now on is tiny-screen first, then graceful scaling. Mobile patterns have already standardized a plethora of useful and recognizable layouts, icons and interactions that can benefit us beyond just the small screens. The dogma of the desktop interface is now a thing of the past, and mobile is heralding a future of drastically simpler and better UI. The net result is not only more instagrams browsed, it’s more knowledge shared and learned. The fact that I can use my voice to tell my television to play more Knight Rider is just a really, really awesome side-effect.

The Plot To Kill The Desktop

As a fan of interface design, I’ve always found operating systems — Android, iOS, Windows — a tremendous point of fascination. We spend hours in them every day, whether we’re cognizant of that fact or not. And so any paradigm shift in this field intrigues me to no end. One such shift that appears to be happening is the phasing out of the desktop metaphor: the screen you put a wallpaper and shortcuts on.

Windows 8 was Microsoft’s bold attempt to phase out the desktop. Instead of the traditional desktop being the bottom of it all — the screen beneath all of your apps, the one you’d get to if you closed or minimized them — there’s now the Start screen, a colorful bunch of tiles. Aside from the stark visual difference, the main difference between the traditional desktop and the Start screen is that you can’t litter it with files. You’ll have to either organize your documents or adopt the mobile pattern of not worrying about where files are stored at all.

Apple created iOS without a desktop. The bottom screen here was Springboard, a desktop in looks only, basically an app launcher with rudimentary folder support. Born this way, iOS has had pretty much universal appeal among adopters. There was no desktop to get used to, and so no lollipop to be taken away. While sharing files between apps on iOS is sort of a pain, it hasn’t stopped people from appreciating the otherwise complete lack of file management. I suppose if you take away the need to manage files, you don’t really need a desktop to clutter up. You’d think this was the plan all along. (Italic text means wink wink, nudge nudge, pointing at the nose, and so on.)

For the longest time, Android seems to have tried to offer the best of both worlds. The bottom screen of Android is a place to see your wallpaper and the apps pinned to your dock. You can also put app shortcuts and even widgets here. Through an extra tap (so not quite the bottom of the hierarchy) you can access all of your installed apps, which, unlike on iOS, have to be manually added to your homescreen if you want them there. You can actually pin document shortcuts here as well, though it’s a cumbersome process, and as with iOS you can’t save a file there. Though not elegant, the Android homescreen works reasonably well and certainly appeals to power users with its many customization options.

Microsoft and Apple both appear to consider the desktop (and file-management as a subset) an interface relic to be phased out. Microsoft tried and mostly failed to do so, while Apple is taking baby-steps with iOS. If recent Android leaks are to be believed, and if I’m right in my interpretation of said leaks, Android is about to take it a step beyond even homescreens/app-launchers.

One such leak suggests Google is about to bridge the gap between native apps and web apps, in a project dubbed “Hera” (after the mythological goddess of marriage). The mockups posted suggest apps are about to be treated more like cards than ever. Fans of WebOS1 will recognize the concept fondly:

The card metaphor that Android is aggressively pushing is all about units of information, ideally contextual. The metaphor, by virtue of its physical counterpart, suggests a card holds a finite amount of information, after which you’re done with it and can swipe it away. Like a menu at a restaurant, it stops being relevant the moment you know what to order. Similarly, business cards convey contact information and can then be filed away. Cards as an interface design metaphor are about divining what the user wants to do and grouping the answers together.

We’ve seen parts of this vision with Android Wear. The watch can’t run apps and instead relies on rich, interactive notification cards. Android phones have similar (though less rich) notifications, but are currently designed around traditional desktop patterns. There’s a homescreen at the bottom of the hierarchy, then you tap in and out of apps: home button, open Gmail, open email, delete, homescreen.

I think it’s safe to assume Google wants you to be able to do the same (and more) on an Android phone as you can on an Android smartwatch, and not have them use two widely different interaction mechanisms. So on the phone side, something has to give. The homescreen/desktop, perchance?

The more recent leak suggests just that. Supposedly Google is working to put “OK Google” everywhere. The little red circle button you can see in the Android Wear videos will, when invoked, scale down the app you’re in and show it as a card you can apply voice actions to. Presumably the already expansive list of Google Now commands would also be available; “OK Google, play some music” to start up an instant mix.

The key pattern I take note of here is the attempt to de-emphasize individual apps and instead focus on app-agnostic actions. Matias Duarte recently suggested that mobile is dead and that we should approach design by thinking about problems to solve across a range of different screen sizes. That notion plays exactly into this. Most users probably approach their phone with particular tasks in mind: send an email, take a photo. Having to tap a home button, then an app drawer, then an app icon in order to do this seems almost antiquated compared to the slick Android Wear approach of no desktop/homescreen, no apps. Supposedly Google may remove the home button, relegating the homescreen to being simply another card in your multi-tasking list. Perhaps the bottom card?

I’ll be waiting with bated breath to see how successful Google can be in this endeavour. The homescreen/desktop metaphor represents, to many people, a comforting starting point. A 0,0,0 coordinate in a stressful universe. A place I can pin a photo of my baby girl, so I can at least smile when pulling out the smartphone to confirm that, in fact, nothing has happened since I last checked 5 minutes ago.

  1. Matias Duarte, current Android designer, used to work on WebOS  

Icon Fonts Are Ruining Your Markup

Icon fonts are great. They’re more scalable than PNGs, they’re re-colorable in CSS, and it’s easier than ever to create them. But most of us are using them wrong, and it’s ruining your markup. Recognize this?

<span class="icon icon-calendar"></span>

This is fast becoming the de facto standard syntax for inserting icons in your web designs. No need to fiddle with weird glyphs. No CSS needed to insert icons, even.

No CSS.

Hold up. The promise of CSS was that we could separate presentation from markup. We could create standardized, semantic and sensible markup that could then be completely re-skinned solely by replacing a stylesheet. That was the whole point of the CSS Zen Garden. By using nonsense spans with verbose classes, that’s out the window, all in the name of convenience. I too have been bitten by the icon-font bug. I’m into them. I think it’s so great that I can re-color icons with a line of CSS. I like how they zoom, and how they have broader support than SVGs.

But they’re not SVGs. They’re not images. We shouldn’t pretend they are, or that they could ever be as accessible as images are.

Theoretically an icon from a font could be inserted in a semi-accessible way by outputting the actual glyph in the code, to ensure copy/paste-ability. You’d also have to make sure to only use icons that already exist in the Unicode table so screen readers could make sense of them, thus severely limiting your options as a designer. Need a hamburger menu icon? Sorry, doesn’t exist in the Unicode table. Pretty much all the benefits of using an icon font would be out the window at this point.
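For illustration, here is roughly what that semi-accessible approach would look like, using a character that does exist in Unicode (the envelope, U+2709). The class name is invented for the example:

<!-- The glyph lives in the markup itself, so it survives copy/paste
     and reads as an envelope to anything that understands Unicode. -->
<a href="/contact" class="icon-link">&#x2709; Contact us</a>

It works, right up until you need an icon Unicode never defined.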

Rewind for a bit. Take a deep breath and think. Why are you using an icon font in the first place? Easy HiDPI and CSS colorability? Easy to show at multiple sizes? Fair enough.

Now pretend icon fonts didn’t exist. What would you do instead? Use a PNG or GIF as a CSS background, right? You’d treat the graphics as presentational elements. Visual aids. You’d keep them separate from your markup. You’d be able to reskin your whole site with a single stylesheet. You’d keep the markup as simple and semantic as that of the CSS Zen Garden. Hopefully you’d be a good person and worry about accessibility where it matters most: in the structure of your markup.

You can still use icon-fonts and have sensible markup while keeping the presentation separate. But doing so means you can’t rely on those bundled CSS helper classes. You have to do it manually; put in the work. Don’t treat icon-fonts like images. Pretend they’re sprites and keep them in your stylesheet. You’ll thank me.
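As a rough sketch of what that can look like in practice (the font name, class and code point below are placeholders, not any particular icon set’s values), the markup stays semantic while the icon lives entirely in the stylesheet:

Markup:

<a class="event-link" href="/events">Upcoming events</a>

Stylesheet:

/* The icon is attached like a sprite, via a pseudo-element. */
.event-link::before {
  font-family: "Icons";   /* your icon font */
  content: "\e001";       /* the calendar glyph's code point in that font */
  margin-right: 0.4em;
}

Swap the stylesheet and the icon changes with it; the markup never has to know.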

A Chromecast with a Remote

The internet is a series of tubes.

Last week Android TV leaked on The Verge. The leak was conveniently timed right after the Amazon Fire TV release, and featured unusually clear, perfectly front-facing screenshots that looked lightly filtered, almost as if to make them seem unintentionally leaked. Regardless of intent, it gave us an insight into the set-top box that Google is supposedly building.

Just a couple of months ago I bought into the Google Chromecast, a headless HDMI dongle that streams the internet to your TV. The Chromecast is as simple as can be: it requires you to use your handset or tablet to control it, so there are no “apps” per se. In fact, in order for Netflix to support the Chromecast, it has to offer its content — movies, TV shows, poster art, box art — as URLs. Because the Chromecast can read nothing else.

That’s where it gets interesting. The article in The Verge raises an obvious question: why is Google making a set-top box that requires apps when its first successful TV device required none? Thankfully, GigaOM filled in the blanks in their article on the technology behind it. If I’m reading the tea leaves correctly, Google has indeed cracked it, and the Android TV doesn’t really require apps — not in the way we’re used to:

I’ve been told that Google’s new approach wants to do away with those differences by replacing these custom interfaces with standardized templates. Publishers wouldn’t need to come up with their own user interface, but instead would develop apps that provide data feeds to the Android TV platform.

Read it this way: you don’t have to make an app for the Android TV; your content just has to be URL-accessible. In fact, if a service is already Chromecast-ready, putting it on Android TV will probably require very little work. It’s quite clever: just expose the content-tube endpoint and you have the best of the internet in a native experience, like an RSS feed for television.
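Purely as an illustration of that idea (this is plain RSS 2.0, not the actual Chromecast or Android TV feed format, which hasn’t been published), a URL-accessible content feed could be as simple as:

<rss version="2.0">
  <channel>
    <title>Example Video Service</title>
    <item>
      <title>Pilot Episode</title>
      <link>https://example.com/shows/pilot</link>
      <!-- Poster art and the stream itself are just URLs the TV can fetch. -->
      <enclosure url="https://example.com/shows/pilot.mp4" type="video/mp4" length="123456789"/>
    </item>
  </channel>
</rss>

Everything the television needs is a URL; the interface around it can be whatever template the platform provides.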

Android TV is just a bigger Chromecast, with a remote-control and an interface, should you prefer that. Ted Stevens was right all along.