The Plot To Kill The Desktop

As a fan of interface design, I’ve always found operating systems — Android, iOS, Windows — a tremendous point of fascination. We spend hours in them every day, whether we’re cognizant of that fact or not. And so any paradigm shift in this field intrigues me to no end. One such shift appears to be happening now: the phasing out of the desktop metaphor, the screen you put a wallpaper and shortcuts on.

Windows 8 was Microsoft’s bold attempt to phase out the desktop. Instead of the traditional desktop being the bottom of it all — the screen beneath all of your apps, which you would get to if you closed or minimized them — there’s now the Start screen, a colorful bunch of tiles. Aside from the stark visual difference, the main difference between the traditional desktop and the Start screen is that you can’t litter the latter with files. You’ll have to either organize your documents or adopt the mobile pattern of not worrying about where files are stored at all.

Apple created iOS without a desktop. The bottom screen here was Springboard, a sort of desktop in looks only: basically an app launcher with rudimentary folder support. Born this way, iOS has had pretty much universal appeal among adopters. There was no desktop to get used to, so no lollipop to be taken away. While sharing files between apps on iOS is sort of a pain, it hasn’t stopped people from appreciating the otherwise complete lack of file management. I suppose if you take away the need to manage files, you don’t really need a desktop to clutter up. You’d think this was the plan all along. (Italic text means wink wink, nudge nudge, pointing at the nose, and so on.)

For the longest time, Android seems to have tried to offer the best of both worlds. The bottom screen of Android is a place to see your wallpaper and the apps pinned to your dock. You can also put app shortcuts and even widgets here. Through an extra tap (so not quite the bottom of the hierarchy) you can access all of your installed apps, which, unlike on iOS, have to be put on your homescreen manually if so desired. You can actually pin document shortcuts here as well, though it’s a cumbersome process, and as with iOS you can’t save a file there. Though not elegant, the Android homescreen works reasonably well and certainly appeals to power users with its many customization options.

Microsoft and Apple both appear to consider the desktop (and, by extension, file management) an interface relic to be phased out. Microsoft tried and mostly failed to do so, while Apple is taking baby steps with iOS. If recent Android leaks are to be believed, and if I’m right in my interpretation of said leaks, Android is about to take it a step beyond even homescreens and app launchers.

One such leak suggests Google is about to bridge the gap between native apps and web apps, in a project dubbed “Hera” (after the mythological goddess of marriage). The mockups posted suggest apps are about to be treated more like cards than ever. Fans of WebOS¹ will quickly and fondly recognize this concept.

The card metaphor that Android is aggressively pushing is all about units of information, ideally contextual ones. By virtue of its physical counterpart, the metaphor suggests a card holds a finite amount of information, after which you’re done with it and can swipe it away. Like a menu at a restaurant, it stops being relevant the moment you know what to order. Similarly, business cards convey contact information and can then be filed away. Cards as an interface design metaphor are about divining what the user wants to do and grouping the answers together.

We’ve seen parts of this vision with Android Wear. The watch can’t run apps and instead relies on rich, interactive notification cards. Android phones have similar (though less rich) notifications, but are currently designed around traditional desktop patterns. There’s a homescreen at the bottom of the hierarchy, then you tap in and out of apps: home button, open Gmail, open email, delete, homescreen.

I think it’s safe to assume Google wants you to be able to do the same on an Android phone as you can on an Android smartwatch (and more), without the two using widely different interaction mechanisms. So on the phone side, something has to give. The homescreen/desktop, perchance?

The more recent leak suggests just that. Supposedly Google is working to put “OK Google” everywhere. The little red circle button you can see in the Android Wear videos will, when invoked, scale down the app you’re in and show it as a card you can apply voice actions to. Presumably the already expansive list of Google Now commands would also be available: “OK Google, play some music” to start up an instant mix.

The key pattern I take note of here is the attempt to de-emphasize individual apps and instead focus on app-agnostic actions. Matias Duarte recently suggested that mobile is dead and that we should approach design by thinking about problems to solve across a range of different screen sizes. That notion plays exactly into this. Most users probably approach their phone with particular tasks in mind: send an email, take a photo. Having to tap a home button, then an app drawer, then an app icon in order to do this seems almost antiquated compared to the slick Android Wear approach of no desktop/homescreen and no apps. Supposedly Google may remove the home button, relegating the homescreen to being simply another card in your multi-tasking list. Perhaps the bottom card?

I’ll be waiting with bated breath to see how successful Google can be in this endeavour. The homescreen/desktop metaphor represents, to many people, a comforting starting point. A 0,0,0 coordinate in a stressful universe. A place I can pin a photo of my baby girl, so I can at least smile when pulling out the smartphone to confirm that, in fact, nothing has happened since I last checked 5 minutes ago.

  1. Matias Duarte, current Android designer, used to work on WebOS  

Icon Fonts Are Ruining Your Markup

Icon fonts are great. They’re more scalable than PNGs, they’re re-colorable in CSS, and it’s easier than ever to create them. But most of us are using them wrong, and it’s ruining your markup. Recognize this?

<span class="icon icon-calendar"></span>

This is fast becoming the de facto standard syntax for inserting icons in your web designs. No need to fiddle with weird glyphs. No CSS needed to insert icons, even.

No CSS.

Hold up. The promise of CSS was that we could separate presentation from markup. We could create standardized, semantic, and sensible markup that could then be completely re-skinned solely by replacing a stylesheet. That was the whole point of the CSS Zen Garden. By using nonsense spans with verbose classes, that’s out the window, all in the name of convenience. I too have been bitten by the icon-font bug. I’m into them. I think it’s so great that I can re-color icons with a line of CSS. I like how they zoom, and how they have broader support than SVGs.

But they’re not SVGs. They’re not images. We shouldn’t pretend they were, or that they could ever be as accessible as images are.

Theoretically, an icon from a font could be inserted in a semi-accessible way by outputting the actual glyph in the markup, ensuring copy/paste-ability. You’d also have to make sure to only use icons that already exist in the Unicode table, so screen readers could make sense of them, severely limiting your options as a designer. Need a hamburger menu icon? Sorry, doesn’t exist in the Unicode table. At that point, pretty much all the benefits of using an icon font would be out the window.
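
For illustration, that semi-accessible version might look something like this, using U+2709 (an envelope, a glyph that does exist in Unicode) written straight into the markup so it survives copy and paste. The URL is a placeholder:

<!-- the real glyph lives in the markup; the icon font merely styles it -->
<a href="/mail">&#x2709; Mail</a>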

Rewind for a bit. Take a deep breath and think. Why are you using an icon font in the first place? Easy HiDPI and CSS colorability? Easy to show at multiple sizes? Fair enough.

Now pretend icon fonts didn’t exist. What would you do instead? Use a PNG or GIF as a CSS background, right? You’d treat the graphics as presentational elements. Visual aids. You’d keep them separate from your markup. You’d be able to reskin your whole site with a single stylesheet. You’d keep the markup as simple and semantic as that of the CSS Zen Garden. Hopefully you’d be a good person and worry about accessibility where it mattered most: in the structure of your markup.
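
In stylesheet terms, that might look something like this minimal sketch (the selector and file path are made up for illustration):

/* the icon is pure presentation, attached to an existing semantic element */
nav a.calendar {
  background: url("img/calendar.png") no-repeat left center; /* hypothetical path */
  padding-left: 20px; /* make room for a 16px graphic */
}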

You can still use icon fonts and have sensible markup while keeping the presentation separate. But doing so means you can’t rely on those bundled CSS helper classes. You have to do it manually; put in the work. Don’t treat icon fonts like images. Pretend they’re sprites and keep them in your stylesheet. You’ll thank me.
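
Applied to an icon font, the same idea might look like this minimal sketch (the font name, file, and Private Use Area code point are all made up for illustration):

@font-face {
  font-family: "Icons"; /* hypothetical icon font */
  src: url("fonts/icons.woff") format("woff");
}

/* same semantic hook as before; the glyph lives entirely in the stylesheet */
nav a.calendar::before {
  content: "\e001"; /* calendar glyph, mapped to a Private Use Area code point */
  font-family: "Icons";
  speak: none; /* a hint some screen readers honor for decorative glyphs */
  margin-right: 0.4em;
}

The markup stays as plain as <a href="/events" class="calendar">Calendar</a>, and a redesign never has to touch it.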

A Chromecast with a Remote

The internet is a series of tubes.

Last week Android TV leaked on The Verge. The leak was conveniently timed right after the Amazon Fire TV release, and featured unusually clear, perfectly front-facing screenshots that looked lightly filtered, almost as if to make them seem unintentionally leaked. Regardless of intent, it gave us an insight into the set-top box that Google is supposedly building.

Just a couple of months ago I bought into the Google Chromecast, a headless HDMI dongle that streams the internet to your TV. The Chromecast is as simple as can be: it requires you to use your handset or tablet to control it, so there are no “apps” per se. In fact, in order for Netflix to support the Chromecast, it has to offer its content — movies, TV shows, poster art, box art — as URLs. Because the Chromecast can read nothing else.

That’s where it gets interesting. The article in The Verge raises an obvious question: why is Google making a set-top box that requires apps when its first successful TV device required none? Thankfully, GigaOM filled in the blanks in their article on the technology behind it. If I’m reading the tea leaves correctly, Google has indeed cracked it, and the Android TV doesn’t really require apps — not in the way we’re used to:

I’ve been told that Google’s new approach wants to do away with those differences by replacing these custom interfaces with standardized templates. Publishers wouldn’t need to come up with their own user interface, but instead would develop apps that provide data feeds to the Android TV platform.

Read it this way: you don’t have to make an app for the Android TV; your content just has to be URL-accessible. In fact, if a service is already Chromecast-ready, putting it on Android TV will probably require very little work. It’s quite clever: just expose the content-tube endpoint and you have the best of the internet in a native experience, like an RSS feed for television.
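
Nobody outside Google knows what those data feeds actually look like, but conceptually it could be as simple as something like this (a purely hypothetical sketch, not a real Android TV format):

<!-- hypothetical publisher feed; the standardized Android TV templates
     would render the titles and poster art, no custom UI required -->
<feed publisher="ExampleFlix">
  <item>
    <title>Some Movie</title>
    <poster>https://example.com/posters/some-movie.jpg</poster>
    <stream>https://example.com/streams/some-movie.mp4</stream>
  </item>
</feed>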

Android TV is just a bigger Chromecast, with a remote control and an interface, should you prefer that. Ted Stevens was right all along.

Ode to Any.do

Oh Any.do, how I love you.
Except when servers go down. Makes me feel like a clown.

The web-app is bangin’, except when it’s hanging.
I’m in to your phone app, it syncs in a snap.

Your graphics are blurry, but not to worry;
Any.do Moment is swell, so all is well.

I’m not into Cal, but mostly I’m happy,
show it again and your rating be crappy.

You’re doing something right,
‘cos no other to-do app can put up a fight.

Penicillin

My baby has an inner ear infection. Oftentimes these ailments disappear on their own. Other times they get real bad. Thankfully we have Penicillin, which fixes it right up.

For now.

One day in 1928 — it was a Friday — the Scotsman Alexander Fleming went about his daily business at St. Mary’s Hospital in London. He was working in his laboratory when he discovered he’d forgotten to close up a petri dish of bacteria from the night before. What he noticed would change the world: a mould had grown in that petri dish, and in a halo around that mould the bacteria had stopped growing. What Alexander Fleming had discovered would save tens of millions of lives in the century to come: this natural mould exuded a substance with antibiotic properties. Little more than a decade later we had Penicillin as a medicine, and on this Friday in 2014, Penicillin is helping cure my baby girl. Thank you, Alexander Fleming.

There’s a problem, though. Penicillin is a wonderful drug, but bacteria — just like humans — evolve and grow stronger. Put a drop of Penicillin in a petri dish of bacteria and the bacteria will die. Probably. There’s a tiny chance some of those bacteria will survive due to a random Penicillin-resistant mutation. Those lucky few survivors might reproduce and migrate. Repeat this process for a century and you’re bound to have a couple of strains of bacteria to which even the strongest of Penicillins are useless.

We knew this would happen. Yet still to this day, Penicillin is used on a grand scale in meat production, of all things. When cattle have particularly bad living conditions, when too many cows are huddled up in too little space, they’ll inflict little scratches on each other, wounds that might heal naturally on a green field of grass. But if your living quarters are also where you go to the toilet, no such luck. Hey, thought the meat industry, we can just pump the cattle full of Penicillin and no bacteria will grow in those wounds!

The way we treat our cattle is troublesome enough, but the inevitable consequences should be alarming. Those dirty farms and cattle transports are evolutionary crucibles for resistant bacteria. The strong bacteria will survive and require stronger Penicillins. It’s an evolutionary arms race and we’re losing. We always knew bacteria would evolve to be Penicillin-resistant eventually, but if we’d been smart about our Penicillin usage, we might’ve had enough time to research functional alternatives. As it stands, I’m worried about a future dad and his daughter battling an infection maybe just ten years from now. I hope she’ll be alright, man.

So I guess here’s another reason you should eat organic meat. Or no meat, that works too.