Archive, Don’t Delete

I’m one of the lucky … actually I have no idea how many or few have Google Inbox. In any case, I was graciously sent an invite, and have been using it on the web and on my Android phone since then. I love almost everything about it. I particularly love the fact that Inbox seems to be able to divine what archetype an email has. Is it spam? Don’t show it to me. Is it travel-related? Bundle it up. Same with purchases, social network notifications, promos, etc. It even does a good job of prioritizing each bundle, and only showing notifications when it thinks it’s urgent — configurable of course. It’s pretty great.

I don’t love how hard it is to delete an item. You have to dive deep into an overflow menu on a particular email to find the “Trash” button. I wish it were more easily accessible — I don’t know man, I guess I’m a deleter. I remember buying a 320 MB hard drive called “Bigfoot” because it was so humongous, but even then I had to manage my space in order to fit everything. So I can’t help but feel like this is a generational issue, and I’m now a relic of the past. It had to happen eventually, and I’m getting a really strong vibe that the ceremonial burial of the trash button was very much intentional. It’s behaviorism: teaching you not to delete, because archiving is faster and safer.

The crux of the Inbox app is its embrace of the idea that an email is a task. This runs contrary to the very popular notion that you should keep those two paradigms as separate as you can, so it’s very interesting to see Google leaning into it. Combined with their concept of “bundles”, I think they make it work.

Let’s walk through it: it’s Monday morning and you just arrived at the office to open up your email. You received a couple of promos from Spotify and Amazon in one bundle, an unbundled email from mom, 9 bundled Facebook notifications, and two shipping notifications in a bundle. The one email worth looking at is immediately obvious, so you can either tap “Done” on the “Promos”, “Purchases” and “Social” bundles to end up with only that one email, or you can pin mom’s email and tap the “Sweep” button. Everything but the email that needs your attention is archived and marked “Done”, and it took seconds.

This is how Inbox is supposed to work. You archive tasks you’re done with, you don’t delete. If something important did happen to be in one of the tasks you quickly marked done, it’s still there, accessible via a quick search. If you get a lot of email, I really do believe that embracing Inbox will take away stress from your daily life. All it asks is that you let go of your desire to manage your archive. You have to accept that there are hundreds of useless Facebook notification emails in your archive, emails you’d previously have deleted. It’s okay, they’re out of sight, out of mind, and no, you won’t run out of space because of them. Checking nine boxes and then picking the delete button, as opposed to simply clicking one “Done” button — that time adds up, and you need to let go.

I know this. I understand this. As a web designer myself, I think there are profound reasons for hiding the delete button. It’s about letting machines do the work for you, so you can do more important things instead, like spending time with your family. It’s the right thing to do. And I’m not quite ready for it yet. Can I have the trash button be a primary action again, please, Google?

Atheism is not a religion

Every once in a while, the topic of religion (or lack thereof) comes up in discussion among my friends and me. I often try to explain what atheism is, or actually what it isn’t, and almost like clockwork it comes up: sounds a lot like religion. It’s an innocent statement, but it also means my explanation failed yet again. It’s a rousing topic full of nuance and passion, no matter the religion, agnosticism or atheism of the participants in the discussion. And it fascinates me so because it’s supposed to be simple! After all, it’s just semantics:

atheism, noun
disbelief or lack of belief in the existence of God or gods.

religion, noun
the belief in and worship of a superhuman controlling power, especially a personal God or gods.

Clearly just by looking at the dictionary, one seems incompatible with the other. All the delicious nuance stems from the fact that the term “god” is part of both definitions.

Quick intermezzo before we get into the weeds: I have many friends with a multitude of different religions, people whom I love and respect deeply. I’m not here to take anyone’s faith away. This is not about whether religion is a force for good or not; there are far more intelligent debates to be had elsewhere. I just like discussing semantic nuances.

What makes it so difficult to pin down is the fact that atheism is really just a word in the dictionary. We’re not even very protective about such words, so we change their meaning from time to time. New information comes to light! The term evolves and mutates and comes to include more meaning still. Looking broadly, though, the definition of atheism forks in two general directions. One direction has it defined mainly as a disbelief in a god or gods, while the other considers it a lack of belief in a god or gods. Did you catch the difference between the two? It’s quite subtle, yet substantial.

Disbelief means you believe there are no gods. You’ve put two and two together, and decided hey — it just doesn’t make sense to me. This is unlike religion in a couple of obvious ways, first of all the fact that there’s no holy text that describes what you must or must not believe. There’s no promise of an afterlife (or lack thereof) if you don’t, err, not believe in god. There’s no codex of laws you have to follow to be a “true” atheist. And there are no places you can go to meet other atheists to, uh, not pray with. (Actually you can still say a prayer if you want to; it’s not like The Atheist Police come knocking on your door if you do.)

The absence of belief, on the other hand, is a bit trickier to pin down. If for whatever reason you never learned about god, well, then you are without belief in god. How could you believe in something you’ve never heard of? Take my daughter for instance. She’s 3, and she’s only been talking for the past year or so. I don’t think anyone has told her about religion, not that I know of at least. So she is, by definition, without belief in god. Literally atheos — Greek for “without god(s)”. It wasn’t her choice; how could she even make one? I’m not even sure she’d understand what I was talking about if I tried — she’d probably ask for her juicebox and crayons. From this perspective, being an atheist is, in many ways, a default position. It’s what you’re born as. Even if you later in life find solace and happiness in religion, until you found that religion you were, for all intents and purposes, an atheist. There’s no shame in that; it’s just a word.

I half expect some readers (thanks for reading 737 words discussing semantics by the way) to ask me: why so defensive, are you sure you’re not describing a religion? Sure, once in a while you’ll encounter someone who takes their atheism so seriously it borders on being a religious experience for them. But that’s fine, they can call themselves atheists too. It’s not like you get a badge at the door. Atheism isn’t organized behind a hashtag, and it’s not about ethics in games journalism.

You are an atheist until you choose not to be, and there’s room for all of us.

The Old World Display

Maybe a decade ago, a web designer and friend of mine told me a classic “client from hell” story. The details have since become fuzzy, but the crux of the story revolved around a particular design the client wouldn’t approve. There was this one detail that was off, a particular element that just wouldn’t center properly in the layout (or so the client insisted). Thankfully the client had come up with a seemingly simple fix: just draw half a pixel! Who would’ve guessed that just a decade later, “The Apple Retina Display” would herald the arrival of just that: the halfpixel?

While the term “retina” is mainly marketing chatter meant to imply that you can’t see the pixels on the screen, it’s not just about making sure the display is arbitrarily high resolution. In fact, it’s pixel-doubling. The 1024×768 iPad doubled to 2048×1536 when going retina, and while the implicit goal of this was clarity and crispness, the exact doubling of screen dimensions means UI elements scale perfectly: 1 pixel simply becomes 4 pixels. It’s quite brilliant, and there’s only one pitfall.
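To make the doubling concrete, here’s a minimal CSS sketch (the class and asset names are hypothetical): the element keeps the same layout size in CSS pixels, and a retina screen simply gets a bitmap with twice the dimensions, so every CSS pixel is painted with four device pixels.

/* Hypothetical toolbar icon: 32×32 in CSS pixels on every screen. */
.toolbar-icon {
  width: 32px;
  height: 32px;
  background-image: url("icon.png"); /* 32×32 bitmap for 1x screens */
  background-size: 32px 32px;
}

/* On a pixel-doubled ("retina") screen, swap in a 64×64 bitmap.
   The layout size is unchanged; there are simply four device pixels per CSS pixel. */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  .toolbar-icon {
    background-image: url("icon@2x.png");
  }
}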

For a designer it’s more than easy to get carried away with quadruple the canvas. In the plebeian resolution of yore, tiny UI elements had little room for error: icons had to be perfectly aligned to even pixels, and ideally their stem weight would be a whole pixel value. The atom of the display — the pixel — was the tiniest unit of information possible, and if it wasn’t mindfully placed it would blur and artifact under heavy antialiasing. In The Old World we had a term for this: pixel-perfect.

However, inside the retina display lives the halfpixel siren, and her song is alluring. She’ll make you forget about The Old World. She’ll draw tiny, tiny pixels for you and she’ll make your designs feel pixel-perfect, even when they’re not. It’s an irresistible New World.

David Pierce reviewing the new iMac 5K for The Verge:

I drove an Audi and never looked at my Saturn the same way again. Remember the first time you used a capacitive touchscreen, threw your 56k modem out the window and switched to broadband, or switched from standard-def TV to 1080p?

It only took about ten minutes of using Apple’s new iMac with Retina display to make me wonder how I’m ever supposed to go back. Back to a world where pixels are visible on any screen, even one this big.

It’s a good life in The New World. It’s a good life here in the first world. It’s so true: no one should have to endure the pin-pricking misery of looking at old-world 1x screens! (Actually, my 35-year-old eyes aren’t good enough to see pixels on my daughter’s Etch A Sketch, but I can still totally empathize with how completely unbearable the pain must be).

It gets worse — are you sitting down? Here it comes: most screens aren’t retina. One wild guess puts the population of non-retina desktop users (or old-worlders as I call them) at 98.9%.

That’s not good enough, and it’s time to fight for the halfpixel. We’re at a fateful crucible in history, and I can see only two possible paths going forward. Either we start a “retina displays for oil” program to bring proper high resolutions to the overwhelming majority of people who even have computers, or we just have to live with ourselves knowing that these old-worlders will have to endure not only disgustingly low resolutions, but indeed all of the added extra blur and artifacts that will result from future computer graphics being designed on retina screens and not even tested for crispness on 1x screens¹.

Oh well, I suppose there’s a third option. I suppose we can all wake up from this retina-induced bong haze and maybe just once in a while take one damn look at our graphics on an old world display.

  1. Hey Medium… We need to talk.  

The One Platform Is Dead

I used to strongly believe the future of apps would be rooted in web technologies such as HTML5. Born cross-platform, they’d be really easy to build, and bold new possibilities were just around the corner. I still believe webapps will be part of the future, but recently I’ve started to think it’s going to be a bit more muddled than that. If you’ll indulge me, the explanation will be somewhat roundabout.

The mobile era in computing, more than anything, helped propel interface design patterns ahead much faster than decades of desktop operating systems did. We used to discuss whether your app should use native interface widgets or if it was okay to style them. While keeping them unstyled is often still a good idea, dwelling on it would be navel-gazing, as it’s no longer the day-and-night indicator of whether an app is good or not. In fact, we’re starting to see per-app design languages that cross not only platforms, but codebases too. Most interestingly, these apps don’t suck! You see it with Google rolling out Material Design across Android and web apps. Microsoft under Satya Nadella is rolling out their flatter-than-flat visual language across not only their own Windows platforms, but iOS and Android as well. Apple just redesigned OS X to look like iOS.

It feels like we’re at a point where traditional usability guidelines should be digested and analyzed for their intent, rather than taken at dogmatic face value. If it looks like a button, acts like a button, or both, it’s probably a button. What we’re left with is a far simpler arbiter for success: there are good designs and there are bad designs. It’s as liberatingly simple as not wearing pants.

dogma (noun)
a principle or set of principles laid down by an authority as incontrovertibly true

The dogma of interface design has been left by the wayside. Hired to take its place is a sense of good taste. Build what works for you and keep testing, iterating and responding to feedback. Remembering good design patterns will help you take shortcuts, but once in a while we have to invent something. It either works or it doesn’t, and then you can fix it.

It’s a bold new frontier, and we already have multiple tools to build amazing things. No one single technology or platform will ever “win”, because there is no winning the platform game. The operating system is increasingly taking a back seat to the success of ecosystems that live in the cloud. Platform install numbers will soon become a mostly useless metric for divining who’s #winning this made-up war of black vs. white. The ecosystem is the new platform, and because of it, it’s easier than ever to switch from Android to iOS.

It’s a good time to build apps. Come up with a great idea, then pick an ecosystem. You’ll be better equipped to decide what type of code you’ll want to write: does your app need only one platform, multiple platforms, or should it be cross-platform? It’s only going to become easier: in a war of ecosystems, the one that’s the most open and spans the most platforms will be the most successful. It’ll be in the interest of platform vendors to run as many apps as possible, whether through multiple runtimes or just simplified porting. It won’t matter if you wrote your app in HTML5, Java, or C#: on a good platform it’ll just work. Walled gardens will stick around, of course, but it’ll be a strategy that fewer and fewer companies can support.

No, dear reader, I have not forgotten about Jobs’ Thoughts on Flash. Jobs was right: apps built on Flash were bad. That’s why today is such an exciting time. People don’t care about the code behind the curtain.

If it’s good, it’s good.

Mobile

The future of computing is mobile, they say, and they’re not referring to that lovely city in Alabama. Being a fan of smartphones and their UI design, I’ve considered myself mostly in the loop with where things were going, but recently it’s dawned on me I might as well have been lodged in a cave for a couple of years, only to just emerge and see the light. The future of computing is more mobile than I thought, and it’s not actually the future. (It’s now — I hate cliffhangers).

I had a baby a few years ago. That does things to you. As my friend put it, parenthood is not inaccurately emulated by sitting down and getting up again immediately. It’s a mostly constant state of activity. Either you’re busy playing with the child, caring for it, planning for how you’re going to care for the child next, or you’re worrying. There’s very little downtime, and so that becomes a valuable commodity in itself.

One thing parents do — or at least this parent — is use the smartphone. I put things in my calendar and to-do list because if it’s not in my calendar or to-do list I won’t remember it. I don’t email very much, because we don’t do that, but I keep Slack in my pocket. I take notes constantly. I listen to podcasts and music on the device, I control what’s on my television with it, and now I’m also doing my grocery shopping online. (I’d argue that last part isn’t laziness if you have your priorities straight, and having kids comes with an overwhelming sense that you do — it’s powerful chemistry, man.)

So what do I need my laptop for? Well, I’m an interface designer, so I need a laptop for work. But when I’m not working I barely use it at all. To put it differently, when I’m not working, I don’t need a laptop at all, and if I were in a different business I’d probably never pick one up. There’s almost nothing important I can’t do on my phone instead, and oftentimes the mobile experience is better, faster, simpler. By virtue of there being less real estate, there’s just not room for clutter. It requires the use of design patterns and a focus on usability like never before. Like a sculptor chipping away every little piece that doesn’t resemble the finished figure, a good mobile experience has to simplify until only what’s important remains.

It’s only in the past couple of years that the scope of this shift has become clear to me, and it’s not just about making sure your website works well on a small screen. Computers have always been doing what they were told, but the interaction has been shackled by lack of portability and obtuse interfacing methods. Not only can mobile devices physically unshackle us from desks, but their limitations in size and input have required the industry to think of new ways to divine intent and translate your thoughts into bits. Speaking, waving at, swiping, tapping, throwing and winking all save us from repetitive injuries, all the while being available to us on our own terms.

I’m in. The desktop will be an afterthought for me from now on — and the result will probably be better for it. I joked on Twitter the other day about watch-first design. I’ve now slept on it, and I’m rescinding the joke part. My approach from now on is tiny-screen first and then graceful scaling. Mobile patterns have already standardized a plethora of useful and recognizable layouts, icons and interactions that can benefit us outside of only the small screens. The dogma of the desktop interface is now a thing of the past, and mobile is heralding a future of drastically simpler and better UI. The net result is not only more instagrams browsed, it’s more knowledge shared and learned. The fact that I can use my voice to tell my television to play more Knight Rider is just a really, really awesome side-effect.

The Plot To Kill The Desktop

As a fan of interface design, operating systems — Android, iOS, Windows — have always been a tremendous point of fascination for me. We spend hours in them every day, whether cognizant of that fact or not. And so any paradigm shifts in this field intrigue me to no end. One such paradigm shift that appears to be happening is the phasing out of the desktop metaphor, the screen you put a wallpaper and shortcuts on.

Windows 8 was Microsoft’s bold attempt to phase out the desktop. Instead of the traditional desktop being the bottom of it all — the screen that was beneath all of your apps, which you would get to if you closed or minimized them — there’s now the Start screen, a colorful bunch of tiles. Aside from the stark visual difference, the main difference between the traditional desktop and the Start screen is that you can’t litter it with files. You’ll have to either organize your documents or adopt the mobile pattern of not worrying about where files are stored at all.

Apple created iOS without a desktop. The bottom screen here was Springboard, a sort of desktop-in-looks-only, basically an app-launcher with rudimentary folder-support. Born this way, iOS has had pretty much universal appeal among adopters. There was no desktop to get used to, so no lollipop to have taken away. While sharing files between apps on iOS is sort of a pain, it hasn’t stopped people from appreciating the otherwise complete lack of file-management. I suppose if you take away the need to manage files, you don’t really need a desktop to clutter up. You’d think this was the plan all along. (Italic text means wink wink, nudge nudge, pointing at the nose, and so on.)

For the longest time, Android seems to have tried to do the best of both worlds. The bottom screen of Android is a place to see your wallpaper and apps pinned to your dock. You can also put app shortcuts and even widgets here. Through an extra tap (so not quite the bottom of the hierarchy) you can access all of your installed apps, which unlike iOS have to manually be put on your homescreen if so desired. You can actually pin document shortcuts here as well, though it’s a cumbersome process and like with iOS you can’t save a file there. Though not elegant, the Android homescreen works reasonably well and certainly appeals to power-users with its many customization options.

Microsoft and Apple both appear to consider the desktop (and file-management as a subset) an interface relic to be phased out. Microsoft tried and mostly failed to do so, while Apple is taking baby-steps with iOS. If recent Android leaks are to be believed, and if I’m right in my interpretation of said leaks, Android is about to take it a step beyond even homescreens/app-launchers.

One such leak suggests Google is about to bridge the gap between native apps and web apps, in a project dubbed “Hera” (after the mythological goddess of marriage). The mockups posted suggest apps are about to be treated more like cards than ever. Fans of WebOS¹ will quickly recognize this concept fondly.

The card metaphor that Android is aggressively pushing is all about units of information, ideally contextual. The metaphor, by virtue of its physical counterpart, suggests a card holds a finite amount of information, after which you’re done with it and can swipe it away. Like a menu at a restaurant, it stops being relevant the moment you know what to order. Similarly, business cards convey contact information and can then be filed away. As an interface design metaphor, cards are about divining what the user wants to do and grouping the answers together.

We’ve seen parts of this vision with Android Wear. The watch can’t run apps and instead relies on rich, interactive notification cards. Android phones have similar (though less rich) notifications, but are currently designed around traditional desktop patterns. There’s a homescreen at the bottom of the hierarchy, then you tap in and out of apps: home button, open Gmail, open email, delete, homescreen.

I think it’s safe to assume Google wants you to be able to do the same (and more) on an Android phone as you can on an Android smartwatch, and not have them use two widely different interaction mechanisms. So on the phone side, something has to give. The homescreen/desktop, perchance?

The more recent leak suggests just that. Supposedly Google is working to put “OK Google” everywhere. The little red circle button you can see in the Android Wear videos will, when invoked, scale down the app you’re in and show it as a card you can apply voice actions to. Presumably the already expansive list of Google Now commands would also be available: “OK Google, play some music” to start up an instant mix.

The key pattern I take note of here is the attempt to de-emphasize individual apps and instead focus on app-agnostic actions. Matias Duarte recently suggested that mobile is dead and that we should approach design by thinking about problems to solve on a range of different screen sizes. That notion plays exactly into this. Most users probably approach their phone with particular tasks in mind: send an email, take a photo. Having to tap a home button, then an app drawer, then an app icon in order to do this seems almost antiquated compared to the slick Android Wear approach of no desktop/homescreen, no apps. Supposedly Google may remove the home button, relegating the homescreen to be simply another card in your multi-tasking list. Perhaps the bottom card?

I’ll be waiting with bated breath to see how successful Google can be in this endeavour. The homescreen/desktop metaphor represents, to many people, a comforting starting point. A 0,0,0 coordinate in a stressful universe. A place I can pin a photo of my baby girl, so I can at least smile when pulling out the smartphone to confirm that, in fact, nothing happened since last I checked 5 minutes ago.

  1. Matias Duarte, current Android designer, used to work on WebOS  

Icon Fonts Are Ruining Your Markup

Icon fonts are great. They’re more scalable than PNGs, they’re re-colorable in CSS, and it’s easier than ever to create them. But most of us are using them wrong, and it’s ruining your markup. Recognize this?

<span class="icon icon-calendar"></span>

This is fast becoming the de facto standard syntax for inserting icons in your web designs. No need to fiddle with weird glyphs. No CSS needed to insert icons, even.

No CSS.

Hold up. The promise of CSS was that we could separate presentation from markup. We could create standardized, semantic and sensible markup, that could then be completely re-skinned solely by replacing a stylesheet. That was the whole point of the CSS Zen Garden. By using nonsense spans with verbose classes, that’s out the window, all in the name of convenience. I too have been bitten by the icon-font bug. I’m into them. I think it’s so great that I can re-color icons with a line of CSS. I like how they zoom, and how they have broader support than SVGs.

But they’re not SVGs. They’re not images. We shouldn’t pretend they were, or that they could ever be as accessible as images are.

Theoretically, an icon from a font could be inserted in a semi-accessible way by outputting the actual glyph in the code to ensure copy/paste-ability. You’d also have to make sure to only use an icon that already exists in the Unicode table so screen readers could make sense of it, thus severely limiting your options as a designer. Need a hamburger menu icon? Sorry, doesn’t exist in the Unicode table. Pretty much all the benefits of using an icon font would be out the window at this point.
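For illustration, that semi-accessible approach might look something like the sketch below, using a character that actually exists in Unicode (the envelope, U+2709) rather than a glyph stuffed into a private-use code point:

<!-- The icon is real text: it survives copy/paste, and a screen reader has
     an actual character (✉, U+2709) to announce instead of an empty span. -->
<a href="/inbox">&#x2709; Inbox</a>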

Rewind for a bit. Take a deep breath and think. Why are you using an icon font in the first place? Easy HiDPI and CSS colorability? Easy to show at multiple sizes? Fair enough.

Now pretend icon fonts didn’t exist, what would you do instead? Use a PNG or GIF as a CSS background, right? You’d treat the graphics as presentational elements. Visual aids. You’d keep them separate from your markup. You’d be able to reskin your whole site with a single stylesheet. You’d keep the markup as simple and semantic as that of the CSS Zen Garden. Hopefully you’d be a good person and worry about accessibility where it matters most: in the structure of your markup.

You can still use icon fonts and have sensible markup while keeping the presentation separate. But doing so means you can’t rely on those bundled CSS helper classes. You have to do it manually; put in the work. Don’t treat icon fonts like images. Pretend they’re sprites and keep them in your stylesheet. You’ll thank me.
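For what it’s worth, here’s a minimal sketch of what that can look like. The font name and code point are hypothetical stand-ins for whatever your icon font actually ships with; the point is that the markup stays semantic while the icon lives entirely in the stylesheet.

<!-- Markup: no empty spans, no icon classes -->
<a class="calendar-link" href="/calendar">Calendar</a>

/* Stylesheet: the icon is purely presentational and can be reskinned at will */
.calendar-link::before {
  font-family: "MyIcons"; /* hypothetical icon font */
  content: "\e001";       /* hypothetical glyph code point */
  margin-right: 0.4em;
}

Swap the stylesheet and the exact same markup could fall back to a plain background-image sprite, which is rather the point.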