The future of computing is mobile, they say, and they’re not referring to that lovely city in Alabama. Being a fan of smartphones and their UI design, I’ve considered myself mostly in the loop with where things were going, but recently it’s dawned on me I might as well have been lodged in a cave for a couple of years, only to just emerge and see the light. The future of computing is more mobile than I thought, and it’s not actually the future. (It’s now — I hate cliffhangers).

I had a baby a few years ago. That does things to you. As my friend put it, parenthood is not inaccurately emulated by sitting down and getting up again immediately. It’s a mostly constant state of activity. Either you’re busy playing with the child, caring for it, planning for how you’re going to care for the child next, or you’re worrying. There’s very little downtime, and so that becomes a valuable commodity in itself.

One thing parents do — or at least this parent — is use the smartphone. I put things in my calendar and to-do list because if it’s not in my calendar or to-do list I won’t remember it. I don’t email very much, because we don’t do that, but I keep Slack in my pocket. I take notes constantly. I listen to podcasts and music on the device, I control what’s on my television with it, and now I’m also doing my grocery shopping online. (I’d argue that last part isn’t laziness if you have your priorities straight, and having kids comes with an overwhelming sense that you do — it’s powerful chemistry, man.)

So what do I need my laptop for? Well, I’m an interface designer, so I need a laptop for work. But when I’m not working I barely use it at all. To put it differently: when I’m not working, I don’t need a laptop at all, and if I were in a different business I’d probably never pick one up. There’s almost nothing important I can’t do on my phone instead, and oftentimes the mobile experience is better, faster, simpler. By virtue of there being less real estate, there’s just not room for clutter. It requires the use of design patterns and a focus on usability like never before. Like a sculptor chipping away every little piece that doesn’t resemble the statue, a good mobile experience has to simplify until only what’s important remains.

It’s only in the past couple of years that the scope of this shift has become clear to me, and it’s not just about making sure your website works well on a small screen. Computers have always done what they were told, but the interaction has been shackled by lack of portability and obtuse interfacing methods. Not only can mobile devices physically unshackle us from desks, but their limitations in size and input have required the industry to think of new ways to divine intent and translate your thoughts into bits. Speaking, waving, swiping, tapping, throwing and winking all save us from repetitive injuries, all the while being available to us on our own terms.

I’m in. The desktop will be an afterthought for me from now on — and the result will probably be better for it. I joked on Twitter the other day about watch-first design. I’ve now slept on it, and I’m rescinding the joke part. My approach from now on is tiny-screen first and then graceful scaling. Mobile patterns have already standardized a plethora of useful and recognizable layouts, icons and interactions that can benefit us beyond just the small screen. The dogma of the desktop interface is now a thing of the past, and mobile is heralding a future of drastically simpler and better UI. The net result is not only more Instagrams browsed, it’s more knowledge shared and learned. The fact that I can use my voice to tell my television to play more Knight Rider is just a really, really awesome side-effect.
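
On the web side, “tiny-screen first” can be taken quite literally: treat the small-screen layout as the default and layer extras on when space allows. A minimal sketch of that idea (my own illustration; the breakpoint and class name are invented):

```typescript
// Tiny-screen first: the baseline layout is the default; extras are
// opt-in when a media query says there's room. (Illustrative only.)
const wide = window.matchMedia("(min-width: 720px)");

function applyLayout(isWide: boolean): void {
  // A hypothetical sidebar only exists on larger screens.
  document.body.classList.toggle("with-sidebar", isWide);
}

applyLayout(wide.matches);
wide.addEventListener("change", (e) => applyLayout(e.matches));
```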

The Plot To Kill The Desktop

As a fan of interface design, operating systems — Android, iOS, Windows — have always been a tremendous point of fascination for me. We spend hours in them every day, whether cognizant of that fact or not. And so any paradigm shifts in this field intrigue me to no end. One such paradigm shift that appears to be happening is the phasing out of the desktop metaphor, the screen you put a wallpaper and shortcuts on.

Windows 8 was Microsoft’s bold attempt to phase out the desktop. Instead of the traditional desktop being the bottom of it all — the screen that was beneath all of your apps, which you would get to if you closed or minimized them — there’s now the Start screen, a colorful bunch of tiles. Aside from the stark visual difference, the main difference between the traditional desktop and the Start screen is that you can’t litter it with files. You’ll have to either organize your documents or adopt the mobile pattern of not worrying about where files are stored at all.

Apple created iOS without a desktop. The bottom screen here was Springboard, a sort of desktop-in-looks-only, basically an app-launcher with rudimentary folder-support. Born this way, iOS has had pretty much universal appeal among adopters. There was no desktop to get used to, so there was no lollipop to take away. While sharing files between apps on iOS is sort of a pain, it hasn’t stopped people from appreciating the otherwise complete lack of file-management. I suppose if you take away the need to manage files, you don’t really need a desktop to clutter up. You’d think this was the plan all along. (Italic text means wink wink, nudge nudge, pointing at the nose, and so on.)

For the longest time, Android seems to have tried to offer the best of both worlds. The bottom screen of Android is a place to see your wallpaper and the apps pinned to your dock. You can also put app shortcuts and even widgets here. Through an extra tap (so not quite the bottom of the hierarchy) you can access all of your installed apps, which unlike on iOS have to be put on your homescreen manually if so desired. You can actually pin document shortcuts here as well, though it’s a cumbersome process, and as with iOS you can’t save a file there. Though not elegant, the Android homescreen works reasonably well and certainly appeals to power-users with its many customization options.

Microsoft and Apple both appear to consider the desktop (and file-management as a subset) an interface relic to be phased out. Microsoft tried and mostly failed to do so, while Apple is taking baby-steps with iOS. If recent Android leaks are to be believed, and if I’m right in my interpretation of said leaks, Android is about to take it a step beyond even homescreens/app-launchers.

One such leak suggests Google is about to bridge the gap between native apps and web-apps, in a project dubbed “Hera” (after the mythological goddess of marriage). The mockups posted suggest apps are about to be treated more like cards than ever. Fans of WebOS[1] will fondly recognize this concept.

The card metaphor that Android is aggressively pushing is all about units of information, ideally contextual. The metaphor, by virtue of its physical counterpart, suggests a finite amount of information, after which you’re done with the card and can swipe it away. Like a menu at a restaurant, it stops being relevant the moment you know what to order. Similarly, business cards convey contact information and can then be filed away. As an interface design metaphor, cards are about divining what the user wants to do and grouping the answers together.

We’ve seen parts of this vision with Android Wear. The watch can’t run apps and instead relies on rich, interactive notification cards. Android phones have similar (though less rich) notifications, but are currently designed around traditional desktop patterns. There’s a homescreen at the bottom of the hierarchy, then you tap in and out of apps: home button, open Gmail, open email, delete, homescreen.

I think it’s safe to assume Google wants you to be able to do the same (and more) on an Android phone as you can on an Android smartwatch, and not have them use two widely different interaction mechanisms. So on the phone side, something has to give. The homescreen/desktop, perchance?

The more recent leak suggests just that. Supposedly Google is working to put “OK Google” everywhere. The little red circle button you can see in the Android Wear videos will, when invoked, scale down the app you’re in and show it as a card you can apply voice actions to. Presumably the already expansive list of Google Now commands would also be available: “OK Google, play some music” to start up an instant mix.

The key pattern I take note of here is the attempt to de-emphasize individual apps and instead focus on app-agnostic actions. Matias Duarte recently suggested that mobile is dead and that we should approach design by thinking about problems to solve on a range of different screen sizes. That notion plays exactly into this. Most users probably approach their phone with particular tasks in mind: send an email, take a photo. Having to tap a home button, then an app drawer, then an app icon in order to do this seems almost antiquated compared to the slick Android Wear approach of no desktop/homescreen, no apps. Supposedly Google may remove the home button, relegating the homescreen to being simply another card in your multi-tasking list. Perhaps the bottom card?

I’ll be waiting with bated breath to see how successful Google can be in this endeavour. The homescreen/desktop metaphor represents, to many people, a comforting starting point. A 0,0,0 coordinate in a stressful universe. A place I can pin a photo of my baby girl, so I can at least smile when pulling out the smartphone to confirm that, in fact, nothing happened since last I checked 5 minutes ago.

  1. Matias Duarte, current Android designer, used to work on WebOS.

Good Decisions, Else Options

There’s a mantra in the WordPress development community: decisions, not options. It’s meant to be a standard to which you hold any interface design decision: if you make a decision for users it’ll ultimately be better than forcing them to make decisions themselves. It’s a decent mantra — if you’re not mindful you’ll end up with feature creep and UI complexity, and it’s orders of magnitude more difficult to remove an old option than it is to add one in the first place. Adding an option instead of making a decision for the user is almost always bad UI design.

Except when it’s not.

The problem with a mantra like this is that it quickly gets elevated to almost biblical status. In the hands of a disgruntled developer it can shoot down just about any initiative. Like Godwin’s law for WordPress: once you drop the “decisions, not options” bomb, rational discussion comes to a halt.

The thing about open source development is that it’s much like natural evolution: it evolves and adapts to changes in the environment. Unfortunately that also means features once useful can become vestigial when the problem they solved goes away. Baggage like this can pile up over the years, and maintaining backwards compatibility means it can be very difficult to get rid of. Sure, “decisions, not options” can help cauterize some future baggage before it happens, but it’s not a blanket solution either.

The problem is: sometimes the right decision is unfeasible, beckoning an option in its absence. WordPress is many things to many people. Some people use it for blogging; others use it for restaurants, portfolios, photo galleries, intranets, you name it. Every use case has its own set of needs and workflows, and it’s virtually impossible to make a stock experience that’s optimal for everyone. Most bloggers would arguably have a better experience with a slew of WordPress features hidden or removed, whereas site owners might depend on those very same features for dear life. By catering to many use-cases at once, user experiences across the board are bound to be unfocused in some way or other.

The “Screen Options” tab is an example of a feature that would probably not exist were “decisions, not options” taken at face value. Screen Options is built on the idea that not everyone needs to see the “Custom Fields” panel on their Add New Post page, yet acknowledges that some users will find that feature invaluable. It is an option added in the absence of a strong decision, such as a decision to limit WordPress to being a blogging-only tool. I consider it an example of an exception to the mantra for the good of the user. Sure, the UI could use some improvement, let’s work on that, but I really appreciate the ability to hide the “Send Trackbacks” panel.
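
Mechanically, a feature like Screen Options boils down to a per-user set of hidden panels consulted at render time. Here’s a rough sketch of that pattern (plain TypeScript rather than WordPress’s PHP, with every name invented):

```typescript
// Minimal sketch of a "Screen Options"-style toggle. Not WordPress code;
// panel ids, data attributes, and the storage key are all hypothetical.
const STORAGE_KEY = "hidden-panels";

function hiddenPanels(): Set<string> {
  return new Set(JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]"));
}

function togglePanel(id: string): void {
  const hidden = hiddenPanels();
  if (!hidden.delete(id)) hidden.add(id); // toggle membership
  localStorage.setItem(STORAGE_KEY, JSON.stringify([...hidden]));
  applyVisibility();
}

function applyVisibility(): void {
  const hidden = hiddenPanels();
  document.querySelectorAll<HTMLElement>("[data-panel]").forEach((panel) => {
    panel.hidden = hidden.has(panel.dataset.panel!);
  });
}

applyVisibility(); // e.g. hides "send-trackbacks" if the user opted out
```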

I’m a fan of WordPress. I’m a fan of good decisions, and I’m a fan of good UI design. I believe that if we relieve ourselves of arbitrary straitjackets and approach each design objective with a sense of good taste and balance, we can make excellent open source software. Cauterizing entire avenues of UI simply because they add options, however, ignores the fact that sometimes those options exist in the absence of a higher-up decision that just can’t be made, for whatever reason.

Jony’s iOS 7

Back in October last year, Scott Forstall was replaced by Jony Ive, and I asked the question: did iOS just get interesting again? Last night we found out, and the answer is yes.

There’s a lot to like about the new iOS 7. As a whole, the result looks mostly unique. There’s a nice clean aesthetic going with the thin Helvetica, the white UI chrome, the sandblasted layers and the almost complete absence of gaudy textures. It’s also colorful. Which is a good thing. Right?

Leading up to this there were jungle-drums touting how flat the new UI was going to look (as though every UI will suddenly be clean and uncluttered if you just run it over with a bulldozer). Fortunately that’s not what happened. Don’t get me wrong: I do like my UIs to be clean and simple, I just find the term “flat” to be mostly meaningless when applied to design. There are no magic bullets, there’s only good design and bad design, and I think Jony Ive gets that. So instead of trumpeting flat, Apple trumpeted true simplicity. Oh, and grid-based icons:

Sure, there’s certainly a grid there. I was mostly paying attention to the light-source for those gradients, though: why does Phone look embossed while Mail looks inset? Also: Game Center? Again?

There will be no tears shed for the linen texture. I will not mourn the loss of green felt. Still, the new iconography alone makes iOS 7 such a departure that there’s bound to be some learning curve, which raises the question: why didn’t they go further while they were at it?

They had a real opportunity here. Jony could’ve said to his team:

Team! We’ve dominated the smartphone market for the last 5 years with a grid of round-rect icons. How do we re-think it from the ground up for the next decade? How do we create something that’ll make Samsung scramble to copy us again?

Perhaps they did just that. Conceivably they created giant mood-boards. Maybe they decorated hip little cubicles with smiling model faces and photos of subway signs and collages of differently colored post-it notes. Could be they brainstormed all the places they see the mobile space going in the next ten years: creepy glasses, holographic watches, voice-controlled smart underwear. No doubt they considered the convergence of the cloud with all these new-fangled features. Perchance they arrived right back at a grid of icons: Eureka! We had it right all along!

I hope that’s not the case. I hope they had grander ideas… post-smartphone ideas. I’m hoping they were just so laser-focused on shipping on time that they had to punt their ideas for replacing Springboard. I’m hoping Jony felt the most important thing was to uproot the old linen-clad ways and set out a strong new direction for all future Apple UIs. I want to believe.

I want to believe that maybe one day we’ll have smartphones whose strongest visual cues aren’t defined by the graphical prowess of 3rd party icon designers. I want to believe that maybe one day we’ll look back at websites that use confirm() to alert us of their mobile apps as a dark age. I want to believe that maybe one day it’ll be possible to avoid all social interaction in a manner more impressive than tapping in and out of apps. Is that so much to ask?
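
For readers who have been spared it, that confirm() pattern looks roughly like this. A hypothetical sketch, not any particular site’s code:

```typescript
// The interruption in question: the page halts until you dismiss the dialog.
if (confirm("We have an app! View this page in our app instead?")) {
  // Deep link into the native app; the URL scheme here is made up.
  window.location.href = "ourapp://article/123";
}
```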

Did iOS just get interesting again?

Scott Forstall, head of iOS and apparent fan of skeuomorphism, has been booted out of Apple. Jony Ive, designer of beautiful hardware and previous critic of Apple’s software interface design, takes over:

Amazingly, it’s said that Forstall’s coworkers were so excited to show him the door that they volunteered to split up his workload — Eddy Cue takes on Siri and Maps while OS X’s Craig Federighi gets iOS. And Ive, who has cemented his reputation as a legendary industrial designer over his two-decade Apple career, gets the opportunity to refresh an iOS user experience that has stagnated over the last several generations.

It’s no secret that I’m not a fan of the current iOS design philosophy, but I’m a huge fan of Jony Ive’s design sensibilities (even if he sometimes takes it too far, like the half-sized arrow keys on his keyboards). Now I’m suddenly excited to see iOS 7.

Reimagining the omnibar

Sean Whipps reimagines Google Chrome’s omnibar. The result is super clean and minimal, and I certainly like it. What I’ve come to learn, however, is that we’re in a state of interface design flux. We’re moving away from the keyboard and toward huge touchscreen-friendly buttons and voice recognition — both of which should be welcome to sufferers of RSI. That future doesn’t look quite like the keyboard- and command-line-friendly future Jef Raskin imagined.

Android, several years later

Google I/O is Wednesday, which traditionally means a peek at the next version of Android. Having used Android since version 2, I thought now would be a great time to reflect on how far Android has come.

The Android open source project has been around since 2005, but it wasn’t until Android 2.0 (the first of the “Eclair” releases; Android 1.6 was “Donut”) shipped alongside the Droid phone that Android started its rise to some sort of smartphone dominance. Looking back, version 2 of Android was a pretty uninspired affair with very few good apps to brag about. Some apps were crashy, and copy and paste wasn’t available everywhere and wasn’t particularly good where it was. The experience as a whole felt sluggish and laggy.

What made it worth getting instead of the iPhone, however, was the fact that everything synced as soon as you were logged in with your Google account. There was not a trace of iTunes, and did I mention the superior turn-by-turn navigation? Douchey hipsters would ask why anyone in their right mind would get an Android phone when they could buy an iPhone instead. Even back then, the answer was: sync and maps.

The Nexus Phones

While Android 2.0 started the rivalry between Apple and Google, Android 2.1 (also “Eclair”), which coincided with the Nexus One, set the war ablaze. Pinch-to-zoom was omitted due to threats of thermonuclear war, but the phone itself was still the best Android to date.

Only, there was a problem: way too little internal storage. 256 MB, if I remember correctly. This little space had to hold the entire operating system, all apps, and all application data. Which meant, of course, that you’d run out of space within days if you used the phone like you were presumably supposed to. Android 2.2 (“Froyo”) tried to mitigate this embarrassing hardware decision by allowing you to store apps on the SD card, but since application data was still stored on the system partition this change did little to fix the situation. Visually, Eclair received relatively minor tweaks, Froyo likewise.

The Nexus S was released alongside Android 2.3 (“Gingerbread”) and it solved most of the problems that plagued the Nexus One. There was plenty of internal storage. Copy and paste was now unified across the operating system. There was a new, darker and flatter skin that made the experience a bit more elegant, but the design felt weirdly half-baked. As a whole, the phone felt snappier, more coherent, and generally more pleasant.

Only, once again there was a problem. The stock Android browser bundled with the Nexus S was optimized for Snapdragon processors, not Hummingbird processors. The Nexus S had the latter, so browsing anything not mobile-optimized was slower than it was on the Nexus One. You had to go out of your way to find an alternative (inferior) browser such as “Dolphin”. Not cool.

The Honeycomb Detour

We eventually found out what ailed the Nexus S. Google was busy making a tablet-friendly version of Android, and either didn’t have time to completely optimize the Nexus S, or simply chose to focus on the tablet instead. Matias Duarte, the original designer for WebOS, had been brought in to spearhead a strong visual direction for Android 3.0, “Honeycomb”. At the time, Gingerbread was just about ready to ship, and Honeycomb development was already underway. So the half-baked feeling that came with Gingerbread was due to the furious race toward the tablet.

For the very same reason, Andy Rubin had made the call that Honeycomb would be tablet-only. There simply wouldn’t be time to scale the experience down to the phone form factor; that would have to happen in a later release. There was a lot to like about the end result, but arguably more to dislike. Regardless, a strong direction had been laid down, and difficult structural decisions were in place.

Goodbye, Menu Button

Cue Android 4.0, “Ice Cream Sandwich”.

Like a sandwich combines ingredients, Android 4 combined form factors: it was for both phones and tablets. It iterated drastically on the Honeycomb UI. The spacey clock was now minimalist, and the pretty terrible Tron font had been replaced with a custom Helvetica-esque “Roboto” font. Applications, icons, even menu items were given a strong design direction, and the result for apps that used this new “Holo” theme was pretty gorgeous. Ice Cream Sandwich was released with the Samsung Galaxy Nexus, and later rolled out to the Nexus S (complete with a stock browser that was finally optimized for the Hummingbird processor).

Impressively, Ice Cream Sandwich managed to shed some of the legacy shackles that had held back earlier Androids. The Menu button, once a requirement on Android phones, was now frowned upon, and developers were asked not to rely on it. Every menu item would come with an icon and be shown directly in the action-bar if there was room (and land in the Action Overflow menu if there wasn’t). The death of the menu button was welcome, since the button itself was the epitome of mystery meat navigation. Ironic, then, that toolbar items would be icon-only. Still, Ice Cream Sandwich was a huge release with fundamental and difficult changes to Android, necessary for the platform to stay competitive.

For every problem Android releases would solve, however, new problems would become apparent. Like a waltz — two steps forward, one step back — Ice Cream Sandwich was no different. While the menu button had been killed, the problems with the back button had become increasingly apparent. I’m not even going to try and explain how the back button works, but here’s a chart:

It’s not optimal. But it’s certainly fixable. Especially on the Galaxy Nexus, where buttons are software. If killing the back button is on the … menu… then it’s possible. If not, there has to be a way to make its behavior more predictable.

In a similar vein, now that Android is beautiful, it’s becoming increasingly clear that most developers don’t care about optimizing their apps for Android. Most apps aren’t using the new Holo theme (which is legitimately beautiful). There are notable exceptions — Tasks, Foursquare, Pocket — but even first-party apps like Google Listen haven’t been updated to the new 4.0 SDK level. If Google can’t eat their own dog food, how can they expect developers to?

Jelly Bean

Wednesday is Android 4.1 day, and it’ll be interesting to see how Google intends to tackle the problems facing their platform. Perhaps it’s time to mimic Apple and create the “Android Design Awards”, showcasing well-designed Android 4 apps in the market. Might as well give developers a reason to update the SDK level.

There’s also the problem with timely updates. As it turns out, an operating system running on an ARM processor is fundamentally different from one that runs on, say, an Intel processor. Where on the latter you can make one OS distribution that installs on every Intel processor out there, ARM operating systems have to be built specifically for each chip and device. Which incidentally explains why you won’t be able to install Windows RT (Windows 8 for ARM) yourself. So how can Apple do it? Well, they build everything themselves, so they only have a handful of chips to target.

Still, all of that is just software. Software is written by humans. We tell software what to do. If updates for Android are hard to do because there’s no generic interface for the ARM CPU, then make one. Whatever you do, Google, the big next challenge on your table is making Android easy to update.

Hey Google? One more thing. It would be nice if the Nexus phones you make weren’t so big they don’t fit in my pockets.

Smartphones don’t have permanently visible scrollbars. Neither does OS X Lion (unless you’re using a mouse, in which case they pop back in). On the phone, there’s a space issue, so the lack of scrollbars seems a fair tradeoff. On the desktop, there’s no such space issue. So why the tradeoff?

If Microsoft’s vision for the future — Surface — is at all true (and that remains to be seen), soon there will be no desktop. Fine, but tablets do still have room for scrollbars, so why not enable them there?

Let’s look at the pros and cons. On the list of reasons why hiding the scrollbar is a good thing, I have this (and feel free to augment this in the comments):

  • It’s prettier. Less UI is often a good thing. If you don’t miss it, then you have a better experience for it.
  • It’s consistent with phones and tablets (from the same vendor) and gives a sense of coherence.
  • If the future is indeed touch-based (as in: your future desktop is a docked tablet or phone), developers should probably start yanking out hover-induced menus now, and make their scrollpanes indicate overflow when no scrollbar is visible (a sketch of that cue follows this list). Having a desktop OS that mimics this, I suppose, is a helpful reminder of what may be coming.
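
For what it’s worth, that overflow cue is cheap to build on the web. A minimal sketch, with the class name invented:

```typescript
// Toggle a marker class whenever content is clipped below the fold of a
// scrollbar-less pane; CSS can then fade in a bottom shadow, for instance.
function markOverflow(pane: HTMLElement): void {
  const update = () => {
    const moreBelow =
      pane.scrollTop + pane.clientHeight < pane.scrollHeight - 1;
    pane.classList.toggle("has-more-below", moreBelow);
  };
  update();
  pane.addEventListener("scroll", update);
  window.addEventListener("resize", update);
}
```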

Still, the scrollbar has been around for a while. In fact I would argue it’s a cornerstone in modern GUIs. Such a thing should not be buried willy-nilly. Here are reasons to keep the scrollbar visible at all times:

  • I can think of many ways to indicate that there’s more content to be seen, but none of them are as easy to understand as the scrollbar.
  • A scrollbar doesn’t have to be 18px wide, opaque, with a huge inset gutter, so long as it looks like a scrollbar. In fact, if only Lion scrollbars didn’t fade out completely, this post would probably not have been written.
  • A permanently visible scrollbar, by virtue of its relative height, will sit silently at the side of your view and cue you in on how much content remains to be seen. No bottom shadow or clipped content will indicate that. It’s like a minimap of your document (see the back-of-envelope sketch after this list).
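
The minimap property is just arithmetic: the thumb’s height is the visible fraction of the document times the track’s height. All numbers below are invented:

```typescript
// The thumb is to the track what the viewport is to the document.
function thumbHeight(viewport: number, content: number, track: number): number {
  return Math.max(20, (viewport / content) * track); // clamp to a usable minimum
}

// An 800px window over a 4000px page, with an 800px track: a 160px thumb,
// quietly saying that four fifths of the document remains.
console.log(thumbHeight(800, 4000, 800)); // 160
```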

It’s not that I love scrollbars. Most of them are pretty ugly. Scrollbars, as we’ve grown to know them, can be especially hideous when shown on dark designs. Still, I’m not entirely convinced the solution to this challenge is to hide them. That sounds like mystery meat navigation to me.

Next for Chrome OS

Remember Chrome OS? No? That’s okay. It’s Google’s Chrome browser with a stub of Linux underneath it, making the browser the operating system. I believe in this thing, not for myself, but for my mom; the ability to hold down a couple of buttons while it’s booting to format the entire system and reinstall it from the cloud, keeping all personal files intact, is pretty much the perfect mom computer. Still, Chrome OS and “Chromebooks” never really took off. Now Google’s trying to take it to the next step, which apparently means a window manager:

While there’s some interesting UI going on, particularly the semi-fullscreen split view and the ultra-minimal taskbar, I can’t help but feel they could’ve gone further with this. I don’t think the desktop has a future.

Icons vs. Text

Aside from petty discussions on whether Google is evil or not, design-wise it’s an exciting time to watch the evolution of their products. Services across the board are being redesigned, some from the ground-up. There are even traces of an emerging consistency between the web-services and the Android operating system.

One trend in particular I’m following with great interest: the move from text labels to icons. Here’s Gmail:

It’s interesting because Google used to be a bastion of usability, and having only icons goes against what I’ve learned on the matter (icon plus text label reads best, text label alone reads okay, icon alone reads worst). So why did Google do this?

Google’s a big company. They’re known for being data-driven. So much so, in fact, that they were once criticized for A/B testing shades of blue. Unless Larry Page has uprooted every principle the company was founded on (which I doubt), I’m pretty sure they’re watching the numbers on this one, and I’d be lying if I said I wasn’t interested in the results of their icon vs. text-label A/B tests. The fact that the icons are still there in Gmail tells me that either the negative impact of icon-only navigation was negligible, or the decision to go with icons only was forced through regardless.
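
Running such a test is mundane enough that it’s easy to picture. A sketch of how the split might be instrumented (my guess, not Google’s code; bucket names and the metric are hypothetical):

```typescript
// Deterministic bucketing: a given user always lands in the same variant.
function bucketFor(userId: string): "icons-only" | "icons-plus-labels" {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "icons-only" : "icons-plus-labels";
}

// Then log, say, time-to-click on "Archive" per bucket and compare.
console.log(bucketFor("user-42"));
```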

Icon-only navigation can be gorgeous, sure, and well-designed icons or icons based on established metaphors can be really easy to read. The trash-can, for example, is hard to get wrong. But surely some actions can’t easily be translated to icons only? Here’s the Android 4 camera app:

In the above screen capture, I’ve opened up the photo configuration pane. From left: the ellipsis means “more settings”, “SCN” is short for “scene”, the plus/minus is “exposure”, the “AW” is “white balance” set to “auto”, and the lightning is “flash mode” set to off. The icons are gorgeous, but some of them don’t read very well. “SCN” in particular means they threw in the towel on an icon.

So where did the whole “icons first” trend start? Android, maybe. From the brand new design guide:

Action bar icons are graphic buttons that represent the most important actions people can take within your app. Each one should employ a simple metaphor representing a single concept that most people can grasp at a glance.

You should really head over to the design guide; the icons really are beautiful, and they are a core aspect of Android 4 apps. To put it briefly: Android 4 apps rely heavily on the action-bar, a bar across the top of the app. On phones it features an app icon and app name on the left, and as many icon-buttons as there’s room for on the right. If there are more buttons than there’s room for, the rest go into the “action overflow” menu, behind the small ellipsis button. Tap the ellipsis and the overflowed actions are shown in a dropdown menu with their text labels. It’s discussed at length on the Android Developers blog.
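
The underlying rule, “show it if there’s room, overflow it if there isn’t”, isn’t Android-specific. Here’s the same idea sketched for the web (not the Android API; every name is invented):

```typescript
// Show as many actions as fit in the bar; the rest reappear as
// text-labelled entries in an overflow dropdown, Android-style.
function layoutActionBar(bar: HTMLElement, overflow: HTMLElement): void {
  const actions = Array.from(bar.querySelectorAll<HTMLElement>("[data-action]"));
  actions.forEach((a) => (a.style.display = "")); // reset so widths measure true
  const widths = actions.map((a) => a.offsetWidth);
  const budget = bar.clientWidth - 48; // reserve room for the ellipsis button
  let used = 0;
  actions.forEach((action, i) => {
    used += widths[i];
    const fits = used <= budget;
    action.style.display = fits ? "" : "none";
    // The overflow menu holds a text-labelled twin of every action.
    const entry = overflow.querySelector<HTMLElement>(
      `[data-for="${action.dataset.action}"]`
    );
    if (entry) entry.style.display = fits ? "none" : "";
  });
}
```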

As beautiful as icons can be, does the lack of text labels sacrifice usability? Here’s Photoshop Touch for Android:

Compare and contrast with desktop Photoshop, and the UI is a far cry from its sibling’s. Obviously the two apps are nowhere near feature parity, but UI-wise the difference is stark. The Android app relies on clean, uncluttered iconography, whereas the desktop app fills the menu bar with text-labelled dropdown menus.

I really don’t know what’s best. On the one hand, icons certainly make for a prettier UI. If screen real estate is at a premium, icons can be smaller than text labels, and their uniform size makes them easier to fit in a clean grid. Icons need no translation either, which is nice. On the other hand, a text label can say what the button does. Right there. On the button.