The Plot To Kill The Desktop

As a fan of interface design, I’ve always found operating systems — Android, iOS, Windows — a tremendous point of fascination. We spend hours in them every day, whether cognizant of that fact or not. And so any paradigm shift in this field intrigues me to no end. One such shift appears to be happening right now: the phasing out of the desktop metaphor, the screen you put a wallpaper and shortcuts on.

Windows 8 was Microsoft’s bold attempt to phase out the desktop. Instead of the traditional desktop being the bottom of it all — the screen beneath all of your apps, the one you’d get to if you closed or minimized them — there’s now the Start screen, a colorful bunch of tiles. Aside from the stark visual difference, the main difference between the traditional desktop and the Start screen is that you can’t litter the latter with files. You have to either organize your documents or adopt the mobile pattern of not worrying about where files are stored at all.

Apple created iOS without a desktop. The bottom screen here was Springboard, a desktop in looks only: basically an app launcher with rudimentary folder support. Born this way, iOS has had pretty much universal appeal among adopters. There was no desktop to get used to, so no lollipop to take away. While sharing files between apps on iOS is sort of a pain, it hasn’t stopped people from appreciating the otherwise complete lack of file management. I suppose if you take away the need to manage files, you don’t really need a desktop to clutter up. *You’d think this was the plan all along.* (Italic text means wink wink, nudge nudge, pointing at the nose, and so on.)

For the longest time, Android seems to have tried to offer the best of both worlds. The bottom screen of Android is a place to see your wallpaper and the apps pinned to your dock. You can also put app shortcuts and even widgets here. Through an extra tap (so not quite the bottom of the hierarchy) you can access all of your installed apps, which, unlike on iOS, have to be put on your homescreen manually if you want them there. You can actually pin document shortcuts to the homescreen as well, though it’s a cumbersome process, and as with iOS you can’t save a file there. Though not elegant, the Android homescreen works reasonably well and certainly appeals to power users with its many customization options.

Microsoft and Apple both appear to consider the desktop (and, by extension, file management) an interface relic to be phased out. Microsoft tried and mostly failed to do so, while Apple is taking baby steps with iOS. If recent Android leaks are to be believed, and if I’m right in my interpretation of said leaks, Android is about to take it a step beyond even homescreens and app launchers.

One such leak suggests Google is about to bridge the gap between native apps and web apps, in a project dubbed “Hera” (after the mythological goddess of marriage). The mockups posted suggest apps are about to be treated more like cards than ever. Fans of WebOS¹ will quickly recognize the concept.

The card metaphor that Android is aggressively pushing is all about units of information, ideally contextual ones. By virtue of its physical counterpart, the metaphor suggests that a card holds a finite amount of information, after which you’re done with it and can swipe it away. Like a menu at a restaurant, it stops being relevant the moment you know what to order. Similarly, business cards convey contact information and can then be filed away. Cards as an interface design metaphor are about divining what the user wants to do and grouping the answers together.

We’ve seen parts of this vision with Android Wear. The watch can’t run apps and instead relies on rich, interactive notification cards. Android phones have similar (though less rich) notifications, but are currently designed around traditional desktop patterns. There’s a homescreen at the bottom of the hierarchy, then you tap in and out of apps: home button, open Gmail, open email, delete, homescreen.

I think it’s safe to assume Google wants you to be able to do the same things (and more) on an Android phone as on an Android smartwatch, without the two using wildly different interaction mechanisms. So on the phone side, something has to give. The homescreen/desktop, perchance?

The more recent leak suggests just that. Supposedly Google is working to put “OK Google” everywhere. The little red circle button you can see in the Android Wear videos will, when invoked, scale down the app you’re in and show it as a card you can apply voice actions to. Presumably the already expansive list of Google Now commands would also be available: “OK Google, play some music” to start up an instant mix.

The key pattern I take note of here is the attempt to de-emphasize individual apps and instead focus on app-agnostic actions. Matias Duarte recently suggested that mobile is dead and that we should approach design by thinking about problems to solve on a range of different screen sizes. That notion plays exactly into this. Most users probably approach their phone with particular tasks in mind: send an email, take a photo. Having to tap a home button, then an app drawer, then an app icon in order to do this seems almost antiquated compared to the slick Android Wear approach of no desktop/homescreen, no apps. Supposedly Google may remove the home button, relegating the homescreen to be simply another card in your multi-tasking list. Perhaps the bottom card?
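To make “app-agnostic actions” concrete: a command like “play some music” doesn’t have to open a particular app. On Android it can already be dispatched as a generic intent that any music player may volunteer to handle. Here’s a minimal sketch of the receiving side, assuming the standard android.media.action.MEDIA_PLAY_FROM_SEARCH action and its query extra; the activity and its helper methods are hypothetical:

    // Sketch: a music app's entry point for the generic "play from search"
    // voice action. The intent action and extra are standard Android; the
    // class and helper methods are made up for illustration.
    import android.app.Activity;
    import android.app.SearchManager;
    import android.os.Bundle;

    public class PlayFromSearchActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Fired when the manifest declares an intent filter for
            // android.media.action.MEDIA_PLAY_FROM_SEARCH.
            String query = getIntent().getStringExtra(SearchManager.QUERY);
            if (query == null || query.isEmpty()) {
                startInstantMix();    // "play some music": no query, the app picks
            } else {
                searchAndPlay(query); // "play <artist or song>"
            }
        }

        private void startInstantMix() { /* app-specific */ }
        private void searchAndPlay(String query) { /* app-specific */ }
    }

The voice layer owns the verb; apps merely volunteer to fulfill it. No home button, app drawer or icon grid required.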

I’ll be waiting with bated breath to see how successful Google can be in this endeavour. The homescreen/desktop metaphor represents, to many people, a comforting starting point. A 0,0,0 coordinate in a stressful universe. A place I can pin a photo of my baby girl, so I can at least smile when I pull out the smartphone to confirm that, in fact, nothing has happened since I last checked five minutes ago.

  1. Matias Duarte, current Android designer, used to work on WebOS.

Jony’s iOS 7

Back in October last year, Scott Forstall was replaced by Jony Ive, and I asked the question: did iOS just get interesting again? Last night we found out, and the answer is yes.

[Image: the iOS 7 “design hero” screen]

There’s a lot to like about the new iOS 7. As a whole, the result looks mostly unique. There’s a nice clean aesthetic going on, with the thin Helvetica, the white UI chrome, the sandblasted layers, and the almost complete absence of gaudy textures. It’s also colorful. Which is a good thing. Right?

Leading up to this there were jungle-drums touting how flat the new UI was going to look (as though every UI will suddenly be clean and uncluttered if you just run it over with a bulldozer). Fortunately that’s not what happened. Don’t get me wrong: I do like my UIs to be clean and simple, I just find the term “flat” to be mostly meaningless when applied to design. There are no magic bullets; there’s only good design and bad design, and I think Jony Ive gets that. So instead of trumpeting flat, Apple trumpeted true simplicity. Oh, and grid-based icons.

Sure, the icons certainly sit on a grid. I was mostly paying attention to the light source for those gradients, though: why does Phone look embossed while Mail looks inset? Also: Game Center? Again?

There will be no tears shed for the linen texture. I will not mourn the loss of green felt. Still, the new iconography alone makes iOS 7 such a departure that there’s bound to be some learning curve, which raises the question: why didn’t they go further now that they were at it?

They had a real opportunity here. Jony could’ve said to his team:

Team! We’ve dominated the smartphone market for the last 5 years with a grid of round-rect icons. How do we re-think it from the ground up for the next decade? How do we create something that’ll make Samsung scramble to copy us again?

Perhaps they did just that. Conceivably they created giant mood boards. Maybe they decorated hip little cubicles with smiling model faces and photos of subway signs and collages of differently colored post-it notes. Could be they brainstormed all the places they see the mobile space going in the next ten years: creepy glasses, holographic watches, voice-controlled smart underwear. No doubt they considered the convergence of the cloud with all these new-fangled features. Perchance they arrived right back at a grid of icons: Eureka! We had it right all along!

I hope that’s not the case. I hope they had grander ideas… post-smartphone ideas. I’m hoping they were just so laser-focused on shipping on time that they had to punt their ideas for replacing Springboard. I’m hoping Jony felt the most important thing was to uproot the old linen-clad ways and set out a strong new direction for all future Apple UIs. I want to believe.

I want to believe that maybe one day we’ll have smartphones whose strongest visual cues aren’t defined by the graphical prowess of 3rd party icon designers. I want to believe that maybe one day we’ll look back at websites that use confirm() to alert us of their mobile apps as a dark age. I want to believe that maybe one day it’ll be possible to avoid all social interaction in a manner more impressive than tapping in and out of apps. Is that so much to ask?

Android, several years later

Google I/O is Wednesday, which traditionally means a peek at the next version of Android. Having used Android since version 2, I thought now would be a great time to reflect on how far it has come.

Droid

Android has been around since 2005, but it wasn’t until Android 2.0 (“Eclair”, a dessert name it would share with 2.1; Android 1.6 was “Donut”) was released alongside the Droid phone that Android started its rise to some sort of smartphone dominance. Looking back, version 2 of Android was a pretty uninspired affair with very few good apps to brag about. Some apps were crashy, and copy and paste wasn’t available everywhere and wasn’t particularly good where it was. The experience as a whole felt sluggish and laggy.

What made it worth getting instead of the iPhone, however, was the fact that everything synced as soon as you were logged in with your Google account. There was not a trace of iTunes, and did I mention the superior turn-by-turn navigation? Douchey hipsters would ask why anyone in their right mind would get an Android phone when they could buy an iPhone instead. Even back then, the answer was: sync and maps.

The Nexus Phones

[Image: an HTC Desire running CyanogenMod]

While Android 2.0 started the rivalry between Apple and Google, Android 2.1 (“Eclair”), which coincided with the Nexus One, set the war ablaze. Pinch-to-zoom was omitted due to threats of thermonuclear war, but the phone itself was still the best Android to date.

Only, there was a problem: way too little internal storage. 256 MB, if I remember correctly. This little space had to hold the entire operating system as well as all apps and application data. Which meant, of course, that you’d run out of space within days if you used the phone the way you were presumably supposed to. Android 2.2 (“Froyo”) tried to mitigate this embarrassing hardware decision by allowing you to store apps on the SD card, but since application data was still stored on the system partition, the change did little to fix the situation. Visually, Eclair received relatively minor tweaks, Froyo likewise.

The Nexus S was released alongside Android 2.3 (“Gingerbread”), and it solved most of the problems that plagued the Nexus One. There was plenty of internal storage. Copy and paste was now unified across the operating system. There was a new, darker and flatter skin that made the experience a bit more elegant, though the design felt weirdly half-baked. As a whole, the phone felt snappier, more coherent, and generally more pleasant.

Only, once again there was a problem. The stock Android browser bundled with the Nexus S was optimized for Snapdragon processors, not the Hummingbird processor the Nexus S actually shipped with, so browsing anything not mobile-optimized was slower than on the Nexus One. You had to go out of your way to find an alternative (inferior) browser such as Dolphin. Not cool.

The Honeycomb Detour

We eventually found out what ailed the Nexus S. Google was busy making a tablet-friendly version of Android, and either didn’t have time to completely optimize the Nexus S, or simply chose to focus on the tablet instead. Matias Duarte, the original designer for WebOS, had been brought in to spearhead a strong visual direction for Android 3.0, “Honeycomb”. At the time, Gingerbread was just about ready to ship, and Honeycomb development was already underway. So the half-baked feeling that came with Gingerbread was due to the furious race toward the tablet.

For the very same reason, Andy Rubin had made the call that Honeycomb would be tablet-only. There simply wouldn’t be time to scale the experience down to the phone form factor; that would have to happen in a later release. There was a lot to like about the end result, but arguably more to dislike. Regardless, a strong direction had been set, and difficult structural decisions were in place.

Goodbye, Menu Button

Cue Android 4.0, “Ice Cream Sandwich”.

Like a sandwich, Android 4 was a combination of things: it was for both phones and tablets, and it drastically iterated on the Honeycomb UI. The spacey clock was now minimalist, and the pretty terrible Tron font had been replaced with a custom Helvetica-esque font, “Roboto”. Applications, icons, even menu items were given a strong design direction, and the result for apps that used the new “Holo” theme was pretty gorgeous. Ice Cream Sandwich was released with the Samsung Galaxy Nexus, and later rolled out to the Nexus S (complete with a stock browser that was finally optimized for the Hummingbird processor).

Impressively, Ice Cream Sandwich managed to shed some of the legacy shackles that had held back earlier Androids. The Menu button, once a requirement on Android phones, was now frowned upon, and developers were asked not to rely on it. Every menu item would come with an icon and be shown directly in the action bar if there was room (landing in the Action Overflow menu if there wasn’t). The death of the menu button was welcome, since the button itself was the epitome of mystery meat navigation. Ironic, then, that toolbar items would be icon-only. Still, Ice Cream Sandwich was a huge release with fundamental and difficult changes to Android, necessary for the platform to stay competitive.
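For developers, the migration was mostly mechanical. Here’s a minimal sketch of the post-Menu-button pattern, assuming a stock Activity; the menu labels are made up, while the showAsAction flags are the real SDK constants:

    // Sketch: populating the ICS action bar instead of relying on the
    // hardware Menu button. Items appear in the action bar when there is
    // room and land in the Action Overflow when there is not.
    import android.app.Activity;
    import android.view.Menu;
    import android.view.MenuItem;

    public class ComposeActivity extends Activity {
        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            menu.add("Send")
                .setIcon(android.R.drawable.ic_menu_send)
                .setShowAsAction(MenuItem.SHOW_AS_ACTION_IF_ROOM);
            menu.add("Settings")
                .setShowAsAction(MenuItem.SHOW_AS_ACTION_NEVER); // overflow only
            return true;
        }
    }

Note that SHOW_AS_ACTION_IF_ROOM promotes the icon, not the label, into the bar, which is exactly where the icon-only irony above comes from.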

Waltz

For every problem an Android release would solve, however, new problems would become apparent, and Ice Cream Sandwich was no different. Like a waltz: two steps forward, one step back. While the menu button had been killed, the problems with the back button had become increasingly apparent. I’m not even going to try to explain how the back button works, but here’s a chart:

[Chart: back-button navigation between sibling screens in the Android Market]

It’s not optimal. But it’s certainly fixable. Especially on the Galaxy Nexus, where the buttons are software. If killing the back button is on the … menu … then it’s possible. If not, there has to be a way to make its behavior more predictable.
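One way to make Back more predictable is to stop trusting incidental history and synthesize a sensible back stack whenever you deep-link into an app; the support library’s TaskStackBuilder does exactly this. A sketch, with a hypothetical DetailActivity whose parent activity is declared in the manifest:

    // Sketch: deep-linking into DetailActivity (hypothetical) while
    // rebuilding its parent chain, so pressing Back walks up through
    // the app instead of landing somewhere unpredictable.
    import android.content.Context;
    import android.content.Intent;
    import android.support.v4.app.TaskStackBuilder;

    public final class DeepLink {
        public static void openDetail(Context context, long itemId) {
            Intent detail = new Intent(context, DetailActivity.class)
                    .putExtra("item_id", itemId);
            TaskStackBuilder.create(context)
                    // Reads android:parentActivityName from the manifest
                    .addNextIntentWithParentStack(detail)
                    .startActivities();
        }
    }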

In a similar vein, now that Android is beautiful, it’s becoming increasingly clear that most developers don’t care about optimizing their apps for Android. Most apps aren’t using the new Holo theme (which is legitimately beautiful). There are notable exceptions — Tasks, Foursquare, Pocket — but even first-party apps like Google Listen haven’t been updated to the new 4.0 SDK level. If Google can’t eat their own dog food, how can they expect developers to?
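The foot-dragging is all the stranger given how small the opt-in is: normally a single manifest attribute (android:theme="@android:style/Theme.Holo.Light") once the app targets SDK level 11 or higher. The programmatic equivalent, as a sketch:

    // Sketch: opting an activity into Holo at runtime. setTheme() must run
    // before any views are created; the manifest attribute is the usual
    // (and better) place to declare this.
    import android.app.Activity;
    import android.os.Bundle;

    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            setTheme(android.R.style.Theme_Holo_Light);
            super.onCreate(savedInstanceState);
            // setContentView(...) as usual from here on.
        }
    }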

Jelly Bean

Wednesday is Android 4.1 day, and it’ll be interesting to see how Google intends to tackle the problems facing their platform. Perhaps it’s time to mimic Apple and create the “Android Design Awards”, showcasing well-designed Android 4 apps in the Market. Might as well give developers a reason to update the SDK level.

There’s also the problem of timely updates. As it turns out, an operating system running on an ARM processor is fundamentally different from one that runs on, say, an Intel processor. On the latter, you can ship one OS distribution that installs on practically every Intel-based PC, because the PC gives you a standardized way to boot and discover hardware. The ARM world has no such standard, so the operating system has to be tailored to each specific chipset and device. Which incidentally explains why you won’t be able to install Windows RT (Windows 8 for ARM) yourself. So how can Apple do it? Well, they build everything themselves, so they don’t have to target more than their own hardware.

Still, all of that is just software. Software is written by humans. We tell software what to do. If updates for Android are hard because there’s no generic interface to the ARM hardware, then make one. Whatever you do, Google, the next big challenge on your table is making Android easy to update.

Hey Google? One more thing. It would be nice if the Nexus phones you make weren’t so big that they don’t fit in my pockets.

Scrollbars

Smartphones don’t have permanently visible scrollbars. Neither does OSX Lion (unless you’re using a mouse, in which case they pop back in). On the phone there’s a space issue, so the lack of scrollbars seems a fair tradeoff. On the desktop there’s no such space issue. So why make the tradeoff?

If Microsoft’s vision for the future, Surface, turns out to be at all true (and that remains to be seen), soon there will be no desktop. Fine, but tablets do still have room for scrollbars, so why not enable them there?

Let’s look at the pros and cons. On the list of reasons why hiding the scrollbar is a good thing, I have this (and feel free to augment this in the comments):

  • It’s prettier. Less UI is often a good thing. If you don’t miss it, then you have a better experience for it.
  • It’s consistent with phones and tablets (from the same vendor) and gives a sense of coherence.
  • If the future is indeed touch-based (as in: your future desktop is a docked tablet or phone), developers should probably start yanking out hover-induced menus now and make their scrollpanes indicate overflow when no scrollbar is visible. Having a desktop OS that mimics this is, I suppose, a helpful reminder of what may be coming.

Still, the scrollbar has been around for a while. In fact I would argue it’s a cornerstone in modern GUIs. Such a thing should not be buried willy-nilly. Here are reasons to keep the scrollbar visible at all times:

  • I can think of many ways to indicate that there’s more content to be seen, but none of them are as easy to understand as the scrollbar.
  • A scrollbar doesn’t have to be 18px wide, opaque, with a huge inset gutter, so long as it looks like a scrollbar. In fact, if only Lion scrollbars didn’t fade out completely, this post would probably not have been written.
  • A permanently visible scrollbar, by virtue of its relative height, will sit silently at the side of your view and cue you in on how much content remains to be seen. No bottom shadow or clipped content will indicate that. It’s like a minimap of your document (see the sketch after this list).
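Android, for what it’s worth, already treats the fading scrollbar as a per-view choice rather than a platform law. A minimal sketch of keeping that minimap permanently on screen, assuming a plain ScrollView:

    // Sketch: a ScrollView whose scrollbar never fades out, so the thumb
    // keeps working as a rough minimap of the remaining content.
    import android.content.Context;
    import android.widget.ScrollView;

    public final class Scrollbars {
        public static ScrollView alwaysVisible(Context context) {
            ScrollView view = new ScrollView(context);
            view.setVerticalScrollBarEnabled(true);
            view.setScrollbarFadingEnabled(false); // do not fade after scrolling stops
            return view;
        }
    }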

It’s not that I love scrollbars. Most of them are pretty ugly. Scrollbars, as we’ve grown to know them, can be especially hideous when shown on dark designs. Still, I’m not entirely convinced the solution to this challenge is to hide them. That sounds like mystery meat navigation to me.

Next for Chrome OS

Remember Chrome OS? No? That’s okay. It’s Google’s Chrome browser with a stub of Linux underneath, making the browser the operating system. I believe in this thing, not for myself, but for my mom; the ability to hold down a couple of buttons while it’s booting to format the entire system and reinstall it from the cloud, keeping all personal files intact, is pretty much the perfect mom computer. Still, Chrome OS and “Chromebooks” never really took off. Now Google’s trying to take it to the next step, which apparently means a window manager.

While there’s some interesting UI going on, particularly the semi-fullscreen split view and the ultra-minimal taskbar, I can’t help but feel they could’ve done something really interesting with this. I don’t think the desktop has a future.

Google Chrome, Metro-Style

Windows 8 is a pretty bold new move for Microsoft. It’s bright, vivid, touch-friendly, and puts apps and content front and center. It appears to have ditched the traditional desktop metaphor and filesystem. Apps look very different. Here’s what Internet Explorer 10 looks like:

[Image: Internet Explorer 10, Metro-style]

That new look and feel for apps is being referred to as “Metro-style”. Metro-style apps run fullscreen and navigation happens through edge-activated interfaces. While I’m concerned about discoverability for edge-activated interface controls (essentially this is classic mystery meat navigation), I do like that apps are full-screen and that Metro-style apps ditch all archaic notions of UI chrome.

Which brings me to Google Chrome, capital C. Really great browser, my browser of choice. From a high-level perspective, Google built this browser to accelerate the pace of web technology development, so that Google’s own web apps — Gmail, Calendar, Docs — could adopt newer features sooner. To that end, Google has gone to great lengths to make sure Chrome is not only cross-platform (Windows, Mac, Linux, soon Android), but that it looks native on each platform. This tenet has been taken to the extreme, actually, with Chrome on Windows XP featuring the horrible “Luna” skin, and Chrome on Linux more or less establishing GTK as the de-facto UI toolkit on the platform, just to be able to use said toolkit. It’s really quite impressive, the amount of work put into making Chrome not only look native, but be native.

Of course, we’re only on the cusp of the future. The next round of operating systems is likely to be much more mobile-inspired. Windows is blazing a trail by adopting the Windows Phone Metro UI, OSX is likely to become even more iOS-like, and Ubuntu is already exploring more touch-friendly UIs. If Google is going to keep following the path of full-on nativeness, Chrome engineers are going to have some nasty headaches in the not-too-distant future. Is it even technically possible to replace Internet Explorer 10 as your browser of choice? With Windows 8 treating HTML5 web apps as first-class citizens among native apps, it’s likely that IE is baked into the operating system more deeply than it ever was before.

It’s also an interesting mind-game, imagining what Google Chrome would look like if it were rewritten as a Metro-style Windows 8 app. The Metro-style UI is already so minimalist in layout, icon style, and even interaction patterns that it’s difficult to imagine Metro-style Chrome looking very different from IE10. The racing-car diagonal tabs, for instance, which are important to Chrome’s branding, are hard to translate to Metro-style. Though I suppose if Google were to go this way, they could make their tabs look similar to those of the Android Honeycomb browser (which likely spells out how Chrome will look on Android, once that happens).

Will it happen? I think so, but I think Google will want to play the wait-and-see game for a while. Just as Android Ice Cream Sandwich may be a make-or-break proposition for Google, I think Windows 8 is one for Microsoft. Could be that Windows 8 adoption is too slow to worry about. Could be Google’s already working on Metro-style Chrome.

The Weird Voodoo Necessary To Spawn Great Apps On Your Platform

“Android users don’t buy apps”, people will tell you. I have no idea whether that’s true, but I do know I switched to The Mac in part due to the presence of great apps, apps not present on Windows. I don’t think it’s a stretch to claim that a platform gains popularity by virtue of having great apps. Which makes launching new platforms difficult. Inherently, new platforms won’t have many apps at launch, and unless some really good ones are written fast, your platform might never take off.

Let’s define a great app as one that’s simple, beautiful, fast, stable, and solves a problem for you.

I like Windows. I’ve used it for a decade. There are window-management features I still miss, having switched. I hope Windows 8 does great. But I can’t say Windows ever had great apps; Windows had good apps. I particularly miss Directory Opus, an over-the-top-powerful file management application with integrated FTP, a regex file renamer, and too many nice features to mention. This was a good app, and I would love a Mac version. But it’s not a beautiful app. It’s got an uninspiring icon, the UI is cluttered by default, the bundled icons don’t look good, and the app itself is only as pretty as Windows’ native UI. But does it matter that an app isn’t beautiful?

My noodling on the matter says yes. During the formative months or years of a new operating system — case in point, OSX — the apps that come out will generally dictate what follows for that platform. If a slew of functional, great-looking apps come out, these apps will define where the bar is set. Once the platform becomes popular enough (for a variety of reasons, including the presence of the aforementioned apps), it will obviously attract a slew of crappy apps as well, sure. But the higher the bar was set initially, the fewer crap apps will follow. There’s simply no need to look beyond that one app that filled a niche.

Back when I was still power-using Windows, ALT-tabbing and generally working things to my liking, I was surprised at my Mac friends and their utter determination to make sure all their dock icons were pretty. Sure, I can appreciate good icon design, but an app can be good without a great icon, can’t it? This Mac-friend determination went further and involved criticising the lack of native UI in the Firefox browser, an otherwise tech-hipster darling at the time. I couldn’t have cared less back then. As Yogi Berra said: if the app is good, the app is good. Right?

Right. And also sometimes wrong. Windows has good apps, but few of them are beautiful. That’s how it’s always been. As the PC has grown from its DOS infancy, apps have improved in both features and looks. But Windows itself, although functional, was never particularly beautiful to look at. Almost as if reflecting this, neither were Windows apps. Still, it was the platform with the most apps by far, and probably still is. The downside is that most of them are crap. Google “windows video converter” and you’ll get more results than is funny. How are you going to find the one good one among them?

The Mac, on the other hand, made a clean break with OSX. Apps had to be rewritten from scratch, and the operating system itself had received a “lickable” design — it was very pretty to look at by yesteryear’s standards. The Mac was in a bad place at the time, marketshare-wise, so the trickle of new OSX-ready apps wasn’t overwhelming. Still, because of the clean break and the presence of a userbase, apps did appear. For some reason, these apps were simple, beautiful, and user-friendly. Like the OS. You’d think the Mac developers of the time felt their apps should reflect the sense of taste the OS itself exuded. Whatever happened, a philosophy of building the one app to rule each niche seems to have been born at this time. Microsoft never made such a clean break with Windows, so there was never an opportunity for developers to stop and rethink their apps, and the standard for “pretty” was never very high. The result is a billion apps that do the same thing, because no developer filled a niche in any significant fashion.

I sound like a long-time Apple lover, which I’m not. I switched to The Mac because of the UNIX command line. Make no mistake about it: there are things about The Mac Way that I sincerely loathe. OSX Lion, for example, is the worst $29 I’ve spent in years. I’m also firmly entrenched with The Android; the Gmail app and seamless syncing are enough to ensure that.

But thinking about the weird voodoo necessary for a new platform to take off, it’s really hard to get around the app portfolios of the Mac and the iPhone, and the standard they’ve set. While it’s all a bunch of evening noodling and gut feelings, it tells me that if you want great apps on your platform, you need to combine a beautiful UI with a clean break. It appears Microsoft may be taking this route. Android, take note.