After last week’s WWDC, Apple’s Developer Conference, the whole web is brimming with commentary on the radical design changes that Apple plans to introduce with iOS 7 this fall.
I have a few (conflicting) thoughts of my own, but there’s little reason to go into them at length, considering the number of words spent on the topic across tech news websites and personal weblogs over the past week. I’ll briefly jot down my top three annoyances and concerns anyway, to revisit at a later point:
- I think the new lockscreen design is a failure, inadvertently prompting the user to swipe up instead of left to right, potentially tarnishing one of the very first interactions users will have with their new iOS 7 device. I expect this to be changed pre-release.
- I’m not a fan of the new homescreen app icons (like everyone else), but they don’t worry me too much, because they’ll probably also get fixed pre-release.
- My biggest gripe, and probably the one that won’t get fixed, is the use of text labels as buttons – I think a button should look different from a label, and you should be able to tell the two apart unambiguously at a quick glance.
Overall what we saw of iOS 7 at WWDC looked incoherent and unfinished, but then again, it probably is. A comprehensive, systemwide user interface overhaul of this magnitude is an enormous undertaking and I have little doubt that Apple’s designers and developers will keep polishing and refining until the very last minute before they ship. What we saw this week gave us a valuable preview of the design direction that iOS is headed in, but I expect things to improve significantly before iOS 7 actually ships.
One parting thought that few people seem to be talking about: I’m sad that the iOS user interface as we knew it for six years will soon be a thing of the past. For all its shortcomings it was a beautiful user interface, certainly more beautiful and polished than Android or Windows Phone in my eyes. Its look might have gotten stale over the years, and many users might have grown bored with its staunch familiarity, but the thought that it will be gone for good so soon seems almost tragic to me.
For a long time it was widely accepted in HCI circles that touchscreens and desktop environments don’t blend well. While the concept might be tempting at first glance, the ergonomics of vertically mounted touchscreens at arm’s length quickly lead to muscle fatigue over prolonged use. Touchscreens work reasonably well for brief interactions, such as at ATMs or vending machines, but you probably wouldn’t want to use one as your primary way of pointing at things on your screen for eight hours a day. There’s even a popular term for this phenomenon: the gorilla arm effect.
A number of companies have tried to defy this widely held belief and bring touchscreens and desktop environments together over the past few years – most notably Microsoft, from its early foray with Windows XP Tablet PC Edition more than ten years ago to its recent ambitions with Windows 8 – but so far these attempts have been met with poor critical reception and modest success in the marketplace.
Nevertheless, touchscreens can be a useful complement to other pointing devices. Maybe you’ve already found yourself haplessly tapping your laptop screen on occasion, conditioned by years of smartphone and tablet use. Furthermore, when you’re typing on your laptop’s keyboard, your hands are already close to the screen – I haven’t found any studies on this, but I would imagine that occasionally reaching for the screen instead of the mouse might be the more comfortable option.
The allure of touchscreens for auxiliary input on laptops is strong. Being an Apple user myself, I’ve occasionally been envious of innovative new devices such as Microsoft’s Surface or the Google Chromebook Pixel. I like to think I can’t be the only one feeling this way, and just back in February John Moltz speculated about the possibility of a touchscreen MacBook. Unfortunately, traditional desktop user interfaces such as Windows or Mac OS X have historically proven ill-suited for touchscreen interaction. The demands of legacy application support (as evident in Windows 8’s hybrid desktop environment) highlight the difficulty of adapting traditional desktop user interfaces for touchscreen input. However, going in the opposite direction and adapting tablet user interfaces for traditional laptop form factors might just work. Lukas Mathis said as much back in November 2012, when he wrote:
Most of the things required for a great touch user interface are also good ideas on the desktop. Large touch targets, fast, responsive user interfaces, a simple, intuitive information architecture, uncluttered screens that don’t offer too many different features, easily understood screenflows, lightweight applications, simplified window management — all of these things work on the desktop just as well as on a tablet.
When I first encountered his reasoning I was very skeptical. I couldn’t argue against the specific points he raised, but I was largely convinced that touch user interfaces would be rather inefficient for mouse interaction – the lower information density necessitated by larger touch targets and poor multitasking support being my most immediate concerns. Since then I’ve largely come around on this issue, though: while touch user interfaces might not be quite as efficient as desktop interfaces for expert users, the familiarity of a unified user experience across a broad range of devices would probably outweigh this disadvantage for most users. Bringing iOS to laptops would require a few adaptations, but it’s certainly more feasible than adapting Mac OS X for touch interaction. So over the past few months I’ve come to believe that, in the long run, Apple will probably replace Mac OS X with iOS across its entire device lineup (except maybe developer machines, though I expect those to fade away eventually as well).
And then, just yesterday, esteemed Apple commentator John Gruber posted this quite interesting remark:
I expect an iOS notebook eventually; I expect never to see a touchscreen MacBook.
So I guess the idea isn’t completely crazy. It’s also interesting to ponder what might have changed John Gruber’s mind on this topic, as just six months ago he wrote:
A touch-optimized UI makes no more sense for a non-touch desktop than a desktop UI makes for a tablet. Apple has it right: a touch UI for touch devices, a pointer UI for pointer (trackpad, mouse) devices.
Now I can’t help but wonder how well iOS would work with mouse input. I don’t expect Apple to add mouse support to iOS unless they have very good reason to do so (such as, you know, an iOS laptop – and I don’t really see that happening anytime soon). But iOS already supports Bluetooth keyboards out of the box, and you can hack mouse support into jailbroken iOS devices. An iPad with keyboard and mouse should already give you a pretty good idea of how an iOS laptop might work for you. Maybe it’s time to jailbreak my old first-generation iPad…
Facebook Home, an ambitious attempt to bring the social network front and center on Android smartphones by way of a new home screen experience, launched to great attention and cautious praise from the technology press less than two months ago. Since then, however, things have turned a little sour: after sluggish U.S. sales of the HTC First, the first Android smartphone to ship with Facebook Home preinstalled, Facebook just announced it would postpone the First’s U.K. launch until it has had time to improve Home. Facebook has already outlined some of the planned improvements, such as a dock for easier access to your apps – one of the more obvious shortcomings of Home in its first iteration. In its attempt to put the limelight on your Facebook friends, Facebook seemingly underestimated how much people care about their apps. Perhaps focusing on your Facebook stream is a poor design decision to begin with: where Facebook assumes beautiful pictures and captivating, relevant stories, most people’s Facebook streams are probably a horrifying mix of grainy, over-filtered camera-phone shots, tired cat memes, and spam – seeing these every time you unlock your phone isn’t a particularly appealing proposition. As Marco Arment put it:
Facebook Home was flat-out badly designed: it’s designed for optimal input and failed to consider real-world usage.
I don’t mean to be too negative about Facebook Home’s design here, though. I personally think it’s a failure in product strategy and a victim of unfavorable market dynamics more than anything. Launching a major new mobile platform without widespread manufacturer and operator support doesn’t strike me as a recipe for mainstream success. From a design standpoint, Home is probably more interesting than usable, but I’m glad that Facebook tried something bold and new rather than playing it safe, even if it hasn’t worked out quite as well as expected thus far.
Post-launch, the designers behind Home shared some insights into their design process, and I found them an interesting read. Julie Zhuo posted a post-mortem on Medium, and Marco De Sa spoke with Fast Co. Labs about their user experience testing process. The Verge ran an article about the making of Chat Heads, arguably the most successful and best-received feature of Home. Finally, David O’Brien (no affiliation with Facebook, as far as I can tell) created a series of tutorial videos on how to recreate the Home experience as interactive prototypes in Quartz Composer after he heard that Facebook used the tool internally for prototyping. Whether you consider Home a success or not, these are all worth your time.
“Racer” allows users to put devices together and control tiny toy cars around a Hot Wheels-style stunt track by tapping and touching their own device’s screen. The track itself automatically extends across all the device screens, with different segments of the track appearing on each screen.
Via The Verge.
I’ve been interested in these connected multi-screen experiences for quite some time, as I see a lot of untapped potential there. The technical challenges of making several devices work together seamlessly have always seemed rather daunting, so I’m glad to see Google experimenting in this direction, even though this experiment doesn’t appear to be publicly available yet.
Neven Mrgan provides a helpful illustration if you’re still undecided on the matter. I personally lean towards the one on the right in the middle row.
I have a soft spot for QBASIC, because it was the first programming language I picked up back when I got my first PC at the age of 14. I wanted to learn programming even earlier than that, but my parents refused to buy me a PC (and I didn’t have the spare funds to buy one myself at that age). I always wanted to be a programmer so I could write my own video games (I did get a Nintendo Entertainment System at the age of ten – talk about messed up priorities and opportunities…) and of course the first programs I wrote were computer games. These games were nowhere near as complex or refined as Black Annex, but this quote from the aforelinked article resonated strongly with me:
I didn’t want to ‘learn’ how to make a game—I realized I already knew how to make a game. I just had to go back to the tools I knew.
I haven’t tried my hand at creating a video game in more than a decade, probably because I find the process too daunting and lack faith in my skills and tools to pull off anything worthwhile. Which is a silly reason for not even trying.
It also reminded me of an article by James Hague titled Write Code Like You Just Learned How to Program, in which he shares a fun little anecdote and reaches this interesting conclusion:
It’s extremely difficult to be simultaneously concerned with the end-user experience of whatever it is that you’re building and the architecture of the program that delivers that experience. Maybe impossible. I think the only way to pull it off is to simply not care about the latter. Write comically straightforward code, as if you just learned to program, and go out of your way to avoid wearing any kind of software engineering hat – unless what you really want to be is a software engineer, and not the designer of an experience.
There’s something mind-boggling about a fully functioning cellphone for $12 – especially when you consider Bunnie’s detailed component cost comparison between this phone and a $29 Arduino.
The new interface is said to be “very, very flat,” according to one source. Another person said that the interface loses all signs of gloss, shine, and skeuomorphism seen across current and past versions of iOS. Another source framed the new OS as having a level of “flatness” approaching recent releases of Microsoft’s Windows Phone “Metro” UI.
If this is true (and that’s a big if – as far as I remember, 9to5Mac isn’t particularly reliable in its reporting of rumors), it could be either great or terrible. (I like my ham-fistedly skeuomorphic signifiers and affordances, thankyouverymuch – that’s why I never came to terms with Android or Metro.)
Everything else aside, this just freaked me out:
iOS 7 is codenamed “Innsbruck,” according to three people familiar with the OS.
Remember Microsoft’s IllumiRoom concept from Samsung’s CES keynote back in January? They’re back in full swing at CHI 2013, and frankly, I haven’t been this excited about anything out of Microsoft Research in forever. Watch a pretty comprehensive walkthrough of its capabilities:
I really hope we’re looking at something that will find its way into the next-generation Xbox ecosystem (we should know in about three weeks), but I’m only cautiously optimistic. While the system appears to have evolved well beyond the confines of the research lab, two limiting factors come to mind: first, there’s obviously pricing; beyond that, I would imagine installing it in a living room to be quite challenging without the projector getting in your way or constantly requiring recalibration. Nevertheless, please let us have this, preferably before 2015.
In the meantime, if you need something to read before bedtime, have a go at their CHI paper.