There’s something tantalizing, almost irresistible about the new Nokia N9 to me. Its polished exterior and unique look in cheery colors are unmistakably Nokia and so very distinct from all the Korean iPhone lookalikes.
Even more tantalizing than the quality of its hardware engineering is its unique position as MeeGo’s swan song, the first and last device to demonstrate the culmination of years of Nokia’s platform development. While I have my doubts about the actual day-to-day utility and usability of the Harmattan UI, it looks like it could have been a solid foundation for future iteration and refinement.
Back when Stephen Elop announced Nokia’s switch to Windows Phone 7 in February, many in the tech press lauded the move. MeeGo had been in development hell for years with little to show for it. A recent Businessweek profile of Stephen Elop quotes Nokia Chief Development Officer Kai Oistämö saying, “MeeGo had been the collective hope of the company [...] and we’d come to the conclusion that the emperor had no clothes. It’s not a nice thing.” Back then only Nokia insiders could assess whether MeeGo had any legs, but looking at the N9 as it was introduced this week, it’s hard to believe that it made such remarkable strides, from burning platform to what they are showing now, in four short months.
Unfortunately we will in all likelihood never find out what MeeGo could have been, as the N9 is destined for failure. Nokia made it abundantly clear in recent months that they’re betting their future on Windows Phone 7. There won’t be a thriving ecosystem of software, services, updates and accessories around the N9. It’s hard to imagine that consumers, operators and, above all, developers will flock to a dying platform.
Parsing Nokia’s marketing speak (from this Wired UK article) strongly suggests that the N9 does indeed mark the end of the road:
As to how many future devices will run on the MeeGo platform, unfortunately, we do not comment on unannounced devices or our planned product roadmap. We had previously stated that we would bring a MeeGo operated device to market during 2011 and that is exactly what we achieved with yesterday’s announcement of the Nokia N9.
It seems the N9’s raison d’être rests somewhere between making good on old promises and the pride of Nokia’s old guard in proving to the world that years of development resulted in something good enough to bring to market. But still… between Nokia’s marketing push and the overwhelmingly positive press reception it’s hard to fathom the N9 as stillborn. Just look at the developer UX guidelines – all these efforts and resources for a hopeless platform, to be ignored by most developers for lack of traction?
Despite all its potential shortcomings, the N9 is almost worth buying for being such a singular technological dead end.
GROUP is a collective sound work that starts on individual mobile devices and ends with participants coming together for a large-scale gathering at 12:45 PM on June 21, 2011 near the corner of Wall and Broad streets. Anyone with an iPhone or iPod Touch can download the GROUP app from the Apple App Store and be a part of this experience. Participants will start the app on the morning of June 21st and keep it running until 12:45 PM, when they congregate in front of the Stock Exchange. The piece begins with a dense drone that sheds layers throughout the day and then transforms into a monumental sound as hundreds of participants come together with their sounding devices to activate the hallowed downtown area. GROUP is a collaborative project between Aaron Siegel and Larry Legend.
- RT @bldgblog: Chinese architects "secretly copy" an entire Austrian town listed by UNESCO: http://t.co/Dd8UV5n (via @otolythe)
- This, exactly: http://t.co/fpUeI53 (well, except i don't care as much about the Delany bit…)
- "The slogan for the Chromebook should be: 'The computer for the rest of them.'" http://t.co/gyDdsqH
- How to cross Dublin without passing a pub: http://t.co/LR6yReB
- Most Common iPhone Passcodes – 1234 is as popular as you might expect. http://j.mp/jYLkPJ
- Apple iCloud icon golden ratio: http://t.co/GIE88Tu
- I find it troubling that Airbus envisions harvesting passengers' energy… http://t.co/XJ618JA
- RT @AW_FT: Behold the frightening real time retail exploits of women the world over right here: http://www.net-a-porter.com/live
- Also, can't believe the NYT would describe this attack as "especially ingenious"… http://t.co/dSnVHny
- Wow. Just wow. That's why i don't do online banking. http://t.co/1VYaTdI
- RT @benhammersley: WIRED Germany is go. Viel Glück @tknuewer!
- RT @warrenellis: Immaterials (vid): Jones & Schulze of BERG shout at you about photons & bacon for an hour http://bit.ly/kXGQnO #robotweet
- Andy Baio interviews Telehack's creator: http://t.co/VhDSNkp
- RT @odannyboy: Microsoft's NUAds: gestural advertisements. I know one gesture I'd like to give this. http://is.gd/CySHHZ
- RT @kellan: OH: "Woke up the other day and realized that the Singularity isn't going to happen and I have to plan for getting old."
- RT @twrbrdg_itself: Hello! I've changed my username. It's not pretty, but I hear there was some kerfuffle, and I don't like causing trouble.
- RT @julian0liver: I've written a text in response to Kevin Slavin's talk on #AR now known as That Talk by Kevin Slavin: http://t.co/gu2mScs
- Receipts as paper apps: http://t.co/BPvy02i
I hope kids are still finding some way, despite Google and Wikipedia, of not knowing things. Learning how to transform mere ignorance into mystery, simple not knowing into wonder, is a useful skill. Because it turns out that the most important things in this life — why the universe is here instead of not, what happens to us when we die, how the people we love really feel about us — are things we’re never going to know.
The project website lays out the whole design process very nicely.
You know, this one. Three responses you might be interested in, by…
Expressing discontent at the ocular focus of Visual AR is like giving Painting a hard time because it denies the rich plethora of experiential possibilities afforded by audio, and insisting that all paintings should ship with a small orchestra. All the while paintings have been ‘augmenting reality’ for centuries, changing the way people see the world, even their own faith…
there was something missing that i think would add even more weight to your argument and i wanted to throw it out there for future conversation. i think it’s problematic, for the construction of your thesis, to counter the “it’s all in the eyes” perspective with its polar opposite “it’s all in the brain” because most visual research of the past few decades shows that it’s actually a little of both.
I think the key takeaway point is in Slavin’s suggestion that “reality is augmented when it feels different, not looks different” – which basically echoes Marcel Duchamp’s (almost) century-old contempt for the ‘retinal bias’ of the art market. If AR development (thus far) is lacking imagination, perhaps the problem is that we’re very much tethering the medium to our antiquated VR pipe dreams and the web browser metaphor.
I have to wonder though if they really think that punishing people for reading more by making it harder to read the text is a rewarding game mechanic for interactive ebooks. I personally very much doubt this…
[P]ay $20 if you think you’ll get $20 of use out of the app. That is the only meaningful criterion to use.
Direct interaction with everyday objects augmented with artificial affordances may be an approach to HCI capable of leveraging natural human capabilities. Rich Gold once described ubiquitous computing as an “enchanted village” in which people discover hidden affordances in everyday objects that act as human interface “prompt[s]” (R. Gold, “This is not a pipe.” Commun. ACM 36, July 1993.). In this project we explore the reverse scenario: a ubiquitous intelligence capable of discovering and instantiating affordances suggested by human beings (as mimicked actions and scenarios involving objects and drawings). Miming will prompt the ubiquitous computing environment to “condense” on the real object, by supplementing it with artificial affordances through common AR techniques. An example: taking a banana and bringing it closer to the ear. The gesture is clear enough: directional microphones and parametric speakers hidden in the room would make the banana function as a real handset on the spot.
In other words, the aim of the “invoked computing” project is to develop a multi-modal AR system able to turn everyday objects into computer interfaces / communication devices on the spot. To “invoke” an application, the user just needs to mimic a specific scenario. The system will try to recognize the suggested affordance and instantiate the represented function through AR techniques (another example: to invoke a laptop computer, the user could take a pizza box, open it and “type” on its surface). We are interested here in developing a multi-modal AR system able to augment objects with video as well as sound using this interaction paradigm.
“[A] ubiquitous intelligence capable of discovering and instantiating affordances suggested by human beings” – i had to read that twice before i could wrap my head around it…