“Racer” allows users to put devices together and control tiny toy cars around a Hot Wheels-style stunt track by tapping and touching their own device’s screen. The track itself automatically extends across all the device screens, with a different segment of the track appearing on each screen.
Via The Verge.
I’ve been interested in these connected multi-screen experiences for quite some time as I see a lot of untapped potential there. The technical challenges to make several devices work seamlessly together always seemed rather complicated, so I’m glad to see Google experimenting in this direction, even though this experiment doesn’t seem to be publicly available yet.
Neven Mrgan provides a helpful illustration if you’re still undecided on the matter. I personally lean towards the one on the right in the middle row.
I have a soft spot for QBASIC, because it was the first programming language I picked up back when I got my first PC at the age of 14. I wanted to learn programming even earlier than that, but my parents refused to buy me a PC (and I didn’t have the spare funds to buy one myself at that age). I always wanted to be a programmer so I could write my own video games (I did get a Nintendo Entertainment System at the age of ten – talk about messed up priorities and opportunities…) and of course the first programs I wrote were computer games. These games were nowhere near as complex or refined as Black Annex, but this quote from the aforelinked article resonated strongly with me:
I didn’t want to ‘learn’ how to make a game—I realized I already knew how to make a game. I just had to go back to the tools I knew.
I haven’t tried my hand at creating a video game in more than a decade, and it’s probably because I find the process too daunting and I lack faith in my skills and tools to pull off anything worthwhile. Which is a silly reason for not even trying.
It also reminded me of an article by James Hague titled Write Code Like You Just Learned How to Program, in which he shares a fun little anecdote and reaches this interesting conclusion:
It’s extremely difficult to be simultaneously concerned with the end-user experience of whatever it is that you’re building and the architecture of the program that delivers that experience. Maybe impossible. I think the only way to pull it off is to simply not care about the latter. Write comically straightforward code, as if you just learned to program, and go out of your way to avoid wearing any kind of software engineering hat–unless what you really want to be is a software engineer, and not the designer of an experience.
There’s something mind-boggling about a fully functioning cellphone for $12 – especially if you consider Bunnie’s detailed component-cost comparison between this phone and a $29 Arduino.
The new interface is said to be “very, very flat,” according to one source. Another person said that the interface loses all signs of gloss, shine, and skeuomorphism seen across current and past versions of iOS. Another source framed the new OS as having a level of “flatness” approaching recent releases of Microsoft’s Windows Phone “Metro” UI.
If this is true (and that’s a big if – as far as I remember, 9to5Mac isn’t particularly reliable in its reporting of rumors), it could be either great or terrible. (I like my ham-fistedly skeuomorphic signifiers and affordances, thankyouverymuch, which is why I never came to terms with Android or Metro.)
Everything else aside, this just freaked me out:
iOS 7 is codenamed “Innsbruck,” according to three people familiar with the OS.
Remember Microsoft’s IllumiRoom concept from Samsung’s CES keynote back in January? They’re back in full swing at CHI 2013 and frankly, I haven’t been this excited about anything out of Microsoft Research in forever. Watch a pretty comprehensive walkthrough of its capabilities:
I really hope we’re looking at something that will find its way into the next-generation Xbox ecosystem (we should know in about three weeks), but I’m only cautiously optimistic. While the system appears to have evolved well beyond the limited confines of research labs, two limiting factors come to mind: First, there’s obviously pricing; beyond that, I imagine installing it in a living room would be quite challenging without the projector getting in your way or constantly requiring re-calibration. Nevertheless, please let us have at this, preferably before 2015.
In the meantime, if you need something to read before bedtime, have a go at their CHI paper.
Using voice, gesture, and projection, frog has turned a conference room into an environment called RoomE—a space to experience the value of room-sized computing. Computer vision and voice recognition combine to provide inputs and context to a computer, opening the way for frog to build a heads-up digital experience using light projected onto surfaces and digital control of objects.
The Greenhouse SDK is a creative coding toolkit for spatial interfaces by Oblong Industries, the company that created the g-speak platform.
I haven’t had a chance to try this yet, but it looks intriguing.