I’ve been thinking about moving this place over to wordpress.com for a while and I’ve finally made the jump. If you’ve been following along, I’m now blogging at engadgeted.wordpress.com. The full archive is there and I’ll probably shut down this place soon and start redirecting. If you’re subscribed to my RSS feed (much appreciated) I’d kindly suggest you subscribe to my new weblog over there.
I’m not sure how comfortable I’ll be posting on a hosted service, but I’ll be giving it a try for now.
Stephen Wolfram introduces the Wolfram Language – a “general-purpose knowledge-based language”. It looks very impressive, but then again, most demos do until you finally get your hands on the actual product. Nevertheless, I would have loved this as a kid learning to program. Tight and responsive feedback loops, lots of capabilities out of the box, immediate results – the very things so often missing from contemporary programming environments. In a way it reminds me of what Bret Victor described in his essay on Learnable Programming.
Origami is a free toolkit for Quartz Composer—created by the Facebook Design team—that makes interactive design prototyping easy and doesn’t require programming.
Origami, a prototyping tool by the Facebook team that did Paper. Looks quite useful.
I don’t think we want our bodies to be UIs.
The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience.
LG introduced a line of new smart appliances at CES that you can text and chat with. The idea is to allow people to communicate with their home appliances in natural language through well-established and understood communication channels.
While my initial gut reaction was to dismiss the idea as silly internet fridges for the social media age, the more I think about it, the more I can’t shake the feeling there might just be something there. Lots of people seem to really like text messaging as a means of communication and I don’t see why that predilection wouldn’t translate from communicating with people to communicating with machines.
While natural language interaction is still a bit of a novelty, it has gained traction in recent years (in no small part thanks to Siri). We still don’t have a very good idea of how we’re supposed to control and interact with a growing number of smart, connected devices in our homes. For a long time it appeared to me that both researchers and developers favored the intelligent, autonomous agent model, where smart devices adapt to their owners’ needs on their own, as if they could somehow magically read their minds. I never really bought into this particular vision because these autonomous software agents generally cause more trouble than they’re worth as soon as anything goes wrong, and things inevitably do go wrong from time to time.
In addition, most existing approaches are limited in interoperability, isolated in their respective manufacturers’ service silos (what Jean-Louis Gassée recently called the basket of remotes problem). Using a reasonably open and widespread communication channel (such as text messages), with natural language interaction substituting for rigid, proprietary, and undocumented protocols, could solve this problem.
The lowly text message of yesteryear as the glue of tomorrow’s Internet of Things, quite a thought.
Anyway, I would be remiss not to mention that Ericsson has been doing some interesting research in this area for the past few years on what they call a social web of things.
As a counter to the fantasy-laden future worlds generated by our industry, I’d like to propose a design approach which I call ‘The Future Mundane.’ The approach consists of three major elements, which I will outline below.
- The Future Mundane is filled with background talent.
- The Future Mundane is an accretive space.
- The Future Mundane is a partly broken space.
Source: The Future Mundane by Nick Foster.
A useful discussion of functional and formal design dimensions, which are often conflated when talking about design:
There is plenty of good design that is ugly, and of course there’s good design that both works well and looks pretty. But a design that doesn’t work can never be substantially good — ugly and broken is just worthless crap, and pretty and broken is phony or kitsch:
Source: Learning to See by Oliver Reichenstein.
I can never remember if we are supposed to live each day as if it were our last, or if it’s the first day of the rest of our lives. It’s hard to tell sometimes.
We’re surrounded by objects and systems that are too big or too opaque to understand — everything from the global banking system, to the Edgerank algorithm Facebook uses to order your newsfeed [...] And the effect of this alienation is felt subtly: I believe it means we can never build a good mental model of the technologies we use. We’re constantly having our expectations slightly violated, we feel a little itchy, like we don’t fit comfortably in our own world.