Archive for September, 2008
Here are two examples, both of which assume a preference for sitting close to the floor:
The ad also boasts scientific-looking heat-maps charting pressure on the derrière, proving that sitting on this chair will distribute weight more evenly than sitting directly on the tatami mat.
You’ve gotta love how the desk folds into an end table…
Unfortunately, these products are only available to Japanese residents, so the rest of us are out of luck.
This week’s Carnival of the Mobilists is being hosted at Judy Breck’s excellent Golden Swamp blog (which I generally follow — it particularly appeals to the overlap in my interests in education and mobile). Go have a look at a roundup of the past week’s best writing on all things mobile.
Here’s the kanji (Chinese and Japanese character) for tree (ki, in Japanese):
Here’s the kanji for woods, hayashi (i.e., many trees):
And here’s the kanji for forest, mori (even more trees):
Now, here’s the kanji for power, chikara:
And the kanji for cooperate, kyo (i.e., even more power):
Cultural concepts run deep. I rest my case.
As mediated experiences overtake most of our waking hours, the power of a huge mass experience in real life rises in meaning. [from the CT2 blog]
The point Kevin makes is a good one. But, to be honest, what entranced me was the use of the term mediated experience. It’s a powerful term. Experience is so defined by personal feelings and senses, and mediation so defined by intervention in or removal from personal experience, that the combination just fascinates me.
I did a little poking around online to get a sense of how mediated experience is used. I got as far as finding that it’s an expression used in psychology to refer to a person’s [internal or external] filtering of life’s experiences. But that doesn’t quite capture the punch the term carries for me.
The best I could find was this definition in Spanish:
Mediated experience es una forma de experiencia indirecta. En el caso del arte, es la experiencia donde el artista se ha interpuesto “en medio” entre la experiencia y el que la experimenta.
My very rusty Spanish understands that as:
Mediated experience is a form of indirect experience. In the case of art, it is the experience in which the artist has interposed himself “in the middle,” between the experience and the one who experiences it.
I’m not completely certain that I’ve grasped the full concept, but I am really drawn to what I see. A description of the influence that the medium itself (video, computer, television, telephone, news source, whatever) has upon how the information or event is experienced.
The most alien, shocking and awesome portion of the Opening were the mass routines. Part of this is cultural. The Koreans are good at these mass effects, and the Japanese too. It’s somewhat an East Asian thing. Historically these mass dances are designed to resemble machines. […]
That is our first reaction but I think it goes further than that. The 2008 fou drummers represent the We — the power of the collective. The West and particularly Americans have traditionally emphasized the Me — the individual. China is a culture more comfortable with the We than the Me, and here they were showing both the power of the We and its modern face — blinking LED drums. We once thought computers were about individuation, but these days we see they are about socialization as well.
More importantly, the social aspects of web 2.0 have shifted the center of gravity from Me to We. Witness books like Clay Shirky’s Here Comes Everybody. Here come 2008 Chinese drummers. The great uncertainty in the coming years is how far China will shift to the Me and how far the west will shift to the We. What the Opening Ceremonies opened up was the arrival of the We. What I heard in the pounding pulse of the drummers was not “Here come the Chinese,” but “Here comes everybody.”
Long after the winners of the gold medals are forgotten, these Olympic Opening Ceremonies will be bookmarked as the Opening Ceremonies for China itself.
Heartfelt thanks to Ruti for recalling many of the reasons I love living in Israel…
Menticulation: the chemical reaction generated by the interaction between Diet Coke and Mentos.
Menticulator: an experimental environment designed to promote menticulation.
If you like the word, you’ll love the context: Robert Woodhead’s Zero-G menticulation tests. I really believe his claim that his kids get better than A+ on their “What I Did in the Summer” essays.
And absolutely don’t miss the video.
Predicting user intention has a long history. There’s always the hope that you can train a computer to anticipate the user’s next move and launch the desired application or function at just the right moment, without requiring a user command.
The question is, how do you predict? How do you know what a user wants to do next?
The traditional methods can be generally categorized as:
1. Statistical methods. Study ten, a hundred, or a thousand people using your program, and discover which functions are usually requested after which other functions. A common example: you might find that after launching Word, users create a new blank document 90% of the time; therefore, have the program create a new document automatically at launch. Another common example: Apple Mail recognizes an email address or internet link in text, and automatically creates a clickable link within the mail body. There are mountains of ethnomethodological studies that try to provide relevant data for predictive use.
2. Track individual user habits. Let the application track a user’s actions and learn the user’s behavior patterns, then activate functions automatically based upon past use.
A non-real-life example — more of a wish — from JK On The Run:
After I finish doing my email, or even before I’m done if there are too many emails to do them all, I want to go to Google Reader to check all the items from my RSS feeds overnight. I can open up Firefox or just say “check the feeds” or the equivalent and the [Intuitive Interface] knows to fire up Firefox with the Google Reader page loaded. The key to the learning capabilities of the [Intuitive Interface] is that just because I use Firefox doesn’t mean you do. If it’s learned from your actions that you use Opera or Internet Explorer then that’s what it will use for you. No overt training required, the [Intuitive Interface] can learn volumes about your preferences and what you normally do just by paying attention when you do them. After just a short time of doing this the [Intuitive Interface] can be working WITH you, not just for you. It will become a very intelligent personal assistant that works the way you do when you do. It’s always watching what you do and WHEN you do it as most people’s work days are very routine when it comes to schedule.
3. Allow the user to control and register actions and preferences. Photoshop does this by recording your action history, and then letting you not only undo actions, but also “record” sets of actions for future application to other documents. The Mac OS does something similar in helping you set which applications are used to open which documents.
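The second category above amounts to little more than a frequency table: record which action follows which, and suggest the most common successor. Here is a minimal sketch of that idea — the action names are hypothetical, and a real system would of course need persistence and decay of old habits:

```python
from collections import Counter, defaultdict

class HabitTracker:
    """Learns which action usually follows which one (category 2)."""

    def __init__(self):
        # Maps each action to a Counter of the actions observed after it.
        self.transitions = defaultdict(Counter)
        self.last = None

    def record(self, action):
        """Log an observed user action."""
        if self.last is not None:
            self.transitions[self.last][action] += 1
        self.last = action

    def predict(self, action):
        """Return the most frequent follower of `action`, or None if unseen."""
        followers = self.transitions.get(action)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

# Hypothetical usage: a user who reads mail and then checks feeds.
tracker = HabitTracker()
for a in ["open_mail", "read_mail", "open_browser",
          "open_mail", "read_mail", "open_browser"]:
    tracker.record(a)
print(tracker.predict("read_mail"))  # -> open_browser
```

This is also roughly what the JK On The Run wish below boils down to: watch long enough, and “check the feeds” reliably follows “finish email.”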
What everyone yearns for is something like the first two categories — where the user does nothing, and the computer comes up with the right action “like magic”. The problem is that in real life, only category three is really useful. Why?
Consider the following two reports:
One of the features on my three-year old Acura that I’ve come to enjoy is its keyless entry and ignition feature. Walk up to the car, touch a button on the door handle to unlock it, and start the car without inserting the key. All while the key stays in my pocket. It’s a feature now found on many cars and eliminates the need to find your keys in a pocket, briefcase or purse.
It can even tell the difference between my key and my wife’s. This can have some unintended consequences. If my wife enters the car first from the passenger side with her key, all of the radio stations and other settings default to hers. (She thinks that’s great, as it reminds me to be a gentleman and open her door first.)
[from Phil Baker’s Concept to Consumer blog]
Blackberry has this nice feature where you type a word without bothering with capitalization or punctuation, for example, typing “im” for “I’m”, and it changes it on the fly. (Funny, because there’s no actual spell-check…) It’s a feature that’s convenient, although I tend to under-use it.
Anyway, a little glitch: I tried to send someone my Israeli email address the other day. It ends with @netvision.net.il. Except that my alert Blackberry insisted it was @netvision.net.I’ll. I went back to erase/change/fix it maybe six times, unsuccessfully. Not a helpful feature, in this case! Why should I be in a power struggle with my cell phone? […]
Found another one: can’t type the word “id” (as in Freudian), or the initials for identification or industrial design (ID). I just keep getting “I’d”.
When is the benefit of 95% accuracy offset by the 5% error rate (uncorrectable errors)? Another long tail question? Kind of.
[from Feature Power Struggle, posted in this blog]
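A category-3 remedy for the Blackberry glitch above would be an autocorrect that skips address-like tokens and lets the user register exceptions. A rough sketch — the correction table and matching rules are illustrative, not Blackberry’s actual behavior:

```python
import re

# Illustrative defaults, in the spirit of "im" -> "I'm".
CORRECTIONS = {"im": "I'm", "id": "I'd", "dont": "don't"}

def autocorrect(text, user_exceptions=frozenset()):
    """Apply word-level corrections, but leave address-like tokens alone
    and honor words the user has explicitly exempted (category 3)."""
    def fix(match):
        word = match.group(0)
        if word in user_exceptions:
            return word
        return CORRECTIONS.get(word.lower(), word)

    out = []
    for token in text.split(" "):
        # Email addresses and dotted domains are never "corrected".
        if "@" in token or re.search(r"\w\.\w", token):
            out.append(token)
        else:
            out.append(re.sub(r"[A-Za-z']+", fix, token))
    return " ".join(out)

print(autocorrect("im at me@netvision.net.il"))  # -> I'm at me@netvision.net.il
print(autocorrect("the id is key", {"id"}))      # -> the id is key
```

The exception set is the crucial part: it is the user’s registered preference, which is exactly what the power-struggle anecdote was missing.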
You get the idea. I’m sure you can draw examples from your own life. Unless a use-case prediction is true 100% of the time, the frustration of an incorrect prediction has to be allowed for. If the error is minor or easily corrected, then the predictive action may be worthwhile (e.g., having applications create new documents at launch — closing the unwanted document window is a minor inconvenience, and the extra wait is unnoticeable). If the error is harder to correct, or more annoying (How do you tell the car who is really driving? How do you override the Blackberry’s auto-punctuation?), the frequent convenience may not outweigh the occasional frustration.
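The tradeoff in the paragraph above can be put in expected-value terms: automation pays off only when the time saved on correct predictions exceeds the time lost undoing wrong ones. A back-of-the-envelope calculation, with all numbers made up purely for illustration:

```python
def prediction_payoff(hit_rate, seconds_saved_per_hit, seconds_lost_per_miss):
    """Expected net benefit per predicted action, in seconds."""
    return (hit_rate * seconds_saved_per_hit
            - (1 - hit_rate) * seconds_lost_per_miss)

# Easily corrected error: closing an unwanted new document costs ~1 s.
print(prediction_payoff(0.90, 2.0, 1.0))   # ~1.7 s saved per action: worth it
# Hard-to-correct error: fighting the autocorrect costs ~60 s.
print(prediction_payoff(0.95, 2.0, 60.0))  # ~-1.1 s per action: not worth it
```

Note that the second case is *more* accurate yet still loses, which is the point: the cost of correction, not the hit rate alone, decides whether a prediction feature helps.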
It’s worth pointing out that anything in categories 2 or 3 will benefit from unshared use of the device. Sharing machines/phones/computers/cars after preferences have been customized or learned for a particular individual will entail even greater frustration than if there had been no customization in the first place. Which leads us to more “Me” and less “We”.
[Disclosure: I work for Power2B, which is developing a 3D touchscreen and interactive TV interface that predicts user activity by tracking actual trajectories in real time, rather than through any of the above systems.]