October 26, 2009
Threading is Wrong
I’m taking the following as an axiom: Exposing real pre-emptive threading with shared mutable data structures to application programmers is wrong. …It gets very hard to find humans who can actually reason about threads well enough to be usefully productive.
When I give talks about this stuff, I assert that threads are a recipe for deadlocks, race conditions, horrible non-reproducible bugs that take endless pain to find, and hard-to-diagnose performance problems. Nobody ever pushes back.
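To make the race condition concrete, here is a minimal sketch in Python (my illustration; the post names no particular language). Several threads increment a shared counter; because “counter += 1” is a separate read, add, and write, the scheduler can interleave threads between those steps and lose updates. Whether a given run actually loses any is scheduler-dependent, which is exactly the non-reproducibility complained about above.

```python
import threading

counter = 0  # shared mutable state

def increment(n):
    global counter
    for _ in range(n):
        counter += 1  # non-atomic: load, add, store

def main():
    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # expected 400000; runs can print less

if __name__ == "__main__":
    main()
```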
July 2, 2009
Design for Mental Imperfections
When it comes to building the physical world, we kind of understand our limitations. We build steps. … We understand our limitations. And we build around them. But for some reason, when it comes to the mental world, when we design things like healthcare and retirement and stock markets, we somehow forget the idea that we are limited. I think that if we understood our cognitive limitations in the same way that we understand our physical limitations, even though they don’t stare us in the face in the same way, we could design a better world. And that, I think, is the hope of this thing.
—Dan Ariely, concluding a very entertaining TED talk. The transcript is up, but I liked his delivery so much I watched the video.
Stairs and ladders aren’t an implication that you’re too weak to pull yourself out of a pool. Yet, amazingly, people sometimes get insulted by simplified interfaces, as if a simpler interface somehow implies they are so stupid they can’t handle complexity.
I was fortunate enough to hear Jonathan Ive talk about launching the iMac. As he was leaving a store on launch day, a furious technology reporter accosted him in the parking lot, shouting, “What have you done?” He was incensed that the iMac was so cute, approachable, and untechnical — everything that he thought a computer shouldn’t be.
Some of this behavior is explained by simple elitism. If computers are hard to use, then that keeps the idiots out, and proves what a macho man you are if you can use them.
But I suspect refusal to accept our cognitive limitations is also related to our cultural refusal to accept mental illness. Quoting Mark C. Chu-Carroll’s experience with depression,
How many people have heard about my stomach problems? A lot of people. I need to take the drugs three times a day, so people see me popping pills. … Out of the dozens of people who’ve heard about my stomach problem, and know about the drugs I take for it, how many have lectured me about how I shouldn’t take those nasty drugs? Zero. No one has ever even made a comment about how I shouldn’t be taking medications for something that’s just uncomfortable. Even knowing that some of the stuff I take for it is addictive, no one, not one single person has ever told me that I didn’t need my medication. No one would even consider it.
But depression? It’s a very different story.…
Somewhat over 1/2 of the people who hear that I take an antidepressant express disapproval in some way. Around 1/3 make snide comments about “happy pills” and lecture me about how only weak-willed nebbishes who can’t deal with reality need psychiatric medication.
I confess to being thoroughly mystified by this. Why is it OK for my stomach, or my heart, or my pancreas to be ill in a way that needs to be treated with medication, but it’s not OK for my brain? Why are illnesses that originate in this one organ so different from all others, so that so many people believe that nothing can possibly go wrong with it? That there are absolutely no problems with the brain that can possibly be treated by medication?
Why is it OK for me to take expensive, addictive drugs for a painful but non-life-threatening problem with my stomach; but totally unacceptable for me to take cheap harmless drugs for a painful but non-threatening problem with my brain?
If we can accept that our brains are fallible, like everything else, and that this isn’t somehow immoral, we can build a better world.
June 19, 2009
All’s Well That Ends Well
…the peak-end rule. When thinking about a total experience, people tend to place too much weight on the last part of the experience. In one experiment, people had to hold their hands under cold water for one minute. Then, they had to hold their hands under cold water for one minute again, then keep their hands in the water for an additional 30 seconds while the temperature was gradually raised. When asked about it afterwards, most people preferred the second option to the first, even though the second had more total discomfort. (An intrusive medical device was redesigned along these lines, resulting in a longer period of discomfort but a relatively comfortable final few seconds. People liked it a lot better.)
May 3, 2009
Confounding Circles
Consider a technique used by the legendary pickpocket Apollo Robbins…. When the researchers asked him about his devious methods—how he could steal the wallet of a man who knew he was going to have his pocket picked—they learned something surprising: Robbins said the trick worked only when he moved his free hand in an arc instead of a straight line. According to the thief, these arcs distract the eyes of his victims for a matter of milliseconds, just enough time for his other hand to pilfer their belongings.
At first, the scientists couldn’t explain this phenomenon. Why would arcs keep us from looking at the right place? But then they began to think about saccades, movements of the eye that can precede conscious decisions about where to turn one’s gaze. Saccades are among the fastest movements produced by the human body, which is why a pickpocket has to trick them: The eyes are in fact quicker than the hands. “This is an idea scientists had never contemplated before,” Macknik says. “It turns out, though, that the pickpocket was onto something.” When we see a hand moving in a straight line, we automatically look toward the end point—this is called the pursuit system. A hand moving in a semicircle, however, seems to short-circuit our saccades. The arc doesn’t tell our eyes where the hand is going, so we fixate on the hand itself—and fail to notice the other hand reaching into our pocket. “The pickpocket has found a weakness in the way we perceive motion,” Macknik says. “Show the eyes an arc and they move differently.”
April 30, 2009
Acceptable Delays
This is a collection of sources on what constitutes an acceptable delay. It’s very much a work in progress, and will be updated when I stumble into new information. I’m very interested in any insights, experience, or sources you may have.
Based on some experiments I did back at IBM, delays of 1/10th of a second are roughly when people start to notice that an editor is slow. If you can respond in less than 1/10th of a second, people don’t perceive a troublesome delay.
One second … is the required response time for hypertext navigation. Users do not keep their attention on the page if downloading exceeds 10 seconds.
In A/B tests (at Amazon.com), we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue. (e.g., a 20% drop in traffic when moving from 0.4 to 0.9 seconds load time for search results.)
—Greg Linden covering results disclosed by Google VP Marissa Mayer
If a user operates a control and nothing appears on the display for more than approximately 250 msec, she is likely to become uneasy, to try again, or to begin to wonder whether the system is failing.
—Jef Raskin, The Humane Interface (page 75)
David Eagleman’s blog post Will you perceive the event that kills you? is an engaging look at how slow human perception is, compared to mechanical response time. For example, in a car crash that takes 70ms from impact until airbags begin deflating, the occupants are not aware of the collision until 150-300 milliseconds (possibly as long as 500 milliseconds) after impact.
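As a practical footnote (my sketch, not from any of the sources above): these thresholds can be written down as constants and checked during development, for example with a decorator that warns when a handler overruns its budget. The handler name here is hypothetical.

```python
import time
from functools import wraps

# Rough thresholds from the sources above, in seconds:
PERCEIVED_INSTANT = 0.1   # ~100 ms: an editor still feels immediate
UNEASE_THRESHOLD = 0.25   # ~250 ms: users retry or suspect failure (Raskin)
NAVIGATION_LIMIT = 1.0    # ~1 s: limit for hypertext navigation
ATTENTION_LIMIT = 10.0    # ~10 s: users stop attending to the page

def latency_budget(budget_seconds):
    """Warn during development when a handler overruns its budget."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > budget_seconds:
                print(f"warning: {fn.__name__} took {elapsed * 1000:.0f} ms "
                      f"(budget {budget_seconds * 1000:.0f} ms)")
            return result
        return wrapper
    return decorator

@latency_budget(PERCEIVED_INSTANT)
def handle_keystroke():  # hypothetical editor handler
    ...
```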
April 15, 2009
Beyond Two Page Programs
And one of the things that is disturbingly true about most novices on computers is that about 2 pages of program is the maximum they can handle. They like to spread it out, use their visual field as an extension of their short term memory.
—Alan Kay, from Doing With Images Makes Symbols
A few thoughts on this phenomenon.
A denser, more concise, less “English-like” programming language would, counterintuitively, be easier for novices to use if it let them keep their project below the 2-page limit (see the sketch below).
Does this limit increase with more and bigger displays?
Do graphical programming languages change anything? It seems like they might “scale” better on a very large display. But in my (albeit limited) experience they are much less compact than textual source code. And it’s not clear to me that they support abstraction as well.
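To illustrate the density point with an invented example (mine, not Kay’s), here is the same word-counting program twice in Python: once in a deliberately verbose, “English-like” style that spreads down the page, and once in a denser style that occupies a fraction of the vertical space.

```python
from collections import Counter

# Verbose, "English-like" style: the logic spreads down the page.
def count_words_verbose(text):
    counts = {}
    words = text.split()
    for word in words:
        word = word.lower()
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    return counts

# Denser style: the same program in a two-line function.
def count_words_dense(text):
    return Counter(w.lower() for w in text.split())
```

The dense version demands more knowledge per line, but the whole program stays within one visual field.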
March 5, 2008
Zooming can be Difficult
I hadn’t realized until today that “zooming” was a nontrivial task. I’m habituated to choose “zoom in” to see more detail, but less of the big picture, and “zoom out” to see how less-defined blocks fit together. But selecting which flavor of zoom is appropriate is not as easy as it first appears.
I didn’t realize this until I watched two usability-test participants repeatedly choose the opposite kind of zoom from the one they wanted. One woman would even explain how zooming would help with what she was about to do, then say that she should perform the opposite kind of zoom from the one she had just described.
Understanding zooming involves forming and manipulating a three-dimensional model of abstract data represented on a two-dimensional screen. Sometimes there’s even the question of what is being zoomed. If you zoom in on a window containing a map, should the window itself change size (perhaps to take up the full screen), or should the contents of the window be changed?
Small wonder then that some people choose the wrong kind of zoom, especially if they are not spatially-oriented.
The good news is that a zoom error is easy to recover from. You just hit the other button. Recovery is even easier if a knob, or other reversible input device, controls zoom. You just twist in the opposite direction, without having to acquire another button first. The scroll-wheel on a mouse is such a control, but …
But be careful when using the scroll-wheel to zoom, because users may trigger a zoom when they meant to scroll. I have this problem all the time with Google Maps. After a few hours of using the scroll-wheel to scroll webpages, I look something up in the Google Maps webpage — but if I attempt to use the scroll wheel to scroll the map it shoots my viewpoint up hundreds of miles into the atmosphere. Even though recovery is easy, it’s very disorienting, and quite annoying. It’s also slow, because the data has to be loaded over an internet connection — even the fast connection at the office is still orders of magnitude slower than loading the data from local memory.
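One possible mitigation, sketched here with an invented event API (the ScrollEvent and MapView names are mine, for illustration only): make zoom an explicit opt-in via a modifier key, so a plain wheel gesture always scrolls, and zoom about the cursor so a wrong zoom reverses in place with an opposite twist.

```python
from dataclasses import dataclass

@dataclass
class ScrollEvent:       # hypothetical wheel event
    x: float
    y: float
    delta: float         # positive = wheel rolled up
    ctrl_held: bool

class MapView:           # hypothetical zoomable, scrollable view
    scroll_step = 40.0

    def zoom_about(self, x, y, factor):
        print(f"zoom x{factor:.2f} about ({x}, {y})")

    def pan(self, dx, dy):
        print(f"pan by ({dx}, {dy})")

def on_scroll(event: ScrollEvent, view: MapView):
    if event.ctrl_held:
        # Zoom only on an explicitly modified gesture, about the cursor,
        # so reversing the wheel undoes a mistake in place.
        factor = 1.25 if event.delta > 0 else 1 / 1.25
        view.zoom_about(event.x, event.y, factor)
    else:
        # A plain wheel gesture always scrolls, never zooms.
        view.pan(0.0, -event.delta * view.scroll_step)
```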
It’s interesting that Mac OS X’s Exposé feature, while arguably a zooming-interface, is not described as one. To quote the help, “Your computer includes Exposé, a tool that temporarily rearranges windows to help find things.” No spatial reasoning is needed to use it.