Vincent Gable’s Blog

July 30, 2008

Hell Hath No Fury…

Filed under: Research, Security
― Vincent Gable on July 30, 2008

The New York Times ran an article on research into the economics of vengeance. It’s fairly interesting, but to quote the article, “Most of (the) findings confirm what researchers in different disciplines have already found”.

The meat:

people who have been victims of the same kind of crime … tend to be more vengeful, but not if they have been victims of a different crime…

Vengeful feelings are stronger in countries with low levels of income and education and a weak rule of law, and in those that recently experienced a war or are ethnically or linguistically fragmented.

…most surprising was that women turned out to be more vengeful than men. If a woman had been a victim of (a crime), she was 10 percent more likely to (seek a stricter punishment); for men the figure was 5 percent.

June 3, 2008

AppleScript is the Uncanny Valley

Filed under: Design, MacOSX, Programming, Quotes, Usability
― Vincent Gable on June 3, 2008

An interesting theory:

I think this “like English but not quite” aspect of AppleScript is the Uncanny Valley of programming languages. Because AppleScript looks like English, it is easy to fall into the trap of believing it has the flexibility of English. When that mental model fails, it’s more unsettling than when you screw up the syntax in a regular programming language, because your mental model isn’t making unwarranted assumptions.

Mark Reid

May 26, 2008

People Prefer Sure (but small) Gains; Avoidable (but possibly large) Losses

Filed under: Quotes, Research, Usability
― Vincent Gable on May 26, 2008

Bruce Schneier has a new essay, How to Sell Security. As usual, it’s well worth reading.

The most interesting tidbit, to me, is that people are biased to choose a small certain gain over an uncertain but possibly larger gain. With losses, it’s the opposite: people avoid certain losses, preferring to “play double or nothing” and risk a larger loss for the chance of sustaining no loss at all.

Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: people make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
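For concreteness, here is the Utility Theory arithmetic as a quick sketch (my own illustration in Python, not from the essay): the two alternatives in each pair have identical expected values, so Utility Theory predicts the gain/loss framing shouldn’t matter.

    # Expected value of each alternative in the experiment above.
    # Under Utility Theory both options in a pair are worth the same on
    # average, so the gain/loss framing shouldn't change anyone's choice.

    def expected_value(outcomes):
        """outcomes: iterable of (probability, payoff) pairs."""
        return sum(p * payoff for p, payoff in outcomes)

    sure_gain  = expected_value([(1.0, 500)])              # 500.0
    risky_gain = expected_value([(0.5, 1000), (0.5, 0)])   # 500.0
    sure_loss  = expected_value([(1.0, -500)])             # -500.0
    risky_loss = expected_value([(0.5, -1000), (0.5, 0)])  # -500.0

    print(sure_gain, risky_gain, sure_loss, risky_loss)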

But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.
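Prospect Theory makes that pair of heuristics precise with an S-shaped value function: concave for gains, convex and steeper for losses. Here’s a sketch; note that the functional form and the parameters (alpha = beta = 0.88, lambda = 2.25) are Tversky and Kahneman’s 1992 estimates, not something from Schneier’s essay.

    # A sketch of Prospect Theory's value function, using the parameter
    # estimates from Tversky & Kahneman (1992). These numbers are an
    # assumption; the essay itself gives no functional form.

    ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

    def value(x):
        """Subjective value of winning (x > 0) or losing (x < 0) x dollars."""
        if x >= 0:
            return x ** ALPHA            # gains: concave, so risk-averse
        return -LAMBDA * (-x) ** BETA    # losses: convex and steeper, so risk-seeking

    # Gains: the sure $500 feels worth more than a 50% shot at $1,000.
    print(value(500), 0.5 * value(1000))    # ~237.2 vs ~218.3

    # Losses: a 50% shot at losing $1,000 hurts less than a sure $500 loss.
    print(value(-500), 0.5 * value(-1000))  # ~-533.7 vs ~-491.1

The same machinery explains the framing effect below: re-describing an outcome moves the reference point, turning the same facts from a gain into a loss.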

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways, either as “200 lives saved out of 600” or as “400 lives lost out of 600,” yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Animals tend to live on the razor’s edge between starvation and reproduction, so any loss of food, whether small or large, can be equally bad: either can result in death, and the best option is to risk everything for the chance of no loss at all.

April 6, 2008

Yes

Filed under: Design, Programming, Research, Usability
― Vincent Gable on April 6, 2008

Just a curiosity, but in a yes-no binary response test, the reaction time to select “no” is longer than for “yes.”

Source

I haven’t taken the time to verify this, or see if anyone has quantified the difference in response times.

UPDATE 2009-12-18: The technical term for this is Acquiescence Response Bias — the tendency to agree with any assertion, regardless of its content.
