Vincent Gable’s Blog

June 22, 2009

(Hyper)Text is King of Substance

Filed under: Accessibility, Design, Quotes, Usability
― Vincent Gable on June 22, 2009

…I’d rather have the text of Clay’s speech than the video. For things that matter, written words are unambiguously better than speech. To start with, anything that matters isn’t just written, it’s usually rewritten repeatedly (and more important, condensed). Plus, it has hyperlinks. Plus, it’s smaller and cheaper to ship around. Plus, it’s searchable. Plus, it works on more devices. (I acknowledge that only the first of these is fundamental; but that alone would be enough).

Tim Bray

Videos, speech, etc. will always carry more emotional content. But for consuming ideas, text offers the highest bandwidth and most precision. Unfortunately, writing well takes time, and can hinder conversation.

Conceptually, I believe illustrative pictures and infographics are valid elements of modern text, like links, or typography.

June 1, 2009

Pass Phrases, Not Passwords

Filed under: Accessibility, Research, Security, Usability
― Vincent Gable on June 1, 2009

Thomas Baekdal makes a convincing argument for using pass-phrases, not passwords (via). It’s excellent advice, and I know I’m not alone in having advocated it for years.

My keyboard has 26 letters, 10 numbers, and 12 symbol keys, like ~. All but the spacebar make a different symbol when I hold down shift, giving me 93 characters to use in my passwords. But the number of words that can make up a pass-phrase is easily in the 100,000s. Estimating exactly how many is a bit tricky, but I will stick with 250,000 here (I think it’s an undercount; more on this later).

We Know How To Talk

The human brain has an amazing aptitude for language. But “passwords” aren’t really words, so they don’t tap into this ability. In fact, we often use words to try and remember the nonsense-characters of a password.

Wouldn’t it make more sense to just use the words directly, if we can remember them more easily?

Hard For Computers, Not Hard For Us

People feel that if security system A is harder for them to use than system B, then A must be harder for an attacker to bypass. But the facts don’t always match this intuition.

What authentication code do you think is harder for a bad guy to hack: the 7-character strong password “1Ea.$]/”, or the mnemonic for the first 3 characters, “One Elvis Amazon”? Certainly “1Ea.$]/” is harder for a person to remember. It feels like it should be harder to break. But a computer, not a person, is going to be doing the guessing, and all it cares about is how big the search space is. There are 93⁷ possible 7-character passwords. Let’s say there are 250,000 possible English words (more on that figure later). Then there are 250,000³ 3-word combinations — meaning an attacker would have to do 260 times more work to guess “One Elvis Amazon” than to guess “1Ea.$]/”.
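
To make the arithmetic concrete, here is the back-of-the-envelope calculation as a short Python script; the 93-character and 250,000-word figures are the ones above, everything else is just illustration:

import math

chars = 93        # characters available for a password (from above)
words = 250000    # rough count of usable English words (from above)

password_space = chars ** 7   # every possible 7-character password
phrase_space = words ** 3     # every possible 3-word pass-phrase

print("7-character passwords: %e (%.1f bits)" % (password_space, math.log(password_space, 2)))
print("3-word pass-phrases:   %e (%.1f bits)" % (phrase_space, math.log(phrase_space, 2)))
print("the pass-phrase space is about %.0f times bigger" % (phrase_space / float(password_space)))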

With pass phrases, easier for the good guys is also harder for the bad guys.

Exactly How Much Harder

The “250,000 word” figure is a bunch of hand-waving, but I believe it’s an undercount. I picked it because I wanted a round number to crunch; it’s what Thomas Baekdal picked; and it’s about the size of the Mac OS X words file:

$ wc -l /usr/share/dict/words 
  234936

But liberally descriptive linguists say that the 1,000,000th word will be added to the English language on June 10th, 2009. The more conservative Webster’s Third New International Dictionary, Unabridged lists 475,000 English words. Obviously neologisms, slang, and archaic terms are fine for pass-phrases. People like discovering quirky words. I see far more people embracing the login “kilderkin of lolcats” than rejecting it.

Different conjugations (can) count as different words in pass-phrases. There’s only one entry in a dictionary for swim, but swim, swimming, swam, etc. make for distinct pass-phrases (eg. “Elvis swims fast”, “Elvis swam fast”, etc. Neither phrase shows up in a Google search, by the way.) So the real number of words should be a few-fold larger than a dictionary indicates.

But not all words are equally likely to be chosen — just as some characters are more popular in passwords. My earlier figure of “250,000³ 3-word combinations” was based on the naive assumption that each of the 3 words is independent. But people do not pick things at random. And a phrase is by definition not completely random — it must have some structure. I’m unaware of research into exactly how predictable people are when making up pass-phrases.

But given how terrible we are at picking good passwords, and how good we are at remembering non-nonsense-words, I am optimistic that we can remember pass-phrases that are orders of magnitude harder to guess than the “good” passwords we can’t remember today.

Fewer Ways To Fail

We’ve all locked ourselves out of an account because of typos or caps lock. But pass-phrases can be more forgiving.

Pass-phrases are case-insensitive. There’s no need to lock someone out over “ELvis…”.

Common typos can be auto-corrected, much as Google automatically suggests words. Consider the authentication attempt “Elvis Swimmms fast”. The system could recognize that “Swimmms” isn’t a word, and try the most likely correction, “Elvis Swims fast” — if it matches, then there’s no reason to ask the user if it’s what they really meant. (Note that only one pass-phrase is checked per login attempt.) I don’t have hard data here, but given how successful Google is at interpreting typos, I’d expect such a system to work very well.
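
Here is a minimal sketch of that kind of forgiving check, assuming the server stores a salted hash of the normalized phrase; the names are mine, and a real system would use a slow password-hashing function rather than bare SHA-256:

import difflib, hashlib

WORDS = set(open("/usr/share/dict/words").read().lower().split())

def phrase_hash(phrase, salt):
    # Case-insensitive: normalize before hashing, so "ELvis..." still matches.
    return hashlib.sha256(salt + phrase.lower().encode("utf-8")).hexdigest()

def check_passphrase(attempt, salt, stored_hash):
    if phrase_hash(attempt, salt) == stored_hash:
        return True
    # One auto-correction per login attempt: replace the first non-dictionary
    # word with its closest dictionary match, and test that single phrase.
    words = attempt.lower().split()
    for i, word in enumerate(words):
        if word not in WORDS:
            # (Scanning ~235,000 words this way is slow; it's only an illustration.)
            close = difflib.get_close_matches(word, WORDS, n=1)
            if close:
                corrected = " ".join(words[:i] + close + words[i + 1:])
                return phrase_hash(corrected, salt) == stored_hash
            break
    return False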

Pass-phrases might be more difficult on phones, and on similarly awkward-to-type devices. Writing more letters means more work. Predictive text can only do so much. Repeatedly typing 3 letters and accepting a suggestion is clearly more work than just tapping out 6 characters. Additionally, there are security concerns with a predictive text system remembering your pass-phrase, or even a small part of it.

But for computers, pass phrases look like a clear usability win.

Easily Secure Conclusion

(In case you were wondering, that was a unique phrase when I wrote this.) Using pass-phrases over passwords (which are really pass-strings-of-nonsense-symbols-that-nobody-can-remember) makes a system significantly harder to crack. Pass-phrases are easier for humans to remember, and a system that uses them can be very forgiving. But as always, the devil is in the details. It’s terrifying to be an early adopter of a new security practice, even if it seems sound.

April 30, 2009

Acceptable Delays

This is a collection of sources on what constitutes an acceptable delay. It’s very much a work in progress, and will be updated when I stumble into new information. I’m very interested in any insights, experience, or sources you may have.

Based on some experiments I did back at IBM, delays of 1/10th of a second are roughly when people start to notice that an editor is slow. If you can respond in less than 1/10th of a second, people don’t perceive a troublesome delay.

Mark Chu-Carroll

One second … is the required response time for hypertext navigation. Users do not keep their attention on the page if downloading exceeds 10 seconds.

Jakob Nielsen (in 1997?)

In A/B tests (at Amazon.com), we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue. (eg 20% drop in traffic when moving from 0.4 to 0.9 second load time for search results).

Greg Linden covering results disclosed by Google VP Marissa Mayer

If a user operates a control and nothing appears on the display for more than approximately 250 msec, she is likely to become uneasy, to try again, or to begin to wonder whether the system is failing.

— Jeff Raskin, The Humane Interface (page 75)

David Eagleman’s blog post Will you perceive the event that kills you? is an engaging look at how slow human perception is, compared to mechanical response time. For example, in a car crash that takes 70ms from impact until airbags begin deflating, the occupants are not aware of the collision until 150-300 milliseconds (possibly as long as 500 milliseconds) after impact.
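
To make these thresholds easier to apply, here is a small illustrative helper; the numbers come from the sources quoted above, but the function and its names are my own:

import time

# Rough perceptual thresholds, in seconds, from the sources above.
FEELS_INSTANT = 0.1    # below this, no troublesome delay is perceived (Chu-Carroll)
UNEASY = 0.25          # users start to wonder if the system is failing (Raskin)
NAVIGATION = 1.0       # required response time for hypertext navigation (Nielsen)
ATTENTION_LOST = 10.0  # users stop paying attention to the page (Nielsen)

def run_and_grade(operation):
    # Run operation() and report which perceptual budget the delay fell into.
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    if elapsed <= FEELS_INSTANT:
        grade = "imperceptible"
    elif elapsed <= UNEASY:
        grade = "noticeable, but tolerable"
    elif elapsed <= NAVIGATION:
        grade = "long enough to make users uneasy"
    elif elapsed <= ATTENTION_LOST:
        grade = "too slow for hypertext navigation"
    else:
        grade = "users will have stopped paying attention"
    print("%.3f seconds: %s" % (elapsed, grade))
    return result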

April 27, 2009

A Childish Thing

Filed under: Accessibility
― Vincent Gable on April 27, 2009

I couldn’t resist doing this after reading this snooty fashion editorial in the New York Times,

But to other writers and editors, the Kindle is the ultimate bad idea whose time has come. Anne Fadiman, the author, was relieved to learn that her essay collection, “Ex Libris,” was not available on Kindle. “It would really be ironic if it were,” she said of the book, which evokes her abiding passion for books as objects.

“There’s a little box on Amazon that reads ‘Tell the publisher I’d like to read this book on Kindle,’ ” she said. “I hope no one tells the publisher.”

Go on, make the world a more ironic and accessible place — tell them, “I’d like to read this book on Kindle.”

Oh, and as to the rest of the article, it turns out Kindle owners are not more literarily self-important; and the Kindle will give us more data to judge people with, as it becomes more integrated with our online presence. Imagine, for example, being able to tell what books someone actually read, and how many times, on their Facebook profile.

April 15, 2009

Beyond Two Page Programs

And one of the things that is disturbingly true about most novices on computers is that about 2 pages of program is the maximum they can handle. They like to spread it out, use their visual field as an extension of their short term memory

–Alan Kay, from Doing With Images Makes Symbols

A few thoughts on this phenomenon.

A denser, more concise, less “English-like” programming language would counter-intuitively be easier for novices to use, if it let them keep their project below the 2-page limit.

Does this limit increase with more and bigger displays?

Do graphical programming languages change anything? It seems like they might “scale” better on a very large display. But in my (albeit limited) experience they are much less compact than textual source code. And it’s not clear to me they support abstraction as well.

March 25, 2009

Lemur CATTA and the Author’s Burden

Filed under: Accessibility
― Vincent Gable on March 25, 2009

Mike Lee had an interesting idea for improving comments on blogs. Before someone can post a comment, they have to pass a reading comprehension test, to show they read the article they are commenting on. He called his implementation Lemur CATTA.

It’s a good idea, especially for certain communities. But the implementation rubbed me the wrong way from day one. My problem was that the questions were automatically generated.

I feel like it’s the author’s burden to make their work as accessible to readers as they can. If they are placing a barrier in front of commenters, they should do everything they can to make sure the barrier is effective at blocking the bad, but easily permeable to the good. I don’t feel auto-generated questions are compatible with that responsibility.

There’s a callousness to auto-generated questions. The author is essentially saying, “Yeah, I took many hours to write this post, and I’m going to inconvenience every reader who wants to reply, but damned if I take a few minutes to write these questions myself”. And that just rubs me the wrong way.
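
For concreteness, here is a tiny sketch of the hand-written alternative I have in mind; the questions, answers, and names are all made up for illustration:

# Written by the post's author when the article is published,
# not generated automatically by the software.
QUIZ = {
    "Who came up with the Lemur CATTA idea?": "mike lee",
    "What must commenters pass before posting?": "a reading comprehension test",
}

def may_comment(submitted_answers):
    # Accept the comment only if every answer matches, case-insensitively.
    return all(
        submitted_answers.get(question, "").strip().lower() == answer
        for question, answer in QUIZ.items()
    )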

To be fair, writing reading comprehension questions is harder than it seems — at least when I’ve tried it. Maybe a computer is more effective at generating quizzes than a prose writer. (But I doubt it’s more effective than a prose writer with practice.) And auto-generation has all kinds of benefits. It can be applied to comments, and real-time discussions, for example. But sometimes the human touch is more comforting. It gives an assurance that the author cares about what you have to say, and worked through any problems you might be having with the comment system.

More info on Lemur CATTA at lemurcatta.org.

March 13, 2009

Reasons to WANT to Design For Accessibility

Accessibility is too often seen as a chore. But there are many reasons to be excited about making things usable for everyone.

It Just Feels Good

I know it’s cliché, but helping people does feel good. Making your website work with screen-readers is not the same as volunteering your time to read for the blind and dyslexic. But it still helps…

More cynically, accessibility means your work reaches more people. Even if it’s just an extra 0.6%, it still feels good to know you are having a bigger impact.

We Are All Impaired

As Keith Lang points out, “we are all impaired to some amount (or sometimes)”. Everyone is “deaf” in a library, because they can’t use speakers there. Similarly, if you try showing a video on your phone to a dozen people, many of them will be “blind”, because they can’t see the tiny screen.

Consequently, accessibility means designing for everyone, not just a disabled super-minority.

Accessible Design is Better Design

Usability improves when accessibility is improved. For example, a bus announcing stops with speakers and signs means you can keep listening to your iPod, or looking at your book, and still catch your stop. It makes buses easier to ride.

Maximally accessible design engages multiple senses. Done well that means a more powerful experience.

Early Warning

The flip-side of accessibility improving usability is that bad design is hard to make accessible. How easy it is to make something comply with accessibility guidelines is a test of the soundness of the design.

I don’t care about accessibility. Because when Web design is practiced as a craft, and not a consolation, accessibility comes for free.

Jeffrey Veen

Accessibility compliance should be like running a spellcheck — something quick and easy that catches mistakes. When it’s not, it’s a warning that something is fundamentally wrong. That’s never fun, but the sooner a mistake is caught, the cheaper it is to correct it.

Challenge the Establishment

Accessibility might be the best “excuse” you’ll ever get to do fundamental UX research.

I think for people who are interested in user interface, disability research is another area that gets you out of the Mcluhan fishbowl(??) and into a context where you have to go back to first principles and re-examine things. So I think the future there is very bright but we need more people working on it.

–Alan Kay, Doing With Images Makes Symbols

If anybody knows what he meant by what I heard as “Mcluhan fishbowl” please let me know!

Technology is Cool

Accessible design makes content easier for machines and programmers to deal with. This makes the future possible. For example, embedding a transcript in a video means that the video’s contents can be indexed by Google, or automatically translated, etc.

But the really exciting stuff hasn’t happened yet.

Accessibility research is going to be a huge part of what advances the state of the art in Augmented Reality and cybernetics/transhumanism. The common theme is mapping data from one sense to another, or into a form that computers (eg. screen readers today) can process.

Why do You Like it?

I’d love to know what makes you passionate about accessibility. For me it’s that it feels right, and as a programmer, I am very excited about what it enables.

February 24, 2009

More Flash Hate and Graceful Degradation

Filed under: Accessibility, Announcement
― Vincent Gable on February 24, 2009

Adobe’s website for Air (their cross-platform ‘web for the desktop’ technology) requires Flash 10. If you have an earlier version of Flash, like 75% of the visitors to my website, then you see a big blank box.

This is a terrible mistake for the company that makes Flash. In no way does it inspire confidence that Flash is accessible.

The real irony is that Adobe’s own website was the first website I’ve seen that was incompatible with the version of Flash I was using. Other websites that leveraged Flash 10 degraded gracefully, so that I could still use them with Flash 9.

When I finally upgraded, I couldn’t see why Adobe’s website required Flash 10. I wasn’t wowed. All I saw was some fancy transitions between slow-loading Flash videos.

Just by being open, that one website used 125% of my CPU, even when I wasn’t interacting with it. No joke, 125% is what OS X reported. I am using a dual-core machine, so 125% means that 100% of one core and 25% of the other were used — just to render a webpage I wasn’t even looking at.

Is Adobe fine with alienating 75% of the internet?

Why can’t they make their own website laptop friendly?

Why should I trust their new Air platform that “lets developers use proven web technologies” if its own website won’t just work for me?

February 16, 2009

Simplifying by Adding Features

Filed under: Accessibility, Design, Programming, Quotes, Usability
― Vincent Gable on February 16, 2009

One of the oldest canards in the interface business is the one that says “Maximizing functionality and maintaining simplicity work against each other in the interface” (Microsoft 1995, p.8). What is true is that adding ad hoc features works against simplicity. But that’s just bad design. It is often, but not always, possible to increase functionality without increasing difficulty at a greater rate. Often, added functionality can be had without any added interface complexity; note the difference between interface complexity and task complexity. If the added functionality unifies what had previously been disparate features, the interface can get simpler.

— Jeff Raskin, The Humane Interface (page 201)

Examples of this are the exception, not the rule. Usually, more features means more complexity.

The best example I can think of is Coda, an award-winning web development IDE.

text editor + file transfer + svn + css + terminal + books + more = whoah.

The story of Coda.

So, we code web sites by hand. And one day, it hit us: our web workflow was wonky. We’d have our text editor open, with Transmit open to save files to the server. We’d be previewing in Safari, adjusting SQL in a Terminal, using a CSS editor and reading references on the web. “This could be easier,” we declared. “And much cooler.”

(To really get a sense of Coda you should check out the website, or try it for free).

Even though Coda’s interface is more complicated because it does more than just edit code, it simplifies the task of web design, by unifying tasks that used to be done in different applications with different interfaces.

What other examples of things becoming simpler through added functionality can you think of? Please share in the comments below.

February 9, 2009

Color Blindness

Filed under: Accessibility, Design, MacOSX, Programming, Usability
― Vincent Gable on February 9, 2009

Roughly 10% of men are color blind to some degree. You need to be sure your interfaces are accessible to them. (Unless you are designing exclusively for women I suppose, since women are about 20x less likely to be color blind.)

Sim Daltonism is the best way I’ve seen to test an interface on Mac OS X.

Here is a web-based colorblindness simulator. Here is another. Personally I prefer a native program though. It’s faster and more versatile.

If you are curious, you can test yourself for colorblindness. I have no idea how accurate that test is, but since different displays and operating systems usually show colors differently I’d be a little skeptical.

ADDITION 2009-10-11: WeAreColorBlind.com is a website dedicated to design patterns for the colorblind.

