Vincent Gable’s Blog

November 3, 2009

Magnetoception Will Be Our First Superhuman Sense

Filed under: Uncategorized
― Vincent Gable on November 3, 2009

Magnetoception, the ability to sense magnetic fields like a compass, is my prediction for the first widely implemented super-sense, because it’s so simple.

I’m no biologist, but it certainly seems that only a little wetware is necessary to implement magnetoception, since even bacteria have it. On the mechanical front, tiny magnetometers have been built into millions of devices already. I have no idea what the state of the art is, but the first 3-axis digital compass chip I found on Google measures 4x4x1.3mm. They’re only getting smaller and more efficient. We already have the technology to build it into belts and clothing.
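
A minimal sketch of just how little computation the sense itself needs: with the device held level, a heading is one atan2 away from the raw magnetometer readings. (The function and sample values below are hypothetical, and a real implementation also needs tilt compensation and calibration.)

```swift
import Foundation

// Sketch: deriving a compass heading from raw 3-axis magnetometer readings.
// With the device level, the heading is just the angle of the horizontal
// field components. Axis conventions and calibration are glossed over.
func heading(x: Double, y: Double) -> Double {
    let degrees = atan2(y, x) * 180.0 / .pi          // angle in the x/y plane
    return (degrees + 360.0).truncatingRemainder(dividingBy: 360.0) // 0..<360
}

// Hypothetical reading with the field mostly along +y:
print(heading(x: 5.0, y: 40.0))  // ≈ 82.9 degrees
```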

But I hope I’m wrong. Certainly, the future promises more than better compasses.

April 21, 2009

A Scalpel Not a Swiss Army Knife

Filed under: Design,iPhone,Programming,Quotes,Usability
― Vincent Gable on April 21, 2009

Steven Frank, summarizing feedback on the future direction of computer interfaces,

The other common theme was a desire to see applications become less general purpose and more specific. A good example was finding out train or bus schedules. One way to do this is to start up your all-purpose web browser, and visit a transit web site that offers a downloadable PDF of the bus schedule pamphlet. Another way is to use an iPhone application that has been built-to-task to interface with a particular city’s transit system. It’s no contest which is the better experience.

…In 2009, it’s still a chore to find out from the internet what time the grocery store down the street closes — we’ve got some work to do.

I would like to see a nice pithy term replace “very specific task-driven apps”. Perhaps “Specialty Applications” or “Focused Programs”. But I’m not enamored with either. Whatever the term, it should emphasize excelling at something, not being limited. What are your thoughts for a name?

March 13, 2009

Reasons to WANT to Design For Accessibility

Accessibility is too often seen as a chore. But there are many reasons to be excited about making things usable for everyone.

It Just Feels Good

I know it’s cliché, but helping people does feel good. Making your website work with screen-readers is not the same as volunteering your time to read for the blind and dyslexic. But it still helps…

More cynically, accessibility means your work reaches more people. Even if it’s just an extra 0.6%, it still feels good to know you are having a bigger impact.

We Are All Impaired

As Keith Lang points out, “we are all impaired to some amount (or sometimes)”. Everyone is “deaf” in a library, because they can’t use speakers there. Similarly, if you try showing a video on your phone to a dozen people, many of them will be “blind”, because they can’t see the tiny screen.

Consequently, accessibility means designing for everyone, not just a disabled super-minority.

Accessible Design is Better Design

Usability improves when accessibility is improved. For example, a bus announcing stops with speakers and signs means you can keep listening to your iPod, or looking at your book, and still catch your stop. It makes buses easier to ride.

Maximally accessible design engages multiple senses. Done well, that means a more powerful experience.

Early Warning

The flip-side of accessibility improving usability is that bad design is hard to make accessible. How easy it is to make something comply with accessibility guidelines is a test of the soundness of the design.

I don’t care about accessibility. Because when Web design is practiced as a craft, and not a consolation, accessibility comes for free.

Jeffrey Veen

Accessibility compliance should be like running a spellcheck — something quick and easy that catches mistakes. When it’s not, it’s a warning that something is fundamentally wrong. That’s never fun, but the sooner a mistake is caught, the cheaper it is to correct it.

Challenge the Establishment

Accessibility might be the best “excuse” you’ll ever get to do fundamental UX research.

I think for people who are interested in user interface disability research is another area that gets you out of the Mcluhan fishbowl(??) and into a context where you have to go back to first principles and re-examine things. So I think the future there is very bright but we need more people working on it.

–Alan Kay, Doing With Images Makes Symbols

If anybody knows what he meant by what I heard as “Mcluhan fishbowl” please let me know!

Technology is Cool

Accessible design makes content easier for machines and programmers to deal with. This makes the future possible. For example, embedding a transcript in a video means that the video’s contents can be indexed by google, or automatically translated, etc.

But the really exciting stuff hasn’t happened yet.

Accessibility research is going to be a huge part of what advances the state of the art in Augmented Reality and cybernetics/transhumanism. The common theme is mapping data from one sense to another, or into a form that computers (e.g. screen readers today) can process.

Why do You Like it?

I’d love to know what makes you passionate about accessibility. For me it’s that it feels right, and as a programmer, I am very excited about what it enables.

February 19, 2009

“Enhanced” Sports

Filed under: Research
― Vincent Gable on February 19, 2009


Oscar Pistorius, “The fastest man on no legs”, uses carbon-fiber prosthetic feet to run … apparently more efficiently than an able-bodied sprinter. And if he isn’t more efficient today, it’s a sure bet that technology will surpass mere flesh in the near future (at least in sprinting).

The cultural, ethical, and even technological, issues surrounding cyborg/transhuman athletes are fascinating.

The Genie is Out of the Bottle

Let’s be blunt, technology plays a role in every sport today, and there is no going back.

Technology goes into equipment as basic as shoes — making them lighter, springier, and more adhesive than anything humans have worn before.

The impact of better equipment was popularly recognized by at least the 1920s (if you have an earlier source please share),

Much of Improvement in Baseball Is Attributed to Evolution and Steady Progress of Mechanics and Invention

WHEN Babe Ruth hits three home runs in one game or the home team cracks out a barrage of base hits to score seven or eight times in one inning, it does not necessarily mean that long-distance hitting in modern baseball comes from superiority of today’s players over those of years past. The truth is that much of the improvement in the game itself and in the proficiency of its players has come from evolution and progress in science and invention.

Popular Mechanics, May, 1924

Then there’s the elephant in the room: the athlete’s body, and the “stuff” that goes into it.

The prisoner’s dilemma essentially forces athletes to dope — because the only way to be sure your opponent does not have an advantage over you is to take the same advantage yourself. (This is the best overview of the doping problem, and its solution, that I have seen.)

But it’s not just drugs and steroids. There’s also nutrition, and sports medicine. Where exactly is the line between a supplement and a drug? More chemical sophistication goes into today’s vitamins than went into the drugs of the past.

Modern training regimens and equipment seem to have more to do with the science of conditioning than the love of a sport. It’s interesting that someone who just played all day would be at a disadvantage compared to someone who used targeted exercise machines.

Genetic engineering might be the most interesting future trend to watch. Obviously genetics are a huge part of determining physical ability.

What do We Want?

We love to watch superhumans compete. Professional athletes are supermen, since they perform significantly above average human ability.

But we also want a “fair” and “honorable” fight. I honestly don’t know exactly what it all means. It’s OK to have an unplanned genetic advantage. Drugs are bad, even if everyone has access to them. We love the underdogs the most, yet celebrate the winners who have the most funding going into their training.

What’s Sportsmanlike

It’s not whether you win or lose, it’s how you place the blame.

–Oscar Wilde

The problem with giving disabled athletes accommodations, like carbon-fiber feet, is that they only work until they start winning. Then accommodations become an unfair advantage. It doesn’t matter if they are unfair in reality, because they look unfair.

But there’s a quality of life problem with essentially saying, “you cripples can only play with the other cripples”.

Accommodations in the context of sportsmanship are a sticky issue, and I don’t pretend to have the answers. But I’m not necessarily against “play until you win”, as the lesser of many evils. Sometimes playing is more important than winning.

One analogue is gender differences. There is good reason behind having separate men’s, women’s, and weight categories in sports. But in recreational play, mixed-gender teams are often the norm (Ultimate seems to work very well with mixed-gender teams).

But there’s a good case to be made for letting “enabled” athletes compete separately, but to their fullest — essentially making the Paralympics the Cyberlympics.

Conclusion

Maybe these pretty women will distract you from realizing I don’t have any answers (via Sensory Metrics).


February 9, 2009

Resolution Independent Screenshots

Filed under: Announcement,MacOSX,Programming
― Vincent Gable on February 9, 2009

Leopard includes technology that generates (mostly) resolution independent screenshots. That means when you enlarge the pictures, they won’t get pixelated, and more importantly, they will stay sharp when printed.

I don’t know if you’ve ever seen a printout of text mixed with a screenshot of text, but it looks like ass. That’s because even a very cheap printer is much higher resolution than your screen. It prints text very sharply. But when it prints the screenshot, it reproduces the low resolution display in high fidelity — which actually makes it look worse. Plus, computers use tricks (e.g. subpixel antialiasing) to make text look sharper on LCD screens — but those tricks can backfire on other media. A screenshot grabs exactly the pixels shown on the screen. And those pixels are optimized to be shown on a screen, not paper.

Example

Here’s an example screenshot (PDF).

If you open it, and zoom in, you will see that the text stays sharp, while some (but not all) of the interface gets pixelated.


How it Was Made

When Automator.app saves a workflow, it puts a (mostly) resolution-independent screenshot of the workflow’s UI inside it. The screenshot is at SomeWorkflow.workflow/Contents/QuickLook/Preview.pdf. (In Finder, right-click a .workflow file, and choose “Show Package Contents” to look inside it).
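
If you want to grab that embedded preview from code, it’s just a path inside the bundle. A minimal sketch (the workflow path here is made up):

```swift
import Foundation

// Sketch: locating the (mostly) resolution-independent preview PDF that
// Automator embeds in a saved workflow. The workflow path is hypothetical.
let workflow = URL(fileURLWithPath: "/Users/me/SomeWorkflow.workflow")
let preview = workflow
    .appendingPathComponent("Contents")
    .appendingPathComponent("QuickLook")
    .appendingPathComponent("Preview.pdf")

if FileManager.default.fileExists(atPath: preview.path) {
    print("Embedded preview: \(preview.path)")
} else {
    print("No QuickLook preview inside this workflow bundle")
}
```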

If you print a workflow to a PDF file, it has the same limited resolution-independence. So I suspect Automator.app generates this PDF in much the same way files are printed. I have not investigated why the gray border is vectorized as well as the text. If anyone has an insight there, I’d love to hear it.

In the future, I expect text, and most UI elements, to be represented as vectors at every level of the OS. Screenshots will capture those vector-elements, just as they capture pixel-elements (pixels) today.

Now Recognizing President Barrack Abeam

Filed under: Design,Programming,Usability
― Vincent Gable on February 9, 2009

President “Barack Obama” is not recognized by my Mac’s spellchecker. Firefox, Microsoft Word[1], Mac OS X — each of them has a built-in spellchecker, and none of them knows how to say our president’s name. Spellchecker dictionaries need to be updated more frequently — to keep up with the emails we write.

Things have improved since 1995, but there’s still a long way to go.

There’s more to say about how to fix things, but someone has already said it. The future looks bright,

(Microsoft) now scans through trillions of words, including anonymized text from Hotmail messages, in the hunt for dictionary candidates. On top of this, they monitor words that people manually instruct Word to recognize. “It’s becoming rarer and rarer that anything that comes to us ad hoc isn’t already on our list” from Hotmail or user data, Calcagno says. According to a July 14, 2006, bug report, for example, the Natural Language Group harvested the following words that had appeared more than 10 times in Hotmail user dictionaries: Netflix, Radiohead, Lipitor, glucosamine, waitressing, taekwondo, and all-nighter.

I think the next step in spellchecking is to follow Mac OS X’s lead, and adopt a system-wide spellchecker. When there’s only one instance of a spellchecker running (not a separate one for every program that might work with text) we can make it much smarter, without requiring a supercomputer.
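
For the curious, here is a rough sketch of what talking to that shared spellchecker looks like from a Cocoa app today; the snippet is illustrative, not a full program:

```swift
import AppKit

// Sketch: one shared, system-wide spellchecker. Teaching it a word here
// benefits every app that uses it, which is the point of the design.
let checker = NSSpellChecker.shared

let text = "President Barack Obama"
let flagged = checker.checkSpelling(of: text, startingAt: 0)
if flagged.location != NSNotFound {
    let word = (text as NSString).substring(with: flagged)
    print("Not in the dictionary: \(word)")
    checker.learnWord(word)  // adds it to the user's dictionary, system-wide
}
```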


[1] Microsoft added Barack and Obama to Office’s dictionary back in April 2007, but unfortunately, that change hasn’t yet made it to the Mac Ghetto, ahem, “Mac BU”. Or at least I haven’t seen it in Word yet.

December 22, 2008

How To Multi

Avoid distributed computing unless your code is going to be run by a single client with a lot of available hardware. Being able to snarf up CPU cycles from idle hardware sitting around in the user’s house sounds cool but just doesn’t pay off most of the time.

Avoid GPGPU on the Mac until Snow Leopard ships unless you have a really good application for it. OpenCL will make GPGPU a lot more practical and flexible, so trying to shoehorn your computationally expensive code into GLSL or CoreImage today just doesn’t seem worth it.

Using multiple processes is a good idea if the subprograms are already written. … If you’re writing your code from scratch, I don’t recommend it unless you have another good reason to write subprocesses, as it’s difficult and the reward just isn’t there.

For multithreading, concentrate on message passing and operations. Multithreading is never easy, but these help greatly to make it simpler and less error prone.

Good OO design will also help a lot here. It’s vastly easier to multithread an app which has already been decomposed into simple objects with well-defined interfaces and loose coupling between them.

Mike Ash (emphasis mine, line-breaks added). The article has more detail and is very much worth reading.

One point that this advice really drives home for me is that you need to focus on making good code first, and defer micro-optimizations. If taking the time to clean up some code makes it easier to parallelize, then you are optimizing your code by refactoring it, even if at a micro-level you might be making some of it slower by, say, not caching something that takes O(1) time to compute.
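
To make the “message passing and operations” advice a little more concrete, here is a minimal sketch (the work items are placeholders, not Mike’s code). Each operation owns its input and produces its own result, so nothing is shared and nothing needs a lock.

```swift
import Foundation

// Sketch of the operations approach: independent pieces of work become
// discrete operations on a queue, with no shared mutable state between them.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4   // let the queue spread work across cores

for item in ["a", "b", "c", "d"] {      // placeholder work items
    queue.addOperation {
        // Each operation owns its input and produces its own result,
        // so there is nothing to lock.
        let result = item.uppercased()  // stand-in for real work
        print("Finished \(result)")
    }
}
queue.waitUntilAllOperationsAreFinished()
```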

Apple does not sell a Mac that’s not multi-core, and even the iPhone has a CPU and a GPU. There’s no question that optimization means parallelization. And all signs point to computers getting more parallel in the future. Any optimization that hurts parallelization is probably a mistake.

December 18, 2008

Fast Enough or Not Enough Fast?

Filed under: Quotes,Usability
― Vincent Gable on December 18, 2008

…people are now willing to make trade-offs against performance. For the entire history of the PC industry, computers have been too slow, so trade-offs were made in favor of faster CPUs: higher prices and heavier laptops. But today, for many common tasks, the type of CPU you get when you build a $400 lightweight laptop is fast enough. That’s (a) breakthrough.

John Gruber

Cynically, I also wonder if this is because “more cores” isn’t as compelling as “faster”. As Hank Williams says,

The problem of multi-core computing is really very simple. As most of us have experienced, every problem *can’t* be solved better or faster with more people. Some problems can be solved faster by adding a few people, but most problems cannot. In truth, most problems can best, or only be solved by one person at a time. And so it is with computing. The vast majority of problems can only be solved by one logic thread at a time. The reason is obvious. For most process-oriented work, step B is based on the results of step A. And step C is based on the results of step B, and so on.
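
Hank’s observation is essentially Amdahl’s law. A back-of-envelope sketch (the fractions and core counts are made up) shows how quickly extra cores stop paying off once part of the job is sequential:

```swift
import Foundation

// Amdahl's law: if fraction p of a task can run in parallel on n cores,
// the best possible speedup is 1 / ((1 - p) + p / n).
func speedup(parallelFraction p: Double, cores n: Double) -> Double {
    1.0 / ((1.0 - p) + p / n)
}

// A task that is 50% sequential never gets past 2x, no matter the core count.
print(speedup(parallelFraction: 0.5, cores: 2))     // ≈ 1.33
print(speedup(parallelFraction: 0.5, cores: 8))     // ≈ 1.78
print(speedup(parallelFraction: 0.5, cores: 1000))  // ≈ 2.0
```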

November 24, 2008

I For One Welcome Our Vector Overlords

Filed under: Design,Quotes
― Vincent Gable on November 24, 2008

The pixel will never go away entirely, but its finite universe of digital watches and winking highway signs is contracting fast. It’s likely that the pixel’s final and most enduring role will be a shabby one, serving as an out-of-touch visual cliché to connote “the digital age”

JH

I’ve written before about trends in resolution independence, and why it matters.

November 3, 2008

Voting Done Right: Wait For It

Filed under: Design,Security,Usability
― Vincent Gable on November 3, 2008

Everyone wants to know the results of an election as soon as possible, including me. I will be spending tomorrow evening with friends, watching election results on live TV. I’ll be unhappy if a battle-ground state is slow to report, and I expect to know who the next president will be before I go to bed. But quick reporting of election results is in no way necessary, and in fact undermines our electoral system. We should put trustworthiness ahead of entertainment, and count votes deliberately.

According to the project triangle, you can do something quickly, you can do something cheaply, and you can do something well, but you can only do two out of three.

I propose that official tallies should not be released for 72 hours after polls close, by law. This gives us time to do voting right, and affordably.

A Hard Problem

Engineering a good voting system is a much harder problem than most people realize.

The system must be resistant to fraud by voters, election officials, and the politicians on the ballot.

Voters must vote only once. Nobody can be able to tie a particular vote to the person who cast it (that would allow voter intimidation and vote buying). Yet every vote must still be counted for the right candidate.

Tallies must be auditable (in case of a dispute a third party can re-count the votes). The whole system must be perceived as trustworthy and transparent by everyone.

Oh, and it has to scale to use by hundreds of millions of people on election day.

And all of this has to be built, and maintained, with very limited public funds.

This is a very hard problem already. Adding the extra requirement, “and final results must be ready two hours after polls close (so results can make prime-time TV)” would, in my opinion, make it an impossibly hard problem. Unfortunately, that is the direction we are moving.

No Need to Rush

Our electoral system was designed in an era when, cliché as it sounds, the Pony Express was the fastest way to communicate intra-nationally. Officials do not take office for several weeks after they have been voted in. Delaying the certification of a successor until Friday would not incapacitate government. It’s always clear who the current officials are until new ones take office.

Of course, today we live in a faster, more connected, world. It could be argued that this means we have a modern need for instant results. Fortunately, this does not appear to be the case. The fallout of the Bush v Gore election in 2000 proved that society and government can function just fine for several weeks without knowing who won an election.

The Fear

Confidence in modern voting machines is rightly low. For the first time in nearly three decades, there will be a decline in the number of people casting their ballots electronically. Nobody (lobbyists aside) seems to really think that these voting machines are working out for us, except that they do give “tallies” faster.

Personally, I am terrified of an all-electronic election. The reason is simple: it can’t be audited. Digital forensics just aren’t real enough. If someone stuffs a ballot box, they leave a trail of clues, down to the chemical composition of the paper. But there’s no record when bits are flipped to a crooked candidate. Any digital footprint can be faked. “Recounting” an electronic election would be pointless — asking the same program to run the same calculation, with the same data.

Of course, there are exotic solutions. It might be possible to develop a digital storage media that can only be written to once, and would record forensic information, like the time of each write. Unfortunately, none of these ideas sound remotely cost-effective. Which leaves….

Good old physical paper ballots. Slow, but sure, they are a proven technology that has earned our trust.

… then the Opposite of Progress is…

So why not simply mandate that paper ballots must be used for an election? Personally, I think that would give us a better election system than we have today. And it probably has a much better chance of happening than my idea of sitting on election results for three days.

But I don’t think it’s the best long-term solution. Historically, laws just don’t keep up with technology. And we have every indication that the pace of technological change is increasing. A little over seventy years ago, the Social Security Number was born. Today, we are stuck with them. I’m not convinced that paper will be the best medium for recording votes in 70 years.

Rather than dictating anachronistic implementations, it seems better to codify the right trade-offs to make when designing a voting system. Then we can organically reap the benefits of advances in voting technology, as we have historically.

The real problem is that we, as a voting public, are favoring quick results over reliable ones. This is a social problem, not a technological one. It is best to directly address the social expectations, not the technological details.

But honestly… it will never happen. We like our prime-time TV and instant gratification too much. Withholding election results, even temporarily, feels too dictatorial. We can expect to get our votes counted faster every year. I just hope it’s not at the expense of counting them correctly.

