Vincent Gable’s Blog

November 11, 2009

Just Look at it, Man!

Filed under: Bug Bite, Programming
― Vincent Gable on November 11, 2009

You’re looking at Anscombe’s quartet: 4 datasets with identical simple statistical properties (mean, variance, correlation, linear regression), but obvious differences when graphed.

[Image: Anscombe’s quartet, graphed as four scatter plots]

(via Best of Wikipedia)

Graphs aren’t a substitute for numerical analysis. Graphs are not a panacea. But they’re excellent for discovering patterns and outliers, and for getting intuition about a dataset. If you never graph your data, then you’ve never really looked at it.

War Story

I was working on optimizing color correction, using SSE (high-performance x86 instructions). One operation required division, an expensive operation for a computer. The hardware had a divide instruction, but sometimes using the Newton-Raphson method to do the division in software is faster. You never know until you measure.

While doing the measurement, I somehow got the crazy idea to try both. I’d already unrolled the inner loop, so instead of repeating the divide or Newton’s method twice, I’d do a divide and then use Newton’s method for the next value. Strangely enough, this hybrid was faster on the hardware I was benchmarking than either method used alone. Modern hardware is a complex and scary beast.
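
For the curious, here is a minimal sketch of what that hybrid loop could look like, using SSE intrinsics. This is a reconstruction under assumptions, not the original code: the function name, array names, unroll factor, and the omitted tail handling are all illustrative.

#include <stddef.h>
#include <xmmintrin.h> /* SSE intrinsics */

/* Compute out[i] = num[i] / den[i], alternating the hardware divider
   with a reciprocal estimate refined by one Newton-Raphson step.
   Leftover elements (count not a multiple of 8) are omitted for brevity. */
void hybrid_divide(const float *num, const float *den, float *out, size_t count)
{
    for (size_t i = 0; i + 8 <= count; i += 8) {
        __m128 n0 = _mm_loadu_ps(num + i),     d0 = _mm_loadu_ps(den + i);
        __m128 n1 = _mm_loadu_ps(num + i + 4), d1 = _mm_loadu_ps(den + i + 4);

        /* First four values: the hardware divide instruction. */
        __m128 q0 = _mm_div_ps(n0, d0);

        /* Next four values: ~12-bit reciprocal estimate, then one
           Newton-Raphson refinement, r' = r * (2 - d*r). */
        __m128 r = _mm_rcp_ps(d1);
        r = _mm_mul_ps(r, _mm_sub_ps(_mm_set1_ps(2.0f), _mm_mul_ps(d1, r)));
        __m128 q1 = _mm_mul_ps(n1, r);

        _mm_storeu_ps(out + i, q0);
        _mm_storeu_ps(out + i + 4, q1);
    }
}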

I was fortunate enough to have a suite of very good unit tests to run against my optimized code. But there was a caveat to testing correctness. Because computers don’t have infinitely precise arithmetic, two correct algorithms might give different answers; if the numbers they gave were close enough to the infinitely precise answer (say a couple of ulps apart), it was good enough. (We can only be exact within some tolerance!) The tests cleared my hybrid divide/Newton-Raphson function, but we couldn’t use it, because it was fundamentally broken.
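
That kind of tolerance check is usually done by comparing bit patterns, since consecutive IEEE 754 floats have consecutive integer representations. Here is a sketch of one common way to do it (my reconstruction, not the test suite’s actual code), assuming single-precision floats and skipping the NaN/infinity edge cases:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* True when a and b are within maxUlps representable floats of each other. */
static bool withinUlps(float a, float b, int32_t maxUlps)
{
    int32_t ia, ib;
    memcpy(&ia, &a, sizeof ia); /* reinterpret the bits without aliasing trouble */
    memcpy(&ib, &b, sizeof ib);

    /* Remap negative floats so the integer values are ordered the same
       way as the floats they represent (and -0.0 equals +0.0). */
    if (ia < 0) ia = INT32_MIN - ia;
    if (ib < 0) ib = INT32_MIN - ib;

    int64_t diff = (int64_t)ia - ib; /* widen so the subtraction can't overflow */
    if (diff < 0) diff = -diff;
    return diff <= maxUlps;
}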

Even though the error was acceptably small, it had a nasty distribution. The hardware divide instruction gave color values that were a bit too light. Doing the divide in software gave values that were a bit too dark. Individually these errors were fine. Randomly spread over the image, they would have been fine. But processing every other pixel differently added alternating light/dark stripes! We see contrast, not absolute color, so the numerically insignificant error was quite visible. Worse still, bands of 1-pixel stripes combined to form a shimmering Moiré pattern. It was totally busted. Unusable.

This was all immediately obvious when the results of the color correction were “graphed”. Actually looking at the answer caught a subtle error that our suite of unit tests missed.

To be clear, the more subjective graphical analysis is not a substitute for numerical analysis and data mining. But I believe in actually looking at your data at least once. A graph is a kind of end-to-end visualization of everything, and that has value. Graphs are a cheap sanity check: does everything look right? And sometimes they can give you real insight into a problem.

May 29, 2009

Don’t Waste Memory to Save Time

Filed under: iPhone, MacOSX, Programming, Usability
― Vincent Gable on May 29, 2009

In the provocatively titled Space is Time: How Your CS Theory Class Lied to You, Greg Parker explains why hogging memory to speed up your program is a bad idea on consumer-level devices.

Space is time. An optimization that makes your program faster may make the user’s system slower overall. Play well with others.

I think it’s excellent advice. There’s only one point I want to make (and it must be made now, before silent SSDs become more popular).

Every time your whole computer is nonresponsive, it makes that grrnrrnrnrr sound, right? That’s because it ran out of RAM, and the virtual memory system is thrashing to the disk. In the real world, on computers like the one you are on now, low memory plays a part in every system-wide slowdown.

Hogging the CPU can make your application slow. (Although if you pay attention to threading and the run loop, using 100% of every core shouldn’t kill responsiveness and usability.) But hogging memory makes the whole damn world slow, including your application. And the worst-case scenario of bad memory management kills the system deader than a kernel panic.

This isn’t to say that caching never makes sense. But a healthy respect for memory usage generally trumps worrying about the CPU.

March 4, 2009

Retest Your (Low Level) Optimizations

Filed under: Programming, Quotes
― Vincent Gable on March 4, 2009

(Measuring before and after applying an optimization is) more important these days. With optimizing compilers and smart virtual machines, many of the standard optimizing techniques are not just ineffective but also counterintuitive. Craig Larman really brought this home when he told me about some comments he received after a talk at JavaOne about optimization in Java. One builder of an optimizing virtual machine said, in effect,

“The comments about thread pools were good, but you shouldn’t use object pools because they will slow down our VM.”

Then another VM builder said,

“The comments about object pools were good, but you shouldn’t use thread pools because they slow down our VM.”

Not only does this reinforce the need to measure with every optimization change, it also suggests that you should log every change made for optimization (a comment tag in the source code is a good option) and retest your optimizations after upgrading your compiler or VM. The optimization you did six months ago could be your bottleneck today.

Martin Fowler (PDF), 2002

I’ve written before about the decay of machine-specific optimization. Even if your code isn’t run by a VM, I think it’s reasonable to expect that (at least some of it) might be run on a smartphone in the near future.
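
In practice, Fowler’s “comment tag” can be as simple as a greppable marker next to each optimization, so every tagged site can be found and re-measured after a toolchain upgrade. The tag format below is just one I made up:

// PERF: hand-unrolled; measured faster than the straightforward loop
// on today's compiler. Grep for "PERF:" and re-benchmark every tagged
// site after a compiler or VM upgrade.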

March 3, 2009

Vincent’s Notes: End To End Arguments in System Design

Filed under: Programming
― Vincent Gable on March 3, 2009

At Michael Tsai’s suggestion I listened to the paper End-To-End Arguments in System Design while driving. (Fair warning: since I was driving while listening, I didn’t absorb everything as well as I should have.)

The thrust of the paper is that you generally want to make your low-level components (aka libraries) simpler than you think. Counter-intuitively, building extra reliability into a low-level component does not (usually) make it easier to build a reliable application that uses the component. That’s because the application has to work around all sorts of other errors from different components, so it must have error-handling code anyway. Making one low-level component “smarter” does not change this. But it does make the component more complex. And some of that complexity is duplicate code that does just what the application’s error-handling code does.

The “End to End” in the title of the paper comes from a file-transfer application having to do an end-to-end check to make sure that the files at both ends of the transmission are the same.
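
As a sketch of that check, collapsed onto one machine for illustration: after the transfer completes, each end hashes its copy of the file, and the application compares the digests rather than trusting every link and disk along the way. The checksumOfFile() and retryTransfer() helpers here are hypothetical stand-ins, not from the paper.

#import <Foundation/Foundation.h>

/* Hypothetical helpers, standing in for whatever digest and retry
   machinery the application actually has. */
NSData *checksumOfFile(NSString *path);
void retryTransfer(void);

void verifyTransfer(NSString *sourcePath, NSString *destinationPath)
{
    NSData *sent = checksumOfFile(sourcePath);
    NSData *received = checksumOfFile(destinationPath);
    if (![sent isEqualToData:received]) {
        /* The end-to-end check failed; recover at the application
           level, no matter which component garbled the file. */
        retryTransfer();
    }
}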

Conclusions

End-to-end arguments are a kind of “Occam’s razor” when it comes to choosing the functions to be provided in a communication subsystem. Because the communication subsystem is frequently specified before applications that use the subsystem are known, the designer may be tempted to “help” the users by taking on more function than necessary. Awareness of end-to-end arguments can help to reduce such temptations….

January 29, 2009

Flash Hate

Filed under: Quotes
― Vincent Gable on January 29, 2009

I don’t like Flash because it is responsible for the overwhelming majority of my browser crashes. I don’t like it because it consumes memory and (especially) CPU resources on my computer for almost the sole purpose of showing me advertisements, which also translates directly to reduced battery life on my laptop.

But it’s interesting to note that it’s quite a technical and ethical challenge to run a browser without Flash.

Steven Frank

I’ve written before about how important it is to optimize the CPU usage of your website for the mobile world. And this is yet another reason for anyone who is ad-supported to care. People will tolerate advertisements that are just there. But when ads kill their work time, or are otherwise malignant, people will take active steps to stop them. And that means no more advertising revenue.

January 17, 2009

Lessons From Fast Food: Efficiency Matters

Filed under: Design, Programming, Quotes, Research, Usability
― Vincent Gable on January 17, 2009

Every six seconds of improvement in speed of service amounts to typically a 1% increase in sales. And it has a dramatic impact on the bottom line.

–John Ludutsky, President of Phase Research, quoted on the “Fast Food Tech” episode of Modern Marvels, aired 2007-12-29 on the History Channel.

I wouldn’t expect things to be much different in the software world. The faster you get your burger bits, the better.

UPDATED: 2009-02-05:
Apparently people want service much faster from software. Greg Linden reports,

Half a second delay caused a 20% drop in traffic. Half a second delay killed user satisfaction.

This conclusion may be surprising — people notice a half second delay? — but we had a similar experience at Amazon.com. In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

If Mr. Ludutsky’s figure is accurate, a 20% drop in fast-food revenue would require a two-minute delay (at six seconds per 1%, 20% works out to 120 seconds). Does this mean every second spent waiting on a computer is as bad as waiting four minutes in meatspace? I don’t know; I’m doing a lot of extrapolation from hearsay. But it’s something to consider.

December 22, 2008

How To Multi

Avoid distributed computing unless your code is going to be run by a single client with a lot of available hardware. Being able to snarf up CPU cycles from idle hardware sitting around in the user’s house sounds cool but just doesn’t pay off most of the time.

Avoid GPGPU on the Mac until Snow Leopard ships unless you have a really good application for it. OpenCL will make GPGPU a lot more practical and flexible, so trying to shoehorn your computationally expensive code into GLSL or CoreImage today just doesn’t seem worth it.

Using multiple processes is a good idea if the subprograms are already written. … If you’re writing your code from scratch, I don’t recommend it unless you have another good reason to write subprocesses, as it’s difficult and the reward just isn’t there.

For multithreading, concentrate on message passing and operations. Multithreading is never easy, but these help greatly to make it simpler and less error prone.

Good OO design will also help a lot here. It’s vastly easier to multithread an app which has already been decomposed into simple objects with well-defined interfaces and loose coupling between them.

Mike Ash (emphasis mine, line-breaks added). The article has more detail and is very much worth reading.
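
Here is a minimal sketch of the “operations” approach, in the Cocoa idiom the article is about. The work items and their -process method are made-up names; the point is that each independent chunk of work becomes an operation, and the queue handles the threads:

#import <Foundation/Foundation.h>

/* Each object in workItems is assumed to implement a (hypothetical)
   -process method that does one independent chunk of work. */
void processAll(NSArray *workItems)
{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    for (id item in workItems) {
        NSInvocationOperation *op =
            [[NSInvocationOperation alloc] initWithTarget:item
                                                 selector:@selector(process)
                                                   object:nil];
        [queue addOperation:op]; /* the queue schedules work across cores */
        [op release];
    }
    [queue waitUntilAllOperationsAreFinished];
    [queue release];
}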

One point that this advice really drives home for me is that you need to focus on making good code first, and defer micro-optimizations. If taking the time to clean up some code makes it easier to parallelize, then you are optimizing your code by refactoring it, even if at a micro-level you might be making some of it slower by, say, not caching something that takes O(1) time to compute.

Apple does not sell a Mac that’s not multi-core, and even the iPhone has a CPU and a GPU. There’s no question that optimization means parallelization. And all signs point to computers getting more parallel in the future. Any optimization that hurts parallelization is probably a mistake.

December 18, 2008

Fast Enough or Not Enough Fast?

Filed under: Quotes, Usability
― Vincent Gable on December 18, 2008

…people are now willing to make trade-offs against performance. For the entire history of the PC industry, computers have been too slow, so trade-offs were made in favor of faster CPUs: higher prices and heavier laptops. But today, for many common tasks, the type of CPU you get when you build a $400 lightweight laptop is fast enough. That’s (a) breakthrough.

John Gruber

Cynically, I also wonder if this is because “more cores” isn’t as compelling as “faster”. As Hank Williams says,

The problem of multi-core computing is really very simple. As most of us have experienced, every problem *can’t* be solved better or faster with more people. Some problems can be solved faster by adding a few people, but most problems cannot. In truth, most problems can best, or only be solved by one person at a time. And so it is with computing. The vast majority of problems can only be solved by one logic thread at a time. The reason is obvious. For most process-oriented work, step B is based on the results of step A. And step C is based on the results of step B, and so on.

December 4, 2008

Optimize People’s Time Not The Machine’s Time

Filed under: Quotes, Usability
― Vincent Gable on December 4, 2008

Bruce “Tog” Tognazzini gives the best example I’ve seen of optimizing a person’s time, not a machine’s time:

People cost a lot more money than machines, and while it might appear that increasing machine productivity must result in increasing human productivity, the opposite is often true. In judging the efficiency of a system, look beyond just the efficiency of the machine.

For example, which of the following takes less time? Heating water in a microwave for one minute and ten seconds or heating it for one minute and eleven seconds?

From the standpoint of the microwave, one minute and ten seconds is the obviously correct answer. From the standpoint of the user of the microwave, one minute and eleven seconds is faster. Why? Because in the first case, the user must press the one key twice, then visually locate the zero key, move the finger into place over it, and press it once. In the second case, the user just presses the same key (the one key) three times. It typically takes more than one second to acquire the zero key. Hence, the water is heated faster when it is “cooked” longer.

Other factors beyond speed make the 111 solution more efficient. Seeking out a different key not only takes time, it requires a fairly high level of cognitive processing. While the processing is underway, the main task the user was involved with (cooking their meal) must be set aside. The longer it is set aside, the longer it will take to reacquire it.

You also need to consider actual performance vs. perceived performance when optimizing a user’s time.

Cocoa Coding Style Minutia: All Forward Declarations on One Line is the One True Way

This is a very petty point, but I believe there’s a right answer, and since it’s come up before I’m going to take the time to justify the best choice.

Forward Declare Classes on One Line, Not Many Lines

Right:
@class Foo, Bar, Baz...;

This is the style that Apple’s headers generally follow; look at NSDocument.h, for example.

Wrong:
@class Foo;
@class Bar;
@class Baz;
...

Programming languages are to be read by people first, and interpreted by computers second. I really do hope anyone in software engineering can take that as an axiom!

Therefore, always favor brevity above all else with statements that are only for the compiler. They are by far the least important part of the code base. Keeping them terse minimizes distractions from code that can and should be read.

Naturally, it’s very hard to find a statement that’s only for the compiler. I can think of only one such statement in Objective-C: the forward declaration of a class with @class.

What Does @class Do?

@class Foo; tells the compiler that Foo is an Objective-C class, so that it can treat Foo * as a valid object pointer type (objects are always referenced through pointers, so that is all the size information it needs). It does not tell the compiler what methods and ivars a Foo has; to do that you need to #import "Foo.h" (or have an @interface Foo... in your .m file).

@class Foo;       Foo is a black box. You can only use it as an argument.
#import "Foo.h"   You can also send messages to Foo.

What is @class Good For?

Since #import "Foo.h" does everything @class Foo; does and more, why would you ever use @class? The short answer is less time wasted compiling.

Let’s say you have a Controller class, and it has an ivar that’s a Foo *. To get it to compile, you put #import "Foo.h" inside Controller.h. So far so good. The problem comes when Foo.h is changed. Now any file that has #import "Foo.h" in it must be recompiled, so Controller.h has to be recompiled, so any file that has #import "Controller.h" in it must also be recompiled, and so on. Many objects that don’t use Foo objects, but do talk to the Controller (or to something that talks to something that talks to the Controller!), have to be rebuilt as well. It’s likely that the entire project would end up being rebuilt. With even a moderately sized project, that means a lot of needless compile time!

One solution is to put a forward declaration, @class Foo;, in Controller.h, and #import "Foo.h" in Controller.m and the few files that actually do something to Foo objects. @class Foo; gives the compiler just enough information to build an object with a Foo * ivar, because the ivar is just a pointer, and a pointer’s size is known. And since only objects that need to talk to Foo objects directly have any dependency on Foo.h, changes to Foo can be tested quickly. The first time I tried this solution in one of my projects, compile times dropped from minutes to seconds.
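
Concretely, the arrangement looks something like this (Foo’s -doSomething method is a made-up example):

/* Controller.h: a forward declaration is enough, because the ivar
   is only a pointer. No dependency on Foo.h here. */
#import <Foundation/Foundation.h>
@class Foo;

@interface Controller : NSObject {
    Foo *foo;
}
- (void)refresh;
@end

/* Controller.m: import the full interface only here, where Foo
   is actually sent messages. */
#import "Controller.h"
#import "Foo.h"

@implementation Controller
- (void)refresh {
    [foo doSomething]; /* needs Foo.h; @class alone isn't enough */
}
@end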

Readers Need Less @class

Forward declarations add value to the development process, as does anything that saves programmers time. But @class tells a human reader nothing. That is why I say you should use them in your code, but minimize the space they take up by putting them all on one line.

@class Foo; tells a reader that “Foo is black-box that is a class.” That adds no useful information.

If the reader does not care about what a Foo is (they are treating it as a black box), then knowing that Foo is a class is useless information. Foo could be a struct, or a bit-field, or a function pointer, or anything else, and they still would not need to know that to understand the code.

Conversely, if they need to know what a Foo is to understand the code, then they need to know more than “it is a class”. They need to see Foo.h or documentation; @class just isn’t enough.

Exceptions

I want to again emphasize that this isn’t a big deal. I strongly feel that putting all forward declarations on one line is the right answer, but doing it wrong won’t perceptibly matter in the end.

So I don’t think there’s any reason to address exceptions to the rule. Do what you gotta do.

Best Practices

Use @class to speed up compile-times as much as you can, but use the keyword @class as little as possible, since it is for the compiler, not the person reading the code.
