February 11, 2009
Asking Nicely Works
Panic did some experimentation … a little over a year ago, when they released Candy Bar 3.1 They have a phone-home system for serial numbers — not for any sort of Adobe- or Microsoft-style “activation” scheme, but simply to check whether a serial number is valid or known to be circulating on bootleg message boards and forums. They experimented with different dialog boxes that appeared when a user entered a known-to-be-pirated serial number. One message was staid and serious (“Microsoft-style”, in Cabel Sasser’s words), along the lines of “It appears someone gave you an invalid serial number…”; the other two messages were more personal, along the lines of “Please don’t pirate Candy Bar. We’re a small company making software for you, and software sales are what keep our company going.”
They got better results with the more personal messages — about 10 percent of would-be-bootleggers presented with those dialogs clicked the button and immediately bought a legitimate license for the app. But even the staid, impersonal message had a 5 percent sell-through rate — far higher than Panic expected.
February 10, 2009
Good engineering is necessary, but good design has a more direct impact on helping people do amazing things with computers
…the thinking that ultimately sank Douglas Engelbart’s visionary but incredibly complicated NLS (oN-Line System): Engelbart didn’t consider it all that necessary to develop an easy-to-use interface because, he felt, people invested years in learning human languages, so why not invest 6 months in learning his system’s powerful, language-size command structure? It’s an interesting argument when you think about it that way, but it ultimately doomed his design to obscurity, while his protégés who left for Xerox PARC and designed a system people could learn to use in an hour went on to change the world. Frictionless user experience is paramount; engineering concerns are secondary.
The title of this post is something I’ve been saying as part of my personal statement on hirevincent.com for years.
February 5, 2009
I Solemnly Swear to Make Mistakes
President Barack Obama, and two other presidents before him, have had to retake the oath of office because of a mistake during their inauguration. That means a little over one in fifteen presidential oaths were botched. If that sounds high, it is. But only because people make mistakes.
That’s why we must make our software so that people can recover after making a mistake.
January 14, 2009
Why Work
Psychologists talk about two kinds of motivation: intrinsic and extrinsic. Intrinsic motivation is what drives you to do something regardless of whether you will receive a reward. Why do you spend an hour cleaning the inside of your stove? Nobody looks in there. Your intrinsic motivation compels you to do a thorough job. We all have it — in fact, most people start out with the desire to excel at whatever they do. Extrinsic motivation is the drive to do something precisely because you expect to receive compensation, and it’s the weaker of the two.
The interesting thing, according to psychologists, is that extrinsic motivation has a way of displacing intrinsic motivation. The very act of rewarding workers for a job well done tends to make them think they are doing it solely for the reward; if the reward stops, the good work stops. And if the reward is too low, workers might think, “Gosh, this is not worth it.” They will forget their innate, intrinsic desire to do good work.
January 4, 2009
Conway’s Law
Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization’s communication structure.
As Wikipedia points out, “Conway’s law was not intended as a joke … but as a valid sociological observation. It is a consequence of the fact that two software modules A and B cannot interface correctly with each other unless the people who made A communicate with the people who made B. Thus the interface structure of a software system necessarily will show a congruence with the social structure of the organization that produced it.”
December 12, 2008
Catching nil Member Variables
When I wrote this method it seemed like it would be helpful,
#import <objc/runtime.h> //for class_copyIvarList, ivar_getTypeEncoding, object_getIvar

//log a warning for any member variable (ivar) that is nil
- (void) warnOfNilIVars;
{
    unsigned int ivarCount = 0;
    Ivar *ivarList = class_copyIvarList([self class], &ivarCount);
    if(ivarList) {
        for(unsigned int i = 0; i < ivarCount; i++){
            //only check ivars with an object type (type encoding '@')
            const char *typeString = ivar_getTypeEncoding(ivarList[i]);
            if(typeString && typeString[0] == '@')
                if(!object_getIvar(self, ivarList[i]))
                    NSLog(@"***WARNING: ivar %s of %@ is nil", ivar_getName(ivarList[i]), [self className]);
        }
        free(ivarList);
    }
}
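For example (a hypothetical sketch: the window controller and its outlets are made up, and it assumes -warnOfNilIVars is declared in a category on NSObject), you could call it from -awakeFromNib to catch IBOutlets that never got connected in the nib:

#import <Cocoa/Cocoa.h>
#import "NSObject+WarnOfNilIVars.h" //hypothetical header declaring -warnOfNilIVars

@interface MyWindowController : NSWindowController
{
    IBOutlet NSTextField *statusField; //will be nil if the nib connection is missing
    IBOutlet NSButton *refreshButton;
}
@end

@implementation MyWindowController
- (void)awakeFromNib
{
    [self warnOfNilIVars]; //logs a warning for every object ivar that is still nil
}
@end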
But in practice I haven’t really used it (and when I have, there were quite a few false positives). Still, I think it’s pretty neat that you can do something like this in Objective-C.
If you find a use for it, please let me know!
December 11, 2008
There are Worse Things Than Crashing
Crashing is not the worst thing a program can do. Data-loss and corruption are worse bugs than crashing. I think this is generally accepted in theory, but I often hear developers say things like,
If there’s one behavior of your application that you should focus on eliminating, it’s the behavior of crashing. Above all other “nuisances,” this is the one that is absolutely unacceptable.1
And I certainly agree that crashing is unacceptable, and a high-priority bug. But it’s not the highest priority kind of bug.
For example, say your bank’s website crashes all the time. That sure sucks for everyone! People will be upset, some of them enough to take their business elsewhere. But if the website even occasionally moved the decimal point around when you sent money, the bank would be out of business faster than you could say suedfornegligence.
Even for software that isn’t used by banks, hospitals, nuclear reactors, etc., the basic principle is the same: it’s better to crash than to vandalize people’s data.
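To make that concrete, here is a minimal sketch (the function and record format are made up for illustration) of preferring a loud failure over quietly saving garbage:

#import <Foundation/Foundation.h>

//Hypothetical save routine: it refuses to write a record it can tell is bad.
static BOOL SaveAccountRecord(NSDictionary *record, NSString *path)
{
    //NSCAssert raises an exception here, rather than letting corrupt data
    //reach the disk and silently overwrite the user's good copy.
    NSCAssert([record objectForKey:@"balanceInCents"] != nil,
              @"refusing to save an account record with no balance");
    return [record writeToFile:path atomically:YES];
}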
As with any general rule, there are exceptions. It’s probably worse if an email program crashes than if it “corrupts” data by adding an extra space to the end of a message. In fact, Mail could be doing this right now, and I wouldn’t know or care. (Outside of movie plot scenarios where evil CS 101 students are holding the city hostage, and I have 10 seconds to defuse the bomb by sending an email with a body containing the de-activation code and not a single extra character. Although to be fair, there’s probably a clause in Mail’s EULA saying it’s a violation of the terms of sale to use it to defuse thermonuclear weapons.)
But the threshold of “acceptable” corruption is vanishingly small. If Mail made the subject line of an email all lowercase about 1% of the time, that would plausibly be worse than a 1% chance of crashing. It’s not OK for the subject of a job-application email to be changed to “vincent gable, professional and detail-oreinted programmer” [sic] (and you know what they say about first impressions).
The Unthinkable
I think a big part of why crashes are thought to be the worst thing a program can do is that, fortunately, crashes are the worst thing most programs will ever do. So they end up taking the #1 spot on the bug-severity triage sheet.
Data-vandalization is orders of magnitude less common than crashing in shipping programs. And if you count crashes during development, that proportion shrinks dramatically. Every non-trivial program has crashed at some point during the development process.
And a leading cause of data loss is programs crashing without saving something. Do you classify that as a data-loss bug or a crash? Well, the crash is the thing that needs to be fixed to solve the problem, so probably a crash.
Small wonder then that crashes are considered by some to be the worst kind of bug, even though erasing or corrupting someone’s data is worse. Most recently, I have been going by Wil Shipley’s bug triage priorities, which give highest priority to “Data-loss bugs”.
1 I don’t mean to call out Daniel Jalkut; his was just the first quote that jumped to mind. And for all I know he agrees with me, and classifies mauling a user’s data as something worse than a “nuisance”.
December 6, 2008
The Most Memorable Thing a CS Professor Ever Told Me About Software Engineering
The first semester of my sophomore year, I took CS 337: Theory in Programming Practice from Jayadev Misra. On the last day of class, he talked about computer science in general, took questions, and let us out early. And one thing he said really made an impression on me. I’m quoting from what I remember today; I didn’t write the exact quote down, so this could very well be embellished or incorrect in some detail,
Even if we had the fast computers we have today in the 1960’s, and even if we had the internet, we could not have built a modern web-browser, because we did not understand enough about building programs of that complexity.
It’s amazing how young software development is as a discipline.
December 4, 2008
Cocoa Coding Style Minutia: All Forward Declarations on One Line is the One True Way
This is a very petty point, but I believe there’s a right answer, and since it’s come up before I’m going to take the time to justify the best choice.
Forward Declare Classes on One Line, Not Many Lines
Right:
@class Foo, Bar, Baz...;
This is the style that Apple’s headers generally follow; look at NSDocument.h, for example.
Wrong:
@class Foo;
@class Bar;
@class Baz;
...
Programming languages are to be read by people first, and interpreted by computers second. I really do hope anyone in software engineering can take that as an axiom!
Therefore, always favor brevity above all else with statements that are only for the compiler. They are by far the least important part of the code base. Keeping them terse minimizes distractions from code that can and should be read.
Naturally, it’s very hard to find a statement that’s only for the compiler. I can only think of one such statement in Objective-C, forward declarations of an object with @class.
What Does @class Do?
@class Foo; tells the compiler that Foo is an Objective-C class, so that it knows how much space to reserve in memory for things of type Foo. It does not tell the compiler what methods and ivars a Foo has; to do that you need to #import "Foo.h" (or have an @interface Foo... in your .m file).
| @class Foo; | #import "Foo.h" |
| --- | --- |
| Foo is a black box. You can only use it as an argument. | You can also send messages to Foo. |
What is @class Good For?
Since #import "Foo.h" does everything @class Foo; does and more, why would you ever use @class? The short answer is less time wasted compiling.
Let’s say you have a Controller class, and it has an ivar that’s a Foo. To get it to compile, you put #import "Foo.h" inside Controller.h. So far so good. The problem comes when Foo.h is changed. Now any file that has #import "Foo.h" in it must be recompiled. So Controller.h has to be recompiled. So any file that has #import "Controller.h" in it must also be recompiled, and so on. Many objects that don’t use Foo objects, but do talk to the Controller (or to something that talks to something that talks to the Controller!), have to be rebuilt as well. It’s likely that the entire project would end up being rebuilt. With even a moderately sized project, that means a lot of needless compile time!
One solution is to put a forward-declaration, @class Foo;, in Controller.h, and #import "Foo.h" in Controller.m, and the few files that actually do something to Foo objects. @class Foo; gives the compiler just enough information to build an object with a Foo* ivar, because it knows how much space that ivar will need in memory. And since only objects that need to talk to Foo objects directly have any dependency on Foo.h, changes to Foo can be tested quickly. The first time I tried this solution in one of my projects, compile times dropped from minutes to seconds.
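Concretely, the split looks something like this (a minimal sketch; Controller, Foo, and the methods shown are hypothetical names from the example above):

// Controller.h: only needs to know that Foo is a class
#import <Foundation/Foundation.h>

@class Foo; //forward declaration: no dependency on Foo.h here

@interface Controller : NSObject
{
    Foo *foo; //a pointer-sized ivar, so @class is enough
}
- (void)refresh;
@end

// Controller.m: actually sends messages to Foo, so it imports the full header
#import "Controller.h"
#import "Foo.h"

@implementation Controller
- (void)refresh
{
    [foo doSomething]; //needs Foo's full @interface from Foo.h
}
@end

Now editing Foo.h only forces Controller.m (and the other .m files that import Foo.h) to rebuild; everything that merely imports Controller.h is left alone.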
Readers Need Less @class
Forward declarations add value to the development process, as does anything that saves programmers time. But @class tells a human reader nothing. Which is why I say you should use them in your code, but minimize the space they take up, by putting them all on one line.
@class Foo; tells a reader that “Foo is a black box that is a class.” That adds no useful information.
If the reader does not care about what a Foo is (they are treating it as a black box), then knowing that Foo is a class is useless information. Foo could be a struct, or a bit-field, or a function pointer, or anything else, and they still would not need to know that to understand the code.
Conversely, if they need to know what a Foo is to understand the code, then they need to know more than “it is a class”. They need to see Foo.h or documentation — @class just isn’t enough.
Exceptions
I want to again emphasize that this isn’t a big deal. I strongly feel that putting all forward declarations on one line is the right answer, but doing it wrong won’t perceptibly matter in the end.
So I don’t think there’s any reason to address exceptions to the rule. Do what you gotta do.
Best Practices
Use @class to speed up compile times as much as you can, but use the keyword @class as little as possible, since it is for the compiler, not the person reading the code.
November 24, 2008
How To Space Your Code For Maximal Interoperability
The new rule for indentation and alignment: use tabs for achieving an indentation level, spaces for character alignment within an indentation level.
Christopher Bowns (and independently Peter Hosey)
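To make the rule concrete, here is a small sketch (the function is made up for illustration); in the comments, ⇥ stands for the tabs that create each indentation level and · for the spaces used to align things within that level:

#import <Foundation/Foundation.h>

static void PrintBox(double x, double y, double width, double height)
{
    // ⇥ one tab produces this indentation level
    if (width > 0 && height > 0) {
        // ⇥⇥ two tabs produce this level; everything past the indentation
        // is aligned with spaces (······), so the wrapped argument list
        // stays lined up no matter what tab width a reader prefers:
        NSLog(@"box at (%f, %f), size %f x %f",
              x, y, width, height);
    }
}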
The arguments make total sense to me. Here’s hoping for better IDE support in the future. Unfortunately (though interestingly), according to Steve Yegge, indentation is very hard:
I would have been publishing this article at least a month ago if it weren’t for indentation. No, six weeks, minimum.
See, I thought that since I had gone to all the thousands of lines of effort to produce a strongly-typed AST (abstract syntax tree) for JavaScript, it should therefore be really easy to do indentation. The AST tells me exactly what the syntax is at any given point in the buffer, so how hard could it be?
It turns out to be, oh, about fifty times harder than incremental parsing. Surprise!
Here’s a forward-looking (somewhat contrary) opinion,
Soft-wrapped code (leaving a long line long, and letting the IDE handle the spacing) also appears to be the direction that Apple are heading and they tend to drag a lot of Mac programmers along in their wake.