Vincent Gable’s Blog

November 24, 2008

How To Put a % in an NSString/NSLog/printf

Filed under: Cocoa, MacOSX, Objective-C, Programming
― Vincent Gable on November 24, 2008

%% is turned into a single % in a call to NSLog, or -[NSString stringWithFormat:], or the printf-family of functions.

Note that %%format will become %format, even if %format usually prints an argument. For example, the code

NSLog(@"%%a will print a float in a machine-readable format, so that *scanf can read it back in from a string with no loss of precision.", 1.0f);

prints:

%a will print a float in a machine-readable format, so that *scanf can read it back in from a string with no loss of precision.

not:

%0x1p+0 will print a float in a machine-readable format, so that *scanf can read it back in from a string with no loss of precision.
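
The more common case is the reverse: printing a literal percent sign next to a value that is substituted in. The same %% escape handles it; for example (with an illustrative value):

NSLog(@"The download is %d%% complete.", 42);

prints:

The download is 42% complete.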

How To Space Your Code For Maximal Interoperability

Filed under: Design, Programming, Quotes, Usability
― Vincent Gable on November 24, 2008

The new rule for indentation and alignment: use tabs for achieving an indentation level, spaces for character alignment within an indentation level.

Christopher Bowns (and independently Peter Hosey)
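
As a concrete sketch of the rule (the drawString:atPoint: message below is hypothetical): the leading whitespace on each line of the method body is a single tab, which establishes the indentation level, and the run of spaces before atPoint: pads the wrapped line so its colon lines up with the colon of drawString: above it.

- (void)drawLabel {
	[self drawString:@"Hello"
	         atPoint:NSMakePoint(10, 10)];
}

Because both statement lines start with the same single tab, reindenting the file with a different tab width shifts them equally and the alignment survives, which is the point of the rule.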

The arguments make total sense to me. Here’s hoping for better IDE support in the future. Unfortunately (though interestingly), according to Steve Yegge, indentation is very hard:

I would have been publishing this article at least a month ago if it weren’t for indentation. No, six weeks, minimum.

See, I thought that since I had gone to all the thousands of lines of effort to produce a strongly-typed AST (abstract syntax tree) for JavaScript, it should therefore be really easy to do indentation. The AST tells me exactly what the syntax is at any given point in the buffer, so how hard could it be?

It turns out to be, oh, about fifty times harder than incremental parsing. Surprise!

Here’s a forward-looking (somewhat contrary) opinion,

Soft-wrapped code (leaving a long line long, and letting the IDE handle the spacing) also appears to be the direction that Apple are heading and they tend to drag a lot of Mac programmers along in their wake.

Matt Gallagher

I For One Welcome Our Vector Overlords

Filed under: Design, Quotes
― Vincent Gable on November 24, 2008

The pixel will never go away entirely, but its finite universe of digital watches and winking highway signs is contracting fast. It’s likely that the pixel’s final and most enduring role will be a shabby one, serving as an out-of-touch visual cliché to connote “the digital age”

JH

I’ve written before about trends in resolution independence, and why it matters.

November 14, 2008

Prefer copy Over retain

Filed under: Bug Bite, Cocoa, Objective-C, Programming
― Vincent Gable on November 14, 2008

(Almost) every time you use retain in Objective-C/Cocoa, you really should be using copy. Using retain can introduce some subtle bugs, and copy is faster than you think…

A Bug Waiting To Bite

The problem with using retain to “take ownership” of an object is that someone else has a pointer to the same object, and if they change it, you will be affected.

For example, let’s say you have a Person class with a straightforward setter method:
- (void) setThingsToCallTheBossToHisFace:(NSArray*)newNames {
   [thingsToCallTheBossToHisFace autorelease];
   thingsToCallTheBossToHisFace = [newNames retain];
}

And you use it to initialize a few Person objects:

NSMutableArray *appropriateNames = [NSMutableArray arrayWithObject:@"Mr. Smith"];
[anIntern setThingsToCallTheBossToHisFace:appropriateNames];

//Salaried Employees can also be a bit more informal
[appropriateNames addObject:@"Joe"];
[aSalariedEmployee setThingsToCallTheBossToHisFace:appropriateNames];

//the wife can also use terms of endearment
[appropriateNames addObject:@"Honey"];
[appropriateNames addObject:@"Darling"];
[theBossesWife setThingsToCallTheBossToHisFace:appropriateNames];


The code looks good, and it compiles without error, but it has a bug in it. Because setThingsToCallTheBossToHisFace: uses retain, each Person object’s thingsToCallTheBossToHisFace field is actually pointing to the exact same NSMutableArray. So adding “Darling” to the list of names the wife can use also adds it to the intern’s vocabulary.

If copy were used instead, each Person would have its own separate list of names, insulated from later changes to the temporary variable appropriateNames.
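
For comparison, here is what the copy-based version of the same setter might look like (a minimal sketch, still using manual reference counting like the original):

- (void) setThingsToCallTheBossToHisFace:(NSArray*)newNames {
   [thingsToCallTheBossToHisFace autorelease];
   //copy takes a snapshot, so later changes to the caller's mutable array
   //can not reach into this Person object
   thingsToCallTheBossToHisFace = [newNames copy];
}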

A Sneaky Bug Too

This is a particularly insidious problem in Foundation/Cocoa, because mutable objects are subclasses of immutable objects. This means every NSMutableThing is also an NSThing. So even if a method is declared to take an immutable object, if someone passes in a mutable object by accident, there will be no compile-time or run-time warnings.

Unfortunately, there isn’t a good way to enforce that a method takes an object, but not a subclass of it. Because Foundation makes heavy use of class clusters, it’s very difficult to figure out whether you have an immutable class or its mutable subclass. For example, with:
NSArray *immutableArray = [NSArray array];
NSMutableArray *mutableArray = [NSMutableArray array];

[immutableArray isKindOfClass:[NSArray class]] is YES
[immutableArray isKindOfClass:[NSMutableArray class]] is YES
[mutableArray isKindOfClass:[NSArray class]] is YES
[mutableArray isKindOfClass:[NSMutableArray class]] is YES
[mutableArray isKindOfClass:[immutableArray class]] is YES
[immutableArray isKindOfClass:[mutableArray class]] is YES

Sad, but true.

copy Is Fast!

With nearly every immutable Foundation object, copy and retain are the same thing (an immutable object simply retains and returns itself when copied), so there is absolutely no penalty for using copy over retain! The only time you would take a performance hit using copy is if the object actually was mutable. And then you really do want to copy it, to avoid bugs!

The only exceptions I know of are NSDate and NSAttributedString.

But don’t just take my word for it! Here’s the snippet of code I used to test all this:

NSMutableArray *objects = [NSMutableArray array];
//add anything that can be made with alloc/init
NSArray *classNames = [NSArray arrayWithObjects:@"NSArray", @"NSColor", @"NSData", @"NSDictionary", @"NSSet", @"NSString", nil];
for(NSString *className in classNames) {
   id obj = [[NSClassFromString(className) alloc] init];
   if(obj)
      [objects addObject:obj];
   else
      NSLog(@"WARNING: Could not instatiate an object of class %@", className);
}

//manually add objects that must be created in a unique way
[objects addObject:[[NSAttributedString alloc] initWithString:@""]];
[objects addObject:[NSDate date]];
[objects addObject:[NSNumber numberWithInt:0]];
[objects addObject:[NSValue valueWithSize:NSZeroSize]];

//test if retain and copy do the same thing
for(id obj in objects)
   if(obj != [obj copy])
      NSLog(@"copy and retain are not equvalent for %@ objects", [obj className]);

Best Practices

Get in the habit of using copy any time you need to set or initWith something. In general, copy is safer than retain, so always prefer it.

I believe it is best to try copy first. If an object cannot be copied, you will find out about it the first time your code is executed, and it will be trivial to substitute retain for copy. But it is much harder, and takes much longer, to discover that you should have been using copy instead of retain.

A program must be correct before it can be made to run faster. And we have seen there is no performance penalty for copy on most common objects. So it makes sense to try copy first, and replace it with retain only if measurement proves that necessary. You will be measuring before you start “optimizing”, right? (I also suspect that if taking ownership of an object is a bottleneck, then the right optimization is not to switch to retain, but to find a way to use a mutable object, or an object pool, to avoid the “take ownership” step altogether.)

Choose copy, unless you have a measurable justification for using retain.
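
If you are using Objective-C 2.0 declared properties, the same preference applies. A minimal sketch, reusing the hypothetical Person example from above:

@interface Person : NSObject {
   NSArray *thingsToCallTheBossToHisFace;
}
//copy, not retain: the synthesized setter snapshots whatever array is passed in
@property (copy) NSArray *thingsToCallTheBossToHisFace;
@end

With @synthesize thingsToCallTheBossToHisFace; in the implementation, the generated setter copies for you.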

UPDATE 2009-11-10: Obj-C 2.0 blocks have some peculiarities,

For this reason, if you need to return a block from a function or method, you must [[block copy] autorelease] it, not simply [[block retain] autorelease] it.
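
For example, something like this (a minimal sketch under manual reference counting; the Adder typedef and MakeAdder function are hypothetical names):

typedef NSInteger (^Adder)(NSInteger);

Adder MakeAdder(NSInteger amount) {
   Adder block = ^(NSInteger x) { return x + amount; };
   //the block literal lives on the stack; copy moves it to the heap,
   //which retain alone would not do, so the returned block stays valid
   return [[block copy] autorelease];
}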

November 6, 2008

Alan Kay on Why Computer-Based Teaching Fails

Here’s a lightly-edited transcription of Alan Kay, explaining why computer-aided instruction so often fails (from “Doing with Images makes Symbols”, 1987),

After the experience I’ve had with working with both children and adults with computers (and at least dabbling in the areas of learning and education), I think that one of the best ways of thinking of a computer is very similar to thinking of what a piano means when teaching music.

The piano can amplify musical impulse. We can only sing with one voice. If we want to play a four-part fugue, we have to use something mechanical, like a piano to do it. And it can be done very beautifully.

But for most people the piano has been the biggest thing that turns millions of people away from music for the rest of their lives. And I think the best way to sum it up is just to say that all musicians know that the music is not inside the piano…

So, in any situation where education and learning is involved, you first have to develop a curriculum based on ideas, not on media. Media can be an amplifier of those ideas, but you have to have the ideas first.

And I think the reason computers have failed is that almost everybody, no matter which way they have tried to use computers, has wanted the computer to be some sort of magic ointment over the suppurating wound of bad concepts. … But first you have to have the ideas.

This was exactly my experience as a student. I am dysgraphic — I have trouble writing legibly by hand, and with spelling. So I took a laptop to all my classes, from 8th grade (1997) through college. The laptop solved a particular problem for me. But outside of that, it did not enhance my education; in some cases it got in the way. (One professor found that the students who used laptops the most during class did 11% worse on tests than the rest of the class.) If I weren’t dysgraphic, I would have been better off with a Moleskine.

November 4, 2008

The Perils of Localization

Filed under: Uncategorized
― Vincent Gable on November 4, 2008

The sign below is supposed to say ‘No entry for heavy goods vehicles. Residential site only’ in English and Welsh.

[photo of the sign]

Unfortunately the Welsh version says ‘I am not in the office at the moment. Send any work to be translated’.

Story from the BBC

(Via Successful Software.)

November 3, 2008

Voting Done Right: Wait For It

Filed under: Design, Security, Usability
― Vincent Gable on November 3, 2008

Everyone wants to know the results of an election as soon as possible, including me. I will be spending tomorrow evening with friends, watching election results on live TV. I’ll be unhappy if a battle-ground state is slow to report, and I expect to know who the next president will be before I go to bed. But quick reporting of election results is in no way necessary, and in fact undermines our electoral system. We should put trustworthiness ahead of entertainment, and count votes deliberately.

According to the project triangle, you can do something quickly, you can do something cheaply, and you can do something well, but you can only do two out of three.

I propose that official tallies should not be released for 72 hours after polls close, by law. This gives us time to do voting right, and affordably.

A Hard Problem

Engineering a good voting system is a much harder problem than most people realize.

The system must be resistant to fraud by voters, and election officials, and the politicians on the ballot.

Voters must vote only once. But nobody must be able to tie a particular vote to a particular voter (that would allow voter intimidation and vote buying). Yet each vote must still be counted for the right candidate.

Tallies must be auditable (in case of a dispute a third party can re-count the votes). The whole system must be perceived as trustworthy and transparent by everyone.

Oh, and it has to scale to use by hundreds of millions of people on election day.

And all of this has to be built, and maintained, with very limited public funds.

This is a very hard problem already. Adding the extra requirement, “and final results must be ready two hours after polls close (so results can make prime-time TV)” would, in my opinion, make it an impossibly hard problem. Unfortunately, that is the direction we are moving.

No Need to Rush

Our electoral system was designed in an era when, cliché as it sounds, the Pony Express was the fastest way to communicate intra-nationally. Officials do not take office for several weeks after they have been voted in. Delaying the certification of a successor until Friday would not incapacitate government. It’s always clear who the current officials are until new ones take office.

Of course, today we live in a faster, more connected, world. It could be argued that this means we have a modern need for instant results. Fortunately, this does not appear to be the case. The fallout of the Bush v Gore election in 2000 proved that society and government can function just fine for several weeks without knowing who won an election.

The Fear

Confidence in modern voting machines is rightly low. For the first time in nearly three decades, there will be a decline in the number of people casting their ballots electronically. Nobody (lobbyists aside) seems to really think that these voting machines are working out for us, except that they do give “tallies” faster.

Personally, I am terrified of an all-electronic election. The reason is simple: it can’t be audited. Digital forensics just aren’t real enough. If someone stuffs a ballot box, they leave a trail of clues, down to the chemical composition of the paper. But there’s no record when bits are flipped to a crooked candidate. Any digital footprint can be faked. “Recounting” an electronic election would be pointless — asking the same program to run the same calculation, with the same data.

Of course, there are exotic solutions. It might be possible to develop a digital storage media that can only be written to once, and would record forensic information, like the time of each write. Unfortunately, none of these ideas sound remotely cost-effective. Which leaves….

Good old physical paper ballots. Slow, but sure, they are a proven technology that has earned our trust.

… then the Opposite of Progress is…

So why not simply mandate that paper ballots must be used for an election? Personally, I think that would give us a better election system than we have today. And it probably has a much better chance of happening than my idea of sitting on election results for three days.

But I don’t think it’s the best long-term solution. Historically, laws just don’t keep up with technology. And we have every indication that the pace of technological change is increasing. A little over seventy years ago, the Social Security Number was born. Today, we are stuck with them. I’m not convinced that paper will be the best medium for recording votes in 70 years.

Rather than dictating anachronistic implementations, it seems better to codify the right trade-offs to make when designing a voting system. Then we can organically reap the benefits of advances in voting technology, as we have historically.

The real problem is that we, as a voting public, are favoring quick results over reliable ones. This is a social problem, not a technological one. It is best to directly address the social expectations, not the technological details.

But honestly… it will never happen. We like our prime-time TV and instant gratification too much. Withholding election results, even temporarily, feels too dictatorial. We can expect to get our votes counted faster every year. I just hope it’s not at the expense of counting them correctly.
