Without thinking about it too much, I had written roughly the following code:
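The original snippet hasn't survived here, but it was roughly along these lines (a sketch only; `-keyMatchesRequirements:` is a hypothetical stand-in for my actual matching logic):

```objc
__block id result = nil;
[dictionary enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
    // -keyMatchesRequirements: is a made-up placeholder for the real test
    if ([self keyMatchesRequirements:key]) {
        result = obj;
        *stop = YES;
    }
}];
```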
Turns out this doesn't scale very well at all.
But first: "Why, Mike, are you not just doing a regular dictionary lookup?". Well, in this case the keys in use don't always exactly match the information available for looking them up. Just before this code, yes, I test to see if the dictionary contains the key I'm after, but if not, I have to fall back to checking each key individually to see if it meets my requirements.
Aaaaaanyway. I had assumed the new-fangled block-based enumeration (-enumerateKeysAndObjectsUsingBlock:) could run through all the keys and objects very quickly.
However, sampling quickly reveals it's quite slow. It turns out that for each entry in the dictionary, the enumeration is internally just calling [self objectForKey:]. (This was a mutable dictionary; immutable dictionaries might well be implemented to be faster and more cunning.) As you can imagine, my keys in this case are a little on the complex side, so they take a bit longer to look up than, say, a string.
Furthermore, looking at the code, you can see that the object goes unused until we find a match. So all that time in -objectForKey: is likely wasted!
Moral of the story: provided you don't need to enumerate every value as well as the keys, "fast enumeration" introduced in OS X 10.5 Leopard is still faster:
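The fast-enumeration version looks something like this (again a sketch, with the same hypothetical `-keyMatchesRequirements:` standing in for the real test). Crucially, -objectForKey: is only ever called for the one key that matches:

```objc
id result = nil;
for (id key in dictionary) {
    if ([self keyMatchesRequirements:key]) {
        // Only pay the lookup cost once we've found a match
        result = [dictionary objectForKey:key];
        break;
    }
}
```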
And hey, it's more readable code too!
Update: I forgot to note that one downside of "fast enumeration" here is that you lose the ability to supply NSEnumerationOptions, such as NSEnumerationConcurrent. If you do find that a useful performance gain, I suppose you could first grab the dictionary's keys and then use block-based enumeration on that array. Hopefully not too much slower.
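That workaround might look something like this sketch (same hypothetical `-keyMatchesRequirements:`; note that if more than one key could match, the concurrent writes to `result` would need proper synchronization):

```objc
// Grab the keys up front, then enumerate that array concurrently
NSArray *keys = [dictionary allKeys];
__block id result = nil;
[keys enumerateObjectsWithOptions:NSEnumerationConcurrent
                       usingBlock:^(id key, NSUInteger idx, BOOL *stop) {
    if ([self keyMatchesRequirements:key]) {
        // Assumes at most one key matches; otherwise this write races
        result = [dictionary objectForKey:key];
        *stop = YES;
    }
}];
```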
Update 2: This post made its way onto Apple's Objective-C mailing list, garnering a reply from an Apple employee suggesting the situation might improve in future.