The Best Practices suggest extracting key or index lists out of large slices and using hashes to manage them.
I like this idea, but I’m not sure I follow the code in the book. It suggests using a hash of indices – and for some reason, all their keys are negative numbers. Is that needed?
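Here is a sketch of the kind of construct I take the book to mean (the names are mine, not PBP's): a hash that maps field names to array indices, so later code can say `$record[ $FIELD{rank} ]` instead of a magic number.

```perl
use strict;
use warnings;

# A hash of named indices into a record array. The negative values count
# back from the end of the array; I won't guess at the book's rationale
# for starting with negative ones, this just shows the mechanics.
my %FIELD = (
    name   => -3,
    rank   => -2,
    serial => -1,
);

my @record = ( 'Smith', 'Corporal', 123_456 );
my $rank = $record[ $FIELD{rank} ];    # 'Corporal'
```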
It also then uses an assignment of values %CORRESPONDING = keys %CORRESPONDING, and says that works because keys and values always traverse in the same order. I didn’t know that, and if I were anyone with less Perl chops under my belt, I might disbelieve it, or at least want to fiddle with it a bit to test it. I worry that the recent hash-ordering changes might have broken this, although if they had, I’m sure they’d have broken half the world, and the Perl 5 Porters aren’t that kind of group. So it probably works.
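What I believe that assignment is doing (a sketch, not the book's actual code) is inverting an index hash with a single hash-slice assignment. It is only correct because Perl guarantees keys and values traverse a hash in the same order, so the Nth value lines up with the Nth key:

```perl
use strict;
use warnings;

my %INDEX_OF = ( name => 0, rank => 1, serial => 2 );

# Invert the hash in one slice assignment. Safe only because keys and
# values walk the hash in the same order on an unmodified hash.
my %NAME_OF;
@NAME_OF{ values %INDEX_OF } = keys %INDEX_OF;

# Now $NAME_OF{1} eq 'rank', $NAME_OF{0} eq 'name', and so on.
```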
Worryingly, they also say that this clear, extensible, and maintainable construct, which I’m having trouble reading, will be used heavily later in the book. Statements like this always make me feel stupid, because I don’t think it’s that clear and maintainable; I’m having real trouble telling you why this code does what it does. Any time you assume something is easy, you are certain to be wrong for someone. Building more complex structures on top of this is not going to be easy for me to follow.
The later examples show the offsets don’t need to be negative (why, then, did you start with them?) and even include an example where offsets aren’t used at all. That one is actually very clear: it sets up a name-to-position mapping for the fields returned by the stat call, which I like and rarely see. I wonder how many times I should have used it for localtime.
So, I think it’s a good idea, but hard to explain, and kind of a corner case. It’s just the right tool when you need it, but that may not be as often as they think.
The somewhat dubious reasoning behind the negative indices comes earlier in the text, in the “Array Indices” section.
The each, keys, and values functions all use the same logic to iterate over a hash (which is why calling keys or values on a hash resets an in-progress each loop over it). Hash order only changes when keys are inserted or deleted. You can find documentation backing this up in perldoc -f each ( http://perldoc.perl.org/functions/each.html ): “So long as a given hash is unmodified you may rely on keys, values and each to repeatedly return the same order as each other.”
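You can check the quoted guarantee for yourself: as long as the hash is not modified, the Nth value really is the value for the Nth key.

```perl
use strict;
use warnings;

my %h = map { $_ => uc } 'a' .. 'e';

my @k = keys %h;      # whatever order Perl picks this run
my @v = values %h;    # guaranteed to be the same order as @k

for my $i ( 0 .. $#k ) {
    die "order mismatch" if $h{ $k[$i] } ne $v[$i];
}
```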
The new random hash ordering has no effect on this. What it does is make it very unlikely two runs of the same program with the same data will result in the same hash ordering. The reason for the new randomness is to make it harder for an attacker to craft a set of keys that will drastically slow down hash lookups and assignments. You can learn more by reading perldoc perlsec ( http://perldoc.perl.org/perlsec.html#Algorithmic-Complexity-Attacks ).
> [PBP] says … keys and values always traverse in the same order.
That is true, according to the documentation for keys, values, and each ( http://perldoc.perl.org/functions/each.html , second paragraph): “So long as a given hash is unmodified you may rely on keys, values and each to repeatedly return the same order as each other.” The write-up I read about the recent randomization mentioned that they made sure to keep this property.