An ISP runs a mail cluster distributed over a number of hosts in a /24 subnet. The SPF record correctly reads:
$ host -t TXT isp.com
isp.com descriptive text "v=spf1 ip4:subnet/24"
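A receiver that honours this record would whitelist the whole /24 rather than a single host. As a minimal sketch of that check (the record above only shows the placeholder "subnet", so the documentation range 192.0.2.0/24 stands in for the ISP's real network):

```python
import ipaddress

# Hypothetical subnet standing in for the ISP's real /24; the SPF record
# in the transcript only contains the placeholder "subnet".
AUTHORIZED = ipaddress.ip_network("192.0.2.0/24")

def is_authorized(sender_ip: str) -> bool:
    """True if the connecting IP falls inside the SPF-authorized range."""
    return ipaddress.ip_address(sender_ip) in AUTHORIZED

print(is_authorized("192.0.2.17"))    # inside the /24
print(is_authorized("198.51.100.1"))  # outside the /24
```

A greylisting implementation that keys its whitelist on this kind of subnet match would treat all cluster hosts as one sender, which is exactly what the setup described next needs.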
Since the mail cluster has to deliver a large volume of mail per day, it is attached to a SAN and distributes message delivery over the various servers. A random server picks up a message and attempts to deliver it. Locking works well, so no duplicate delivery attempts are ever made. This effectively prevents messages with delivery problems from clogging up the queue on a specific server.
Here's why the ISP can forget their great mail cluster: greylisting. A lot of implementors don't investigate in any way what they could whitelist (there's a variety of options, ranging from SPF over the RIPE database to server name wildcards, nasty as those would be) but instead whitelist one single host. Since a random cluster server picks up the deferred message, the likelihood of the resend attempt coming from the same IP is fairly low, so the next server will also hit the greylisting barrier. This can continue for a long time until the mail is finally delivered, or, if things go really badly, rejected.
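How long "a long time" is can be sketched with a quick Monte Carlo simulation. The model below is an assumption, not measured data: a 256-host cluster, each retry made by a uniformly random host, and a greylister that defers the first contact from each IP and accepts a retry from any IP it has seen before. Delivery then succeeds on the first repeated host, which is the classic birthday problem:

```python
import random

def average_attempts(n_servers=256, trials=10_000, seed=42):
    """Estimate the mean number of delivery attempts per message under the
    hypothetical model: every retry comes from a random cluster host, and
    greylisting keys on the individual sending IP."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()          # IPs the greylister has already deferred once
        attempts = 0
        while True:
            attempts += 1
            host = rng.randrange(n_servers)
            if host in seen:  # a retry from a known IP: accepted
                break
            seen.add(host)    # first contact from this IP: deferred
        total += attempts
    return total / trials

print(average_attempts())
```

Under these assumptions the average comes out at roughly twenty attempts; with typical retry intervals of fifteen minutes or more, that is hours of delay for every single message, and mail to impatient greylisters that give up earlier is lost outright.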
While this is not an argument against greylisting itself, it is one against the majority of its implementations.