Presumably ``secure'' is supposed to mean ``secure against known factorization methods.'' All the other known methods appear to be much slower than NFS.
There are other stages of NFS, notably the linear-algebra stage, but the authors estimate that a particular linear-algebra circuit (which they falsely claim to be new) will finish ``within a day'' and cost ``a few thousand dollars'' for 1024-bit inputs. The authors explicitly claim that sieving is the bottleneck.
The Lenstra-Shamir-Tomlinson-Tromer paper spends one paragraph, on page 14, estimating (on the basis of o(1) abuse: plugging concrete sizes into the asymptotic NFS formula with the o(1) term silently dropped) that conventional NFS sieving ``would require about a year on about 30 million 1GHz PCs with large memories,'' and then leaps to the conclusion that 1024-bit keys provide a ``reasonable level of security.''
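To see the sort of extrapolation involved (I'm reconstructing the genre, not claiming this is the authors' exact computation): take the standard NFS heuristic exp(((64/9)^{1/3}+o(1)) (log N)^{1/3} (log log N)^{2/3}), pretend the o(1) is zero, and compare a 1024-bit input to a 512-bit input. The ratio comes out to several million; multiply by whatever effort one assigns to a 512-bit factorization and out come figures of the ``30 million PCs'' flavor. Here's that arithmetic as a few lines of Python:

    from math import exp, log

    def L_no_o1(bits, c=(64 / 9) ** (1 / 3)):
        # NFS operation-count heuristic with the o(1) silently dropped:
        # exp(((64/9)^(1/3) + o(1)) * (ln N)^(1/3) * (ln ln N)^(2/3)).
        lnN = bits * log(2)
        return exp(c * lnN ** (1 / 3) * log(lnN) ** (2 / 3))

    # Ratio of predicted 1024-bit cost to predicted 512-bit cost;
    # prints a number of several million.
    print(L_no_o1(1024) / L_no_o1(512))

The entire exercise hinges on the assumption that the o(1) is negligible at these sizes, an assumption nobody has justified.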
The authors claim to understand (and falsely claim to have understood for years) that circuit NFS sieving is much more cost-effective than conventional NFS sieving for large input sizes. However, when faced with 1024-bit keys, they don't even consider the possibility that 1024 bits is already large, and that conventional sieving is therefore the wrong approach.
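The general point, with the exponents left symbolic since nobody knows the relevant o(1)'s: write L(N) = exp((log N)^{1/3} (log log N)^{2/3}). If conventional sieving costs L(N)^{a+o(1)} and circuit sieving costs L(N)^{b+o(1)} with b < a, then a fixed budget reaches moduli whose bit length is larger by a factor approaching (a/b)^3 with the circuits:

    \[
      a\,(\log N_{\mathrm{conv}})^{1/3}(\log\log N_{\mathrm{conv}})^{2/3}
      \;=\;
      b\,(\log N_{\mathrm{circ}})^{1/3}(\log\log N_{\mathrm{circ}})^{2/3}
      \quad\Longrightarrow\quad
      \frac{\log N_{\mathrm{circ}}}{\log N_{\mathrm{conv}}}
      \;\longrightarrow\;
      \Bigl(\frac{a}{b}\Bigr)^{3},
    \]

since the doubly logarithmic factors wash out as the sizes grow. ``Large input sizes'' means exactly the sizes where this gap starts to matter; whether 1024 bits is already such a size is the question the authors skip.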
This is a gaping hole in the Lenstra-Shamir-Tomlinson-Tromer analysis.
The authors are implicitly assuming, without justification, that conventional sieving is the most cost-effective approach to sieving for 1024-bit keys. In particular, they are implicitly assuming, without justification, that circuits (mesh sorting, mesh routing, smoothness detection by early-abort rho+ECM in place of sieving, etc.; see my October 2001 grant proposal) would not be much less expensive. The authors conclude, without justification, that sieving by itself is a bottleneck for 1024-bit keys.
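To make ``smoothness detection without sieving'' concrete, here is a toy serial sketch, purely illustrative, not the circuit design in the proposal; the bound, iteration count, and polynomial below are arbitrary placeholders, not parameters of any real NFS run. The relevant feature is that each candidate is tested independently, so tests of this flavor run in parallel across a mesh of tiny circuits with essentially no memory per cell, whereas conventional sieving wants a huge shared array:

    from math import gcd

    def rho_factor(n, iterations):
        # Pollard's rho with Floyd cycle detection, f(x) = x^2 + 1 mod n.
        # Returns a nontrivial factor of n, or None if nothing turns up
        # within the allotted number of iterations.
        if n % 2 == 0:
            return 2
        x = y = 2
        for _ in range(iterations):
            x = (x * x + 1) % n
            y = (y * y + 1) % n
            y = (y * y + 1) % n
            d = gcd(abs(x - y), n)
            if d == n:
                return None      # sequence cycled; give up on this candidate
            if d > 1:
                return d
        return None

    def looks_smooth(n, bound, rounds=4, iterations=1000):
        # Early-abort smoothness test: peel off small factors quickly and
        # abort as soon as that stops working.  False negatives are fine;
        # they cost a small fraction of the genuinely smooth values.
        m = n
        for _ in range(rounds):
            if m <= bound:
                return True
            d = rho_factor(m, iterations)
            if d is None or d > bound:
                return False     # early abort
            while m % d == 0:
                m //= d
        return m <= bound

    # Toy run: test each candidate independently, with no sieve array anywhere.
    candidates = [a * a + 2**64 + 1 for a in range(1, 200)]
    smooth = [n for n in candidates if looks_smooth(n, 2**20)]
    print(len(smooth), "of", len(candidates), "candidates look smooth")

Losing a few smooth values to the aborts is fine; what matters is the price-performance ratio of a machine running millions of these small tests in parallel.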
Maybe special-purpose circuits will have a dramatic impact on the difficulty of 1024-bit and 1536-bit and 2048-bit factorizations. Maybe they'll have no impact at all. I don't know yet. I do know that ignoring the question doesn't help us find the answer.