Another interesting point: judging from the public-domain scans posted on Google Books and their associated metadata, it is clear that Google often scans multiple copies of the same title. Apparently, if a copy comes in from Library A they scan it; if one later comes in from Library B they scan it again; if yet another comes in from Library C they scan it yet again, and so on, with different cataloging data attached to each publicly posted copy. There is all kinds of confusion in their public data regarding multi-volume works, as well as editions. Note that with 19th-century works, reprints of exactly the same material are often called “editions” in the work itself. Sometimes the exact same material was simply retitled for a reprinting for marketing reasons (a practice that has not entirely disappeared).
Given all that, and setting aside the fact that Google is also scanning bound collections of magazines and lumping them in with books, I don’t see that Google is even bothering to count the number of unique books scanned. To be fair, the online library meta-catalog WorldCat also not infrequently has multiple entries for the same title, at least for older works. But WorldCat is not doing anything illegal.
I think Leonid Taycher’s post is merely an assertion that Google can, and will, scan all books, regardless of copyrights, lawsuits, protests by copyright owners, and any and all other actions on the part of everyone who opposes Google’s mighty will. That is extremely troubling. I cannot address James’s statement as to what Google employees believe, but certainly any belief that pirating copyrighted works is good for society goes hand in hand with such piracy being extremely likely to yield enormous profits for Google.