In an article at Library Journal (via), Mark J. Ludwig and Margaret R. Wells praise Google Books for its ability to provide both full-text searching and content, noting that "while users do need to watch out for the Google Books 'doughnut hole,' i.e., the gap between scanned material out of copyright and new born-digital books fresh from publishers, materials in Google Books are far more visible and accessible than those in the local catalog and our collections." Quite understandably, they're concerned about Google Books' likely effect on a) smaller college libraries and b) the dissemination of academic monographs in general.

But both this article and Merrilee Proffitt's useful response (which raises the question of copyright law) ignore the by-now thoroughly dead horse (in fact, I'm starting to think that it's a zombie, ready to munch on academic brains) that I've been beating for some time: Google Books is digitizing books badly. The scans can be blurry, distorted, or chopped in half--and scanning problems affect the search function. There are books missing their first page. There are books missing their last page. There are books missing random chunks of pages. There are books with pages in the wrong order. There are inconsistencies (are triple-deckers scanned as one volume? As three?). And, of course, there is the sheer and utter uselessness of snippet view. (I just suffered through yet another snippet view that landed me in the margin. You know, the blank space on the page.)

From an academic POV, this lack of interest in anything resembling quality control is not a minor glitch or superficial inconvenience.
As I've said on more than one occasion, I'm completely enamored with the idea of Google Books, and I have found all sorts of potentially wonderful material by using it. But far too much of that potentially wonderful material remains just that--potentially wonderful.