
Thursday, February 21, 2008

How Google is Making Us Stupid by Gideon Haigh

Many technologies change our lives; only very few infiltrate and colonise our language. Google is the name Sergey Brin and Larry Page gave to their upstart start-up eight years ago, which they charged with an immodest mission to ‘organize all the world’s information and make it universally accessible and useful’. To ‘google’ is now to rove the vast, virtual expanses of the World Wide Web; Germans ‘googelte’, Japanese ‘guguru’, Finns ‘googlata’. The word ‘search’ once evoked journeys to distant places and into darkest interiors; ‘search’ is now something we do from our computers, with Google and ‘search engines’ like it as our emissaries.
Google is synonymous, too, with speed and simplicity. But nothing about it has been speedier or simpler than its rise. It is not a decade since it was first launched on the website of the inventors’ Stanford alma mater, not seven years since they first attracted backing from venture capitalists, and a mere 17 months since the company raised $US2 billion by going public, a process which has since inflated its market value to $US120 billion. Most technologies penetrate either the public or the private sphere. Google, with its disarming ‘don’t be evil’ philosophy, has come to pervade both, as the preferred means for about half the six hundred million searches initiated every day, whether by students seeking sources, employers verifying résumés, professionals trawling journals, shoppers chasing bargains or even lovers checking partners – it is now two years since Sex and the City’s Carrie Bradshaw googled her paramour Aleksandr Petrovsky.
Nor is there sign of Google’s progress slowing. Google’s revenues mushroomed 96% last year; its ambitions exceed quantification. It has challenged eBay, Craigslist and other classified advertising formats with Google Base; it wants to wrest the video download market from iTunes with Google Video Store; it aspires to computerise the culture itself with the Google Print for Libraries project, comprising fifteen million digitised searchable books. Brin and Page tackle business with such evangelical fervour that one industry observer recently called Google “a religion posing as a company”.
The speed with which Google has attained ubiquity, however, is as problematic as it is intoxicating. Perhaps no innovation has been assimilated so wholly, and with so little reflection on how it may change us – as, inevitably, it will. For technological change, the sociologist Neil Postman remarked, is neither additive nor subtractive, but ecological: “One significant change generates total change. If you remove the caterpillars from a given environment, you are not left with the same environment minus caterpillars: you have a new environment, and you have reconstituted the conditions of survival.” Google’s impact on the biodiversity of the information ecosystem is not something we will ‘find on Google’, but it needs consideration – fast.

Type ‘Martin Luther King’ into Google and you will find as the third link, where it has been for at least five years, the site martinlutherking.org, purporting to be “a valuable resource for teachers and students alike”. If you start tapping the links, it emerges that this is far from being “A True Examination of Martin Luther King”: it is a white supremacist site promoting, among others, the works of the American fascist David Duke and the Stormfront nationalist movement. Yet there it remains, prominent in the ‘relevance ranking’ of Google’s legendary PageRank system, merely one of its famous eccentricities.
Google will find you any point of view on any subject, occasionally in close cohabitation: google ‘scientology’ and you are led first to the church’s official site, second to xenu.net, dedicated to attacking it. And often this is the outcome of calculated tendentiousness, intended to manipulate. Third on a Google search of ‘global warming’ is globalwarming.org, a site that only collates information disputing that world temperatures are rising; fourth is climatehotmap.org, a site only collating information that they are. And this can also get ugly: the fourth link in a search for Adolf Hitler is still the Hitler Historical Museum, the first crack in whose patina of historical respectability is a remark that negative views of Nazism are “standard, uninformative and clichéd”.
This is not, of course, Google’s fault, but it is an inevitability of PageRank. Early search engines used simple keyword algorithms, ideally suited to small data sets. Spammers soon realised that they could capture traffic by secreting keywords like ‘cars’ on their sites, sometimes in small white letters on a white background; by 1998, for example, most searches for ‘cars’ on Lycos led to pornographic sites. Larry Page made the conceptual breakthrough, as David Vise explains in his new book The Google Story: “Counting the numbers of links pointing to a website was a way of ranking that site’s popularity. While popularity and quality don’t go hand in hand, he and Brin had both grown up in homes that valued scholarly research published in academic journals with citations”. The more citations, runs the academic rule of thumb, the more important the work. Nor did Page and Brin stop at this eureka moment. They grasped that some links were more important than others – from, say, a big collecting institution, university or media organisation – and also measured other attestations of quality, such as how long a site had survived.
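The link-counting idea Vise describes can be sketched in a few lines of Python. This is an illustration only, not Google’s actual algorithm, whose details remain proprietary: the toy pages, the damping factor of 0.85 (the figure commonly cited in the original PageRank paper) and the iteration count are all assumptions made for the example. Each page starts with an equal share of ‘rank’; on every pass it hands a portion of that rank to the pages it links to, so that pages cited by already well-cited pages float to the top.

# A minimal, illustrative sketch of the 'citation counting' idea described above,
# not Google's production algorithm. Each incoming link is treated as a vote,
# and votes from highly ranked pages count for more.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # every page keeps a small base rank, plus whatever its in-links pass on
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # pages with no outgoing links pass nothing on in this simplified sketch
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # a link passes on a share of its source's rank
        rank = new_rank
    return rank

# Hypothetical toy web: 'hub' is linked to by both other pages, so it ranks highest.
toy_web = {
    'hub': ['blog'],
    'blog': ['hub', 'forum'],
    'forum': ['hub'],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))

Run on the toy graph, ‘hub’ comes out on top simply because the other pages point to it, which is precisely the self-reinforcing quality discussed below.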
Google is upfront about this much: check out its guide to the algorithm at http://www.google.com/technology/. But it is difficult to be more definitive, for Google guards its proprietary secrets jealously, and is resolutely uncooperative even with the information technology press. James Rogers, editor of IT website Byte and Switch, complained recently: “Getting information out of Google is like getting blood from a stone. Harder, in fact. Requests of all kinds, including requests for comment, are simply ignored. Requests for interviews, if persistent, are routinely, if politely, denied.” And, in this, Google may be onto something: in general, the public does not care. Like primitive man’s relationship with nature, our relationship with technology is marvelling, worshipful, any deep-buried dread balmed by a soothing incomprehension. It doesn’t matter, we tell ourselves, that we don’t know how Google works: what matters is that it does. After all, who understands the workings of a DVD player or a BlackBerry?
The analogy, however, is false. If a person exhibited the biases of Google – if a teacher, for instance, was teaching that Martin Luther King was a communist, plagiarist and “modern-day plastic god” – we would surely want to know; likewise if a library made accessible so much pornography and so many cures for cancer. New media guru Bill Thompson, star of the BBC World Service’s Go Digital, says that a huge catch-up is required: “It’s quite clear that we – the wide internet-using public – have no real idea just how search works or how results are arrived at, and that this will become a significant issue. For the older users – I include myself – who have grown up to believe that trusted sources of information are validated in some way by external authority, the way that Google or MSN Search works would be disconcerting if they had any idea what was going on.”
In its speed, precision and reliability, Google is a marvel. But it is also deceptively limited, being essentially self-reinforcing: the same sites get visited, like martinlutherking.org, because the same sites get visited, our exploratory clicks like a never-ending episode of ‘Information Idol’. By bestowing its highest commendation on sites that are most popular, moreover, Google tends to leave huge vistas of the web unexplored. The so-called ‘dark web’, seldom accessed because it seldom is, but accounting for perhaps 80% of web content, includes probably its most useful material: tens of thousands of content-rich databases maintained by universities, libraries, associations, businesses, and government agencies. Google won’t get you there.
Google is relatively easily ‘gamed’, whether by the hugely profitable ‘search engine optimisation’ industry that promises to push your company into the all-important top ten links, whose practitioners congregate at sites like webmasterworld.com, or by unscrupulous ‘click frauds’, who with bogus clicks push up the advertising rates charged to rival firms. It is also often deceived by multiple links from blogs and discussion forums on websites. James Frost of business website The Eureka Report explains: “Say if you have a forum where someone posts a message saying: ‘I love Collingwood’. Then someone comes along and posts: ‘I hate Collingwood’. And this starts a discussion. Once that gets to, say, twenty people, that’s twenty hits, twenty page views, maybe twenty users with a range of links. Say that one of those links is to a newspaper story about Mick Malthouse changing the exercise bikes, then Google will assume that there’s some kind of relationship between the two. Similarly, if you have lots of links within your website to other pages, and if you grow and grow and grow and grow, then Google will assume that’s a very sophisticated site, and view that as evidence of popularity.”
Google’s PageRank system – so confident, so definite, so apparently fastidious in its distinctions – is an emblem of our times. In an era spoiled for choice, we are grateful for anything that purports to discriminate on our behalf: thus so many recent cultural phenomena, from the rise of PowerPoint to the popularity of top tens. Yet Google does no thinking. The reassuring phrase ‘relevance ranking’, suggesting that the search is nuanced to the subject rather than simply keywords, is extremely misleading. In genuinely surveying a subject, Google remains hugely inferior to something as old-fashioned as a library catalogue itemised according to conceptual categories. “The term ‘relevance ranking’ as applied to internet search engines’ ordering of retrieved results is a corruption of the meaning of ‘relevance’,” says Thomas Mann, chief reference librarian at the US Library of Congress. “Relevance ranking is not at all the same as categorization by concept; the latter brings together all materials relevant to a topic no matter what variant keywords are used by different authors on the subject.”
Google’s cleverest conceit, however, is its home page. Most rivals clutter theirs with news, images and links: Googlers are greeted, thanks to Page’s and Brin’s vestigial countercultural leanings, with a site that has remained altogether pristine. The impression does not last. Behind the virginally white shopfront lurks a retail extravaganza. A Google search for ‘flowers’ leads overwhelmingly to online florists, for ‘apples’ predominantly to Apple Macs. Google owes its exponentially expanding revenues, furthermore, to nothing more elaborate than advertising. But, again, this is something users choose overwhelmingly to ignore. A recent study by the Pew Internet & American Life Project said that almost two-thirds of Google users did not understand the difference between its free search results and the advertisements displayed to the right, soothingly labelled not ‘advertisements’ but ‘sponsored links’. “Google ads are unusually effective because most people don’t realize they’re ads,” commented Alan Deutschman of the magazine Fast Company. “Is that evil?”
Google may not even be definitively the best search engine any more. Information professionals extol the virtues of Gigablast, which allows searching of PDF, PowerPoint and Excel content, and Exalead, which narrows results by allowing users to specify words and phrases that results ‘preferably contain’, ‘must contain’ and ‘must not contain’. For the rest of us, Google works, it’s OK, it’s been around for a while, and there’s a sense of reassurance from the fact that everyone uses it.
It’s arguable, in fact, that Google’s principal problems lie with us, its loyal public. We are still not very good at applying it to anything other than retrieving the most basic information. Google’s advanced search features are barely used, and nobody believes that will change – for one thing, they are actually not easy to find. “The truth about Google is that it is extremely complex,” comments Donald Norman, America’s dean of industrial design and author of the classic The Design of Everyday Things (2002). “Say if you want a scientific paper. Google has a feature called ‘Scholar Search’. But you would have to know how to find it. Is it under ‘Advanced Search’? No. In fact, you only get there by typing ‘Other’, and there it is, a single link on this big page. Google is very proud it is not subject to human bias; it is all done by algorithm. The trouble is that humans use it.”
The vast majority of us are satisfied to tap in one or two keywords, then be dazzled by the resultant profusion of links, with their impression of breadth and depth. In fact, the terms of our search have probably already guaranteed inadequacy. Research indicates that users in four out of five searches do not proceed past the first page of links. This is not necessarily because they have the best information, but because they have ‘enough’. “I tell my students that Google is a cheap date,” says Maureen Henninger of UTS, author of two guides to online research. “It will give you something, but it probably won’t be what you need.”
Clumsy, poorly defined keyword choices are part of the problem. “Google and all the other relevance-ranking algorithms … have a complex set of features they use to try to figure out what each document is ‘about’,” says Sue Feldman of Boston-based technology consultancy IDC. “But it’s pretty hard to figure out what someone is looking for based on just one term that comes in out of the blue. If you type in ‘pools’, do you mean swimming pools, car pools, gene pools or betting pools? Most words have multiple meanings. So the search engine, if it has no other clues, is likely to offer up some of each to you, and hope that it has guessed right.”
Context and authority are even greater concerns. To the man with a hammer, everything looks like a nail; to the man with Google, everything looks like information. “In the days of books, you obtained a context for your information from what it was embedded in,” says Henninger. “You understood the artefact. There is no artefact on the web. Say you google a disease. There is some information. Who produced it? How sound is it? People understand the difference between reading about a disease in New Idea and in the New England Journal of Medicine; they know there will be a take, a spin. That sense is unbelievably missing on the web.”
“In the past,” says Donald Norman, “we could assume that teachers wouldn’t prescribe bad books, that respected journals wouldn’t print bad articles, that you could trust good newspapers to print accurate stories. Where everything is available, we have to do that judging for ourselves.” And we are a trusting lot. Apt to raise a great hue and cry about American content on television, we emit not a peep about the vast overrepresentation of American culture on the web. Deploring authors who misrepresent themselves in print like Norma Khouri or James Frey, we remain untroubled that probably the most popular online reference source is compiled anonymously. The five-year-old Wikipedia, now the largest encyclopedia in history, with two million entries, and the most popular, receiving 2.5 billion page views a month, owes its growth to a numberless army of anonymous contributors, and the free hand it grants for revision and improvement. In the last few months, in fact, the inevitable has happened and Wikipedia has been wracked by scandal. Norwegian prime minister Jens Stoltenberg found that his biography had been vandalised and contained a number of libellous statements; former MTV VJ Adam Curry admitted to having anonymously edited the entry on ‘podcasting’ to embellish his role in its growth; a prankster anxious to win a workplace bet was revealed to have doctored an entry concerning American journalist John Seigenthaler, implicating him in the Kennedy assassination. The aggrieved, moreover, found that they had no recourse: being a hosting company rather than a publisher, the Wikimedia Foundation cannot be sued. But, as any academic will tell you, Wikipedia is the site hit by more students than any other – and it will probably stay that way.
Ultimately, Google owes its thrall not to being hugely effective, but to being enormously convenient. Firstly, it is easy to use: free, simple and accessible. An immutable rule of human behaviour is what social scientists call the principle of least effort, classically defined as follows: “That in any problem situation that admits of more than one possible solution, people will tend to choose the solution that produces a minimally acceptable result with the least expenditure of effort.” It is a rule that not only makes satisfying sense, but one which researchers have modelled again and again, since the first paper by Herbert Menzel and Elihu Katz fifty years ago demonstrated that drug salesmen preferred relying on anecdotal evidence from workmates to more authoritative sources like professional journals. Economist Herbert Simon later coined the word ‘satisfice’ to convey the idea of satisfaction with sufficiency.
In an ideal world, a hit or a link on Google would be the provocation to start looking somewhere – at the book to which it pointed you, in the archive to which it alerted you. In the real world, it tends to become the last stop as well as the first, heading curiosity off at the pass, sometimes shortening the time available for research simply by existing: your boss can demand your report tomorrow, knowing you can google plenty enough in the time allowed; your editor can insist a story be turned round overnight, aware that what’s needed can simply be plundered from the web.
Secondly – and a factor not to be underestimated – Google is comfortable. Looking for information when one does not know where to start can be awkward, even embarrassing. For the last twenty years, psychologists have been studying a condition called ‘library anxiety’. In the seminal two-year study of six thousand students at the University of Tennessee twenty years ago, Constance Mellon found that a sizeable majority experienced anxiety while working in libraries, causing “interfering responses” in their researching. As one respondent confessed: “When I first entered the library, I was terrified. I didn’t know where anything was located or even who to ask to get some help. It was like being in a foreign country and unable to speak the language.”
Tony Onwuegbuzie, an associate professor at the University of South Florida whose Library Anxiety (2004) is now the benchmark text on the phobia, began his researches as a statistical enquiry, but found the journals kept by respondents to his first survey remarkably compelling: “Even people who would have been only slightly anxious were far more likely simply to give up if they encountered a problem with research. Say if they could not get parking, they might go around once then turn around and go home, because they were looking for an excuse not to go in the first place. Another common experience was going to the library, finding that the resource they wanted to use had other people using it, then blaming the library, thinking it was a horrible place, and using that as a basis for avoiding it in future. People were amazingly honest. They’d report having gone home and had a fight with their spouse or partner because of the frustration the library had caused them, which suggested these feelings are really deeply felt.” Onwuegbuzie’s subsequent studies at five American campuses suggest that up to 45% of students experience some kind of panic, learned helplessness or mental disarray in libraries. “As instructors, we tend to assume they have library skills, and proper search skills,” he says. “That’s not a good assumption. If they haven’t been taught, where would they learn? In fact, Constance Mellon showed very early that it’s something people hide, tending to assume that everyone else is competent but them.”
Google steps in, at once therapist and tour guide. “People will generally not allow themselves to perceive a gap in their knowledge,” explains Thomas Mann from the US Library of Congress. “What they will do instead is to inflate the part they do grasp to take the place of the whole they do not see … Furthermore, they will mistakenly conclude that they have tried ‘everything’, when in fact they’ve exhausted only the few avenues they do perceive.” Convenience and comfort, of course, are what the consumer society is all about: why should knowledge be any different?

In education, Google arrives at the end of decades in which the face of pedagogy has been reshaped by waves of technology, emanating largely from the US, although even now evidence of the benefits of computers in education is sparse. In the most thoroughgoing critique, The Flickering Mind (2004), a seven-year American study, Todd Oppenheimer concludes that the “lethal combination” of education and technology has failed on every level, producing poorer students ever more expensively. Nonetheless, as Australia’s education ministers indicated in their Joint Statement on Education and Training in the Information Economy a year ago, schools here will continue the experiment until the right result is achieved: “New technologies are transforming our society: the way we work, our social and community life and the way we learn. Embracing information and communications technology in education and training improves the skills and knowledge of all Australians, enhances our international engagement, and moves Australia confidently into the twenty-first century. The everyday use of information and communications technology will transform education and training, and lay a foundation for our future economic and social prosperity.” It is perhaps only fitting that a document concerning computers should sound so like it was written by one. But there is no doubt that Google is changing the face of school education very quickly indeed – perhaps more quickly than we are ready for.
As long ago as September 2001, a Pew survey of search engines in education found that 71% of American online teens relied mostly on internet sources for their research, and that fewer than a quarter used libraries: “Students cite the ease and speed of online research as their main reasons for relying on the Web instead of the library.” Guidance available to children, however, varies considerably, waves of austerity in the education sector having swept away the group who would once have been the primary source of advice: librarians. Thanks to the Kennett revolution, for example, only 13% of Victorian primary-school libraries are run by trained librarians.
Yet for governments, Google is in many senses the perfect modern mass educational tool. As Todd Oppenheimer comments: “Education is an institution dominated by the pressures of mediocrity. Schools are places where treating average needs with average amounts of resources has long been the rule – a fact that, unfortunately, has become extremely comfortable and therefore deeply entrenched.” Google delivers every student the same not-very-good and not-very-bad resources necessary to craft a perfectly mediocre response. It lends itself not so much to learning as to the appearance of learning – which to politicians is, frankly, of paramount concern.
This distinction between learning and its appearance is important. Sources, of course, have always been browsed and skimmed. Yet never have they been perused quite so lightly as in the online environment, which sometimes seems dedicated to sparing us the burden of reading altogether. We can cut and paste chunks of text holus-bolus. We can hit a link if what we’re looking at fails to immediately engage us, convincing ourselves we’re getting somewhere in the process. We can print out for an indeterminate ‘later’ that never arrives. And while there is precious little research into levels of reading comprehension from online sources, what we understand about the workings of memory is not auspicious. In a famous experiment at the University of Indiana in 1954, psychologists Lloyd and Margaret Peterson asked subjects to remember three letters of the alphabet, then repeat them 18 seconds later. The task sounds simple but the respondents found it impossible, because between the reading and the recollection they were asked to count backwards by three at a rapid rate. One imprint on short-term memory, in other words, is erased if it is immediately followed by another – as happens when one progresses quickly from screen to screen. This effect is exacerbated by the free-association to which the web lends itself, as noted by educational psychologist Gavriel Solomon: “Students may start exploring the life cycles of elephants in Central Africa but very quickly find themselves following a lead that takes them to a biography of Napoleon or to the political situation in Turkey.”
Lynn Davey, eLearning group manager for Education Victoria, observes that students are unconcerned by this; on the contrary, they regard Google as an infallible brainy kid whose work they can copy. “Our kids don’t know the world pre-Google,” she says. “They don’t know they need to know more. They can easily think, ‘Why do we need to sit here and listen to a teacher doodle on about something? All we need to do is hit Google and it will give us the answers.’” And in this belief they have some educationalists in their corner. Humankind, runs the argument, has been externalising memory since the development of written language: the logical conclusion is a world in which we need not ‘know’ anything. “It’s not important that people know that George Washington was the first president of the United States,” insists British education researcher Julian Sefton-Green, currently an associate research professor at the University of South Australia. “It’s much more important that people know where to find out … Education used to be based on a scarcity of information. Search engines completely change that system. As a result we’re going to rethink what our core cultural values are. Teaching will have to get away from the discipline of factual learning. In future, it’s going to be much more important to be able to order, rank and interpret information than the information itself, to have the appropriate critical and analytical tools. There’ll be no point in setting an essay on what happened in the French Revolution. That’ll be on Wikipedia. The student will have to address the causes of the French Revolution.” Actually, they’re on Wikipedia too (http://en.wikipedia.org/wiki/French_Revolution#Causes).
Teaching certainly has a lot of adapting to do. Google makes life easy for everyone. Students can coast. “I watch my son do his homework every night in minutes,” laughs Lynn Davey. “I think, ‘Is that it?’” Teachers can easily think they are doing just fine. “The temptation is to let kids get away with cutting and pasting, because we’re so happy they’ve done the assignment,” says Mary Manning of the School Library Association of Victoria. “Most teachers began their careers when only print was available, and the actual task of finding the answer was quite demanding. It’s not any more. Unless the student wants to make it so – and why would they?” But we should be wary of confusing price and value: just because facts are now ‘free’ does not render them worthless. Knowing things is not merely useful but empowering, emboldening, fun. “Knowing things is a vital aspect of our intellectual ability,” says technology critic Bill Thompson. “Knowing things allows us to make links, provide context, evolve metaphors and generally make sense of the world.” Ignorance does not create space for the development of creative and critical faculties; it breeds superstition and credulity. Analysis without fact, too, is a mill that grinds no corn.
Universities are both less and more vulnerable to the forces Google unleashes. In general, Google makes information available to students on a scale undreamed of even a decade ago, and the availability of online sources is especially a boon for those without access to adequately resourced libraries. “Our library in Mildura is a big room with very few books,” says Dennis Altman, a member of La Trobe University’s council. “A bright student ten years ago, even five years ago, was extremely limited in what they could look at. For that same student to have access to the internet is a great advantage.”
Again, however, technology is a mixed blessing. Eighteen months ago in the US, in the biggest survey yet undertaken on the impact of online resources on academic performance in universities, 42% of the 2316 faculty members polled said that the internet had had a deleterious impact on the quality of student work. Which, to Thomas Mann of the US Library of Congress, makes sense: “If a system makes only some sources easily available – especially if these sources are very superficial or of poor quality – then it can do real damage to the quality of research, for it will encourage users simply to make do with whatever sources are retrievable regardless of their quality or completeness.”
Forty-four per cent of the academic respondents also agreed that plagiarism had increased. Plagiarism is not, of course, a practice to which Google gave rise. But it is one it greatly facilitates: a Google search for ‘free essays’ produces almost fifty million hits. “Students have always been able to plagiarise,” says Maureen Henninger. “But in the past it actually involved some effort. They actually had to find a document and write it or type it out themselves. Now it is so easy. It is an enormous problem. My students don’t do it, because they know I can find it. Most academics have no idea.”
Nobody has been able to quantify the increase in plagiarism, but nobody doubts it is worsening: the Herald Sun reported last month that almost one thousand students at Victorian universities had been found guilty of various forms of cheating in the two years to mid-2005, with internet-aided plagiarism the new growth area. “All the anecdotal evidence suggests that the incidence of plagiarism has risen,” says Dr Bridget Griffen-Foley, Australia’s leading media historian, at Macquarie University. “And I have no doubt it will continue to do so as more and more material goes online. All of my younger colleagues are just as — if not more — concerned about this trend than are our older colleagues. Younger academics who have grown up in an online world are only too familiar with how easy, and tempting, it is to cut and paste material from the web.”
Even the scale of the information available to students creates a host of dilemmas. Say you were seeking material about the causes of the Great War. The simplest, most informative and also most enjoyable solution would actually be to read a good book on the subject, like Massie’s Dreadnought or Tuchman’s The Guns of August. A Google solution, to search for ‘causes World War One’, would produce 75.8 million links, from websites and PDF files to free essays and quick quizzes, sorted only by the self-reinforcing system of PageRank. The shortcut, as shortcuts often do, turns out to be the long way round. Nor is that simply an opinion: it is the essence of Google’s next and greatest initiative. For if the World Wide Web were genuinely sufficient for all informational inquiries, there would be no need for the Google Print for Libraries project.

“Supposing you want to know what Shakespeare had to say about peace or war or love; you type in keywords and do a search of all the things that will be online that have to do with Shakespeare or commentaries on Shakespeare, and you can read hundreds, thousands, tens of thousands of documents or have them read for you by the search engine and pull out all the things that are relevant to what you’re interested in.” Thus said New York Public Library president Paul LeClerc a year ago when Google unveiled plans to scan and digitise millions of books from his institution, and the libraries of Harvard, Stanford, Oxford and the University of Michigan. So heady was the vision – an apparent realisation of H. G. Wells’ dream of a ‘World Brain’ with a “planetary memory of mankind” – that Yahoo and MSN quickly announced copycat ventures with other libraries. As MSN’s search content acquisition manager explained: “We need to get offline content online. Offline is where trusted content is and where people who need to answer questions go.”
These are basically online takeovers of the offline world: the only disagreement concerns whether they are agreed or hostile bids. Publishers contend they are the latter. In October 2005, a group of multinational publishers – McGraw-Hill, Pearson Education/Penguin, Simon & Schuster and John Wiley & Sons – jointly filed suit seeking a declaration by the US District Court that Google Print for Libraries violates the ‘fair use’ provisions of American copyright law. The case – supported by Australia’s Copyright Agency Limited – holds huge ramifications for copyright round the world. Publishers say that the implications for creators are graver still. “Libraries have amazing freedoms,” says Peter Field, Pearson Australia’s CEO. “They can legitimately push us away and make use of any book of which they own a copy. If they decide to digitise something, turn it into a PDF file and offer it as a download from their website, there’s nothing the author or the publisher can do about it. Generally speaking, they haven’t, because they understand that this undermines writers and the publishers working for them. Google, however, intend taking something for nothing and gaining a commercial advantage from it. That’s not fair use. There’s nothing personal about it. And by having ads and pop ups appear, they obtain a financial benefit … It’s not simply what Google is doing that is outrageous. It’s the pose of: ‘We’re the good guys. We’re just making the information available to the world. We’re not being evil. We’re not doing it for the money.’ This is a company with a $US100 billion market cap, a 54% profit margin and anticipated growth of 20% a year. If creators are not rewarded for their work, where is the next body of copyright material going to come from?”
The project’s only real certainty is that Google will make money on it. For those providing the content more or less involuntarily, the future is altogether less clear. Google argues that because the digitised texts will only be searchable, rather than completely readable, it will encourage readers to seek books out and buy them. This is a big assumption: books accessed at Amazon by the ‘Look Inside The Book’ function do not sell noticeably better than those that are not. The contrary is quite conceivable: that, as we assimilate habits of online perusal, searching a book will become an increasingly adequate substitute for reading it. It is possible to foresee a future with two classes of information: that accessible online, visited extensively, and that not, diminishing in significance to vanishing point, with the result a net narrowing of sources rather than a growth. In the words of Google director of content partnerships, Jim Gerber: “In the future, the only thing that will get read is something that will be online. If it isn’t online, it doesn’t exist.”
That is a big problem. Google not only changes the relationship with what we know but with what we don’t. The epistemological distinction was most famously made by, of all people, Donald Rumsfeld, in a press conference three years ago: “As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.” Onlookers conditioned to regard anything from Rumsfeld’s mouth as lie or cant overlooked some uncommon sense. Unknown unknowns do exist, comments Bill Thompson, and it is important we grasp them: “The problem with any search engine is that it won’t tell you about the stuff you don’t know you don’t know.” As grows our sense of comfort with the quantum of information Google makes available, so will our indifference to and ignorance of what it does not.
If Google were serious about improving search results, it is arguable that it should be improving the PageRank system, not giving it even more work to do. Roving over more information, it will be just as crude, and perhaps even less efficacious. As Thomas Mann observes: “If a Google web search for ‘Afghanistan’ and ‘history’ produces eleven million hits right now, a similar search in Google Print for Libraries, with 14.5 billion pages of keywords, is very likely to produce similar results. It will become utterly impossible to ‘see the forest for the trees’.” Pace Paul LeClerc, googling ‘Shakespeare peace love’ already generates 2.8 million hits. Exactly how many does anyone need?
Even if Google Print for Libraries were assuredly of universal benefit, furthermore, there would remain doubt about the role of our chief cultural broker being arrogated by a private corporation – a corporation, moreover, whose securities have been publicly tradeable for less time than Jessica Simpson was married. In a recent essay in the Chronicle of Higher Education, New York University’s Siva Vaidhyanathan concluded that libraries were relinquishing their core duties to private corporations for expediency’s sake: “Whichever side wins in court, we as a culture have lost sight of the ways that human beings, archives, indexes, and institutions interact to generate, preserve, revise, and distribute knowledge. We have become obsessed with seeing everything in the universe as ‘information’ to be linked and ranked. We have focused on quantity and convenience at the expense of the richness and serendipity of the full library experience. We are making a tremendous mistake.”
We will probably make it anyway. The preconditions are ideal. Thirteen years have elapsed since Neil Postman decided that the US was, and the West would shortly become, a ‘technopoly’: a state where “the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology”. It is hard to conceive of a more succinct description of what might be called the Google Society, convinced it is on the verge of a bright, shiny, networked utopia linked by huge virtual libraries to all civilised wisdom even as it reduces its culture to machine-generated lists of what everyone else is looking at, so stupid that it does not realise how stupid it is.
