Tools for Thought

Thinking beyond productivity

Review: The Big Switch

by Andre · 2 Comments

For the last few months, mainly to challenge my thinking, I’ve been reading material on “the cloud” to understand the appeal of web-based applications and supply-side computing. Much of what I’ve seen seemed like a solution in search of a problem, considering that I’ve been running apps off my hard drive since the Mac SE (yes, I’m that old). Most discussions of cloud-based computing never seem to make a clear case for its advantages, beyond being “cooler” in some way that’s as amorphous as the metaphor itself.

Nicholas Carr’s The Big Switch distinguishes itself by shedding light on the economies of scale that undergird the transition from individual machines and IT facilities to remote sites and distributed systems. Carr draws a parallel between the new paradigm shift and a historical one: the switch from private electrical generation to a utility — hence the subtitle, Rewiring the World, from Edison to Google.

Prologue. We enter the new world through a tour of VeriCenter’s facilities. Carr was invited by the company’s founder to have a look at the epic IT warehouse in response to a previous publication, Does IT Matter? In the future, companies would forgo investing in their own IT infrastructures and simply plug into VeriCenter’s mammoth computing resources over the internet. VeriCenter’s facility is the size of a city block, a farm of thousands of computers clustered together.

Chapter 1: Burden’s Wheel. In 1851, Henry Burden built the largest waterwheel of its time to power his farm-tool manufacturing operation. A half-century later, his private power source was rendered obsolete by large-scale electrical utilities with far more generating capacity and far greater transmitting distance. The scale economies of these electric utilities could no longer be matched by private factories, and the new plants could generate enough capacity to power households and businesses alike.

The World Wide Web started as a collection of static hypertext pages and links to multimedia files. Surfing the internet was essentially a reading experience. When we wanted to do actual work, we would open up applications that resided on our individual computers, like Microsoft Word or Photoshop. As utility computing gains traction, it can leverage relatively new communication protocols to interact with multiple remote databases in real time, enabling web-based applications like Google Documents or the photo-editing tool Picnik. Economies of scale will eventually make online applications more robust than their counterparts on our hard drives.

Chapter 2: The Inventor and His Clerk. Thomas Edison’s plans to monopolize electricity were doomed by limitations in both his business model and the technology behind it. His idea was to license his patented system and sell its components to businesses in the market for local power plants — a project that was successful, but short-lived. It took a former employee of Edison’s, Samuel Insull, to conceive of and build an infrastructure of centralized systems based on alternating current, rather than Edison’s direct current, to extend electricity’s reach into the farthest corners of homes and businesses. The utility’s success fed on itself: increased revenues enabled greater generating capacity at lower prices, which in turn increased demand.

Chapter 3: Digital Millwork. Like electricity, data processing had its roots at the local level, starting with Herman Hollerith’s punch-card tabulator. His Tabulating Machine Company evolved into International Business Machines under the management of Thomas J. Watson. Tabulators first attracted the attention of insurance agencies, banks, and other institutions after their deployment in the 1890 census. As tabulators, or “computers,” progressed into UNIVACs and later mainframes, companies gradually increased their data processing budgets. IT expenditures went from less than 10 percent of the average American company’s equipment budget in the late 1960s to 45 percent by 2000.

As investment in IT facilities grew, so did their excess capacity. Studies have shown that since the introduction of PCs as “clients” for companies’ data centers, capacity utilization has averaged between 25 and 50 percent. Across industries, most firms use the same hardware and software as their rivals. The redundancy, overbuilding, and overstaffing of IT assets are precisely the waste that supply-side providers like VeriCenter, Google, and Amazon Web Services hope to consolidate. As the author asserts, “The PC age is giving way to a new era: the utility age.” The advent of the fiber-optic internet has removed the bandwidth bottleneck that previously prevented distributed computing from becoming economical.

Chapter 4: Goodbye, Mr. Gates. None of this transition is lost on the player who stands to lose the most from it: Microsoft. In 2005, Bill Gates wrote an internal memo warning of the threat software-as-a-service (SaaS) poses to his company’s revenue stream from desktop and local network applications. Google tiptoed around Microsoft by surreptitiously purchasing land in northern Oregon under the name “Design LLC” and building what’s believed to be the world’s largest data processing plant. By contrast, Amazon Web Services simply exploits the excess capacity Amazon already has, selling cheap data storage and “virtualization” services.

But the big proof of concept for SaaS came in 1999 from Salesforce.com, which rented its cloud-based customer relationship management (CRM) software at a mere $50 per user per month, free trial included, instead of selling it outright for tens of thousands of dollars. Reliability and response time were demonstrably similar to local installations, and the software allowed some data to be saved locally so users could work offline. Moreover, Salesforce’s software was extensible: customers could write customized code to run on Salesforce’s systems. Sales skyrocketed from $50 million in 2002 to $500 million in 2007.

Chapter 5: The White City. The 1893 Columbian Exposition was the first World’s Fair to demonstrate what an electrified city, a “White City,” would look like. While most of the utopian rhetoric that followed was just that, the social transformations that actually did come to pass were real enough. Mass production in Henry Ford’s electrified factories, and subsequently in those of his rivals, created a vast American middle class of unskilled workers who were paid higher wages to consent to tediously repetitive work. As this labor force grew, so did the need for personnel to supervise and coordinate it, leading to an increase in skilled employment, and with it demand for higher education. Prior to the 20th century, the idea that those who failed to attend college were “deprived” was nonexistent; high school enrollment at the turn of the century was under 30 percent.

On the domestic front, electrification transformed housework without actually reducing it. In every decade from the 1910s through the 1960s, “women’s work” held steady at between 51 and 56 hours per week, despite the introduction of electric irons, vacuum cleaners, and other paraphernalia. These “labor-saving” devices mainly raised the standard of cleanliness expected — an illuminating case study for analyzing the benefit of any technology.

Chapter 6: World Wide Computer. Computing differs from electricity in a key respect. At the end-user level, the resources of a power tool cannot be shared with remote users; with information tools, resources are inherently sharable. Songs and movies can be exchanged over peer-to-peer systems like BitTorrent. Once the entire collection of systems and services becomes programmable, they form one giant World Wide Computer. A “mash-up,” for instance, can let a sales representative look up a customer stored in Salesforce’s system, plot the customer’s location on a Google Map, and call the customer via Skype, as sketched below. A modern blogger can build a successful site exclusively with web-based tools for writing, image editing, video hosting, syndication and ad serving — essentially for free.
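To make the programmability point concrete, here is a minimal sketch of such a mash-up in Python. It’s my own illustration, not anything from the book: CrmClient, GeocodingClient, and VoipClient are hypothetical stand-ins, not the real Salesforce, Google Maps, or Skype APIs. The point is only that three hosted services can be composed as easily as three function calls.

    # A toy "mash-up": pull a customer record from one hosted service,
    # geocode the address with a second, and place a call with a third.
    # All three client classes are hypothetical stand-ins for real APIs.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        name: str
        address: str
        phone: str

    class CrmClient:
        """Stand-in for a hosted CRM API (Salesforce-like)."""
        def lookup(self, name):
            return Customer(name, "1600 Amphitheatre Pkwy, Mountain View, CA",
                            "+1-650-555-0100")

    class GeocodingClient:
        """Stand-in for a mapping/geocoding service (Google Maps-like)."""
        def geocode(self, address):
            return 37.4220, -122.0841  # (latitude, longitude)

    class VoipClient:
        """Stand-in for an internet telephony service (Skype-like)."""
        def call(self, phone):
            print("Dialing %s ..." % phone)

    def prepare_sales_call(customer_name):
        customer = CrmClient().lookup(customer_name)            # service 1: CRM
        lat, lon = GeocodingClient().geocode(customer.address)  # service 2: maps
        print("%s is at (%s, %s)" % (customer.name, lat, lon))
        VoipClient().call(customer.phone)                       # service 3: telephony

    prepare_sales_call("Acme Corp")

Mash-ups of the era did exactly this sort of gluing, typically in JavaScript against public web APIs.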

Chapter 7: From the Many to the Few. Between early 2005 and late 2006, Chad Hurley and Steve Chen went from tossing around an idea for an easy-to-use video sharing service to selling the result, YouTube, to Google for $1.65 billion. At the time of the acquisition, YouTube had 60 employees. As late as 2006, Craigslist had 22 employees. When eBay purchased Skype for $2.1 billion, the internet telephone company had just 200 employees and twice as many subscribers as the 90,000-employee British Telecom.

On the internet, the means of production reduce the need for employment more radically than any previous technological advance did. Electrified manufacturing, for instance, increased the need for managers, accountants and engineers. Much of the wealth now being created online is “user-generated content,” from videos uploaded to YouTube and reference material submitted to Wikipedia to codebases contributed by open source programmers, all of it shrewdly leveraged by entrepreneurs and investors. These online services provide the means of production for communal work, while keeping the fruits of that labor in the private sector.

Chapter 8: The Great Unbundling. Print journalism is fast becoming a casualty of user-generated content, testing the social fabric of the free market. While blogs and online news services greatly expand the variety of perspectives available, their integrity is even more vulnerable to the whims of advertisers than that of their print counterparts. Because newspapers bundled serious news with more frivolous fare, all of it was equally underwritten by the same ad dollars.

Not so as papers move online. Online ads typically pay out on a click-through or page view basis, meaning that the cost of maintaining a bureau in Sierra Leone is unsustainable if the resulting articles get low page views, especially compared to the latest celebrity faux pas. Social imperatives collide with market norms. Readers who get their national and international news from aggregators like Digg and Reddit may soon find themselves with nothing of substance to aggregate.

The unbundling of news and discourse has other implications. When users can design and filter their own “programming” to suit their preferences, they experience less to challenge their biases. A liberal reader of The Wall Street Journal will inevitably rub against conservative opinion. A conservative reader of The New York Times will encounter liberal perspectives. Customized content threatens to undermine critical thinking by reinforcing presuppositions — “ideological amplification,” as researchers call it.

Chapter 9: Fighting the Net. Like many tools, the internet is amoral by nature and can easily be used as a weapon. British forces in Basra found themselves under attack by insurgents armed with intelligence on the troops’ whereabouts gleaned from Google Earth. Sometimes the internet itself is the target, with so-called “botnets” distributing torrents of spam and mounting “denial of service” attacks.

As critical institutions rely more heavily on the internet, the implications of online attacks — or of disruptions stemming from natural disasters — cannot be overstated. Airports, financial markets, and other commercial services in Hong Kong ground to a halt in late 2006 due to an earthquake off the coast of Taiwan. The only physical solution to single points of failure is further distribution of network hubs, which introduces the political complication of foreign jurisdictions. Since most of the internet’s “root servers” are run by US organizations, the international community has mounted increasing pressure to democratize (or at least internationalize) the network.

Chapter 10: A Spider’s Web. In theory, internet users are as anonymous as they want to be. In reality, it’s often possible to infer a user’s identity by correlating usernames, search terms, IP address prefixes, and public map information. By scripting software to “spider” and download thousands of Amazon wish lists, writer Tom Owad was able to correlate list authors with contact information from Yahoo People Search.

Even when name-level anonymity is preserved, the demographic profiling made possible by large-scale, automated data mining exploits our increasing openness with one another online. “Computer systems are not at their core technologies of emancipation,” writes Carr. “They are technologies of control. They were designed as tools for monitoring and influencing human behavior, for controlling what people do and how they do it.” The author cites companies that are building mathematical models of their workforces from collected employee data. Google, for instance, asked its employees to fill out a 300-question survey covering everything from the programming languages they know to the pets they keep.

Chapter 11: iGod. When Google’s co-founder Sergey Brin lets his hair down in an interview, he often talks about the search engine’s teleological development into an artificial intelligence Leviathan. Brin has mused that in the future, a person would be able to plug a “little version of Google” into his brain to “improve” it.

Using Google as a proxy for similar sentiments expressed by the technocracy, Carr’s final chapter foreshadows his recent, controversial essay for The Atlantic, Is Google Making Us Stupid? He asserts that on the internet, “we seem impelled to glide across the slick surface of data as we make our rushed passage from link to link.”

Epilogue. In one generation, one of civilization’s most fundamental inventions — the wick — ceased to be the primary source of artificial light, replaced by the metal filament. Citing the wartime diary of a German forced to live by candlelight during nightly air raids, Carr notes that the brightness of electric light imparts a paleness to everything it shines on. Hence our need to keep candles for more emotional interludes.

We tend to focus on what we gain by technological change, not what we lose. With each generational change, the memory of what was lost fades, preserving the illusion that progress is an unalloyed benefit.

Conclusion

Carr has gone far beyond outlining the technological imperative that drives the migration to supply-side computing. He examines it within the historical, social, economic, moral, and political contexts that lesser authors blithely disregard in favor of geek utopianism. This is one of the best books I’ve read this year, and I would recommend it to anyone curious about the shift to the cloud, or the consolidation of economic forces that shift portends. If that seems a bit much, at least sample some of Carr’s writing at his persistently thought-provoking blog, Rough Type.


Tags: Books

Comments

  • Bill Gardner // Aug 26, 2008 at 11:58 am

    The Mac SE was far and away my favorite machine.

  • Andre // Aug 26, 2008 at 2:59 pm

    Absolutely agree. Expensive as hell, but worth every penny.