1. On Emerging Themes of Digital Production and Consumption

    Over the past months, I’ve been reading several books on consumption, culture, design, and the environment. Before I close out the bulk of my secondary research, I want to highlight a few emerging themes regarding our digital production and consumption habits. (I still have to read The Information by James Gleick and Glut by Alex Wright.)

    I. Either Never Satisfied or Always Curious

    "Our inventions are but improved means to an unimproved end", as Neil Postman paraphrases Henry David Thoreau in Technopoly. A lofty statement, but one that addresses a fundamental question underlying the torrent of technological advancement in the last 20 years - where is all this headed? While some believe the innovations in technology are leading to a singularity as futurist Raymond Kurzweil proposes, other thought leaders question the insatiable demand for new information and our dissatisfaction with the here and now.

    John Thackara, author of In The Bubble, illustrates our growing dissatisfaction with the analogy of a boy, sitting under a tree, looking out over a landscape. In one case, the boy exists before the invention of the internet, cellphones, and pagers; in the other, the boy exists now. Which boy is more thoughtful in the moment, satisfied with the solitude of thought? Those not part of the Millennial generation might say the former. Some, such as writer Clive Thompson, argue otherwise, saying today’s boy is actively seeking inspiration to share rather than waiting for some serendipitous apple to drop.

    With his analogy, Thackara references the Italian concept of dolce far niente – literally “sweet doing nothing” – which describes one’s ability to find pleasure in idleness. Elizabeth Gilbert also writes about the concept in her book, Eat, Pray, Love. Both authors question whether we can enjoy a moment to ourselves without being able to communicate that feeling to others. In On Paradise Drive, David Brooks criticizes Americans who have never been satisfied with what they have and who are constantly pursuing the next best thing. Applied to our various communication devices, are we losing our ability to be satisfied with our current place in life by chasing digital bits of potential affirmation?

    II. Seamlessness and Time

    A longtime priority of interaction designers has been to erase the boundaries between experiences with technology, i.e., to create a seamless experience. This can range from how easily a user can charge or sync an iPod with his/her computer to the consistency of content design across devices (phone, tablet, computer, television). It reflects a fundamental promise of technology: save the user from the drudgery of tasks and make the ones still required of them easier.

    In Everyware, Adam Greenfield, echoing computer scientist Mark Weiser, points out that seamlessness can make it “hard to tell when one thing ends and something else begins”. Think of it this way: where and when can you check your email? text or call a friend? Practically anywhere. With this ubiquitous power, our divisions of time – work time, family time, play time – are removed. Thackara also warns that even the design of our spaces can leave the body “physically desensitized from its sense of time”. Moreover, Postman laments the promise that technology will give us more time by accomplishing tasks faster: “Time, in fact, became an adversary over which technology could triumph.”

    Our attempts to create efficiencies with technology and task completion beget more space for other activities; this space, however, is often filled with more of the same activity – a consequence described as the rebound effect. The concept explains that as technology allows easier access to and faster use of a resource (in this case, time), more of that resource gets used. The effect leaves us wondering where all our time went.

    III. Information as Metaphor: Water, Garbage, Food

    Open access to a seemingly infinite amount of information is often framed as metaphor. In The Middle Mind, Curtis White describes the abundance of information as a deluge, leaving us to drown in a sea of entertainment and communication when all we wanted was a drink. Postman moves up the pessimism scale, declaring, “Information has become a form of garbage”. Subjective as that is, his point is reinforced by the advent of content farms – operations that create content on a mass scale as quickly as possible to seed hundreds of websites for a day’s use, only for it to be forgotten and “thrown away” into a far-off database.

    The most consistent metaphor used is information as food. Douglas Rushkoff quotes Shakespeare in his Frontline report, Digital Nation, saying “we are consumed by that we are nourished”. The more quickly we snack on tiny morsels of information, the more our ideas are shaped into bursts of disconnected thought. In his report, Rushkoff points out that as undergraduate college students produce and consume information through endless multi-tasking, their ability to defend a thoughtful, consistent argument in an essay diminishes. Gone are the days of musing by Walden Pond.

    Exploring similar themes in his new book, The Information Diet, Clay Johnson states, “information consumption is as active an experience as eating”, equating our cravings for the salt, fat, and sugar in cheap foods with our desire for affirmation. By quickly viewing and sharing information, we fall prey to that desire for affirmation and recognition (as many media companies have learned), resulting in “information obesity”. Similarly, this rapid, cyclical behavior leads Microsoft researcher danah boyd to describe social media as the “psychological equivalent of obesity”.

    IV. The Cloud as a Virtual Attic and Digital Hoarding

    While Postman describes information as garbage, more and more it seems to be something we can stash away in our cloud. Given the amount of storage available from various cloud-based services (generally advertised as “unlimited”), producing and saving information is effortless. We are no longer limited by the available storage on our computers and devices; we can save our digital content at a nearly infinite scale. For example, as of today, I’m only using 88 MB of the 7,671 MB available to me in my Gmail account. Why delete an email when I can just keep it on hand?

    To me, this is a form of hoarding – saving items of little or no utility for the chance of *possible* use in the future. Seemingly irrational, our digital lifestyle has become a paradox of loss aversion, a concept from the decision theory of Amos Tversky and Daniel Kahneman. Loss aversion states that we often make decisions based on our desire to avoid losses rather than to acquire gains; the fear of losing our digital information forever can be alleviated by storing that information in the cloud. In his classic routine, George Carlin jokes that our homes are just places to store all our stuff. I would argue that our cloud-based services are not only a means to access our content anywhere, but are actually digital attics where we can just store all our stuff.

    V. Conspicuous Consumption vs. Conspicuous Production

    Way back in 2001, David Brooks wrote Bobos in Paradise, which described a new upper class of now grey-haired bohemians who express their values with a bourgeois budget. It’s not enough to eat “morally neutral sausages”; Bobos must eat sausage made from local, free-range pork using a recipe passed down through the generations, costing far more than any offering from Jimmy Dean. “Shopping, like everything else, has become a means of self-exploration and self-expression,” he writes. Through conspicuous consumption, we display our values and beliefs.

    It is now 2012. Communicating success through consumption has shifted to boasting through the production of content. We are each our own PR firm, and with the tools of social media we can broadcast our lives and interests with a simple click or tap. This sentiment is echoed by Kickstarter co-founder Yancey Strickler and entrepreneur Zach Klein in a recent post, which points out that conspicuous production is now our means of transmitting values. With every upload and post, we are not only showing the world what we have or what we find interesting, but also searching for affirmation. I doubt anyone would continue to post content without feedback from friends, family, or strangers.

    In another of his books, The Social Animal, Brooks mentions the ancient Greek concept of thumos: the human desire for recognition of one’s own existence. With today’s social media tools, the ability to fulfill our own personal thumos is there for the taking (or clicking); but the question remains – if everyone is seeking recognition, can we all respond to one another over the cacophony of requests?

    VI. Starting to Lean Back

    Apple co-founder Steve Jobs, addressing a conference, said, “We think basically you watch television to turn your brain off, and you work on your computer when you want to turn your brain on.” What Jobs was referring to is the notion of “hot” and “cool” media, a concept first introduced by the late theorist Marshall McLuhan (also recently covered by Paul Ford in our Content Strategy class). “Hot” media are high-definition, engaging a single sense of the viewer and requiring very little participation. “Cool” media, on the other hand, are low-definition, demanding more participation and attention from the viewer.

    A closely related classification divides media into “lean-forward” and “lean-back” mediums. Television is a “lean-back” medium: viewers want to be entertained and are in a relaxed, passive state. In “lean-forward” mediums such as the Internet, viewers are more engaged users of the medium and are in a more active state. But as Eli Pariser points out in The Filter Bubble, the Internet is becoming a “lean-back” medium.

    Increasingly, we are watching more video content online. In fact, nearly a third of all Internet traffic comes from watching movies and shows on Netflix. Both YouTube and Vimeo have recognized this trend and designed their Leanback and Couch Mode features, respectively, so users can watch content on a television or by simply leaning back in a chair. Beyond online video, our Internet tools and apps sort through and parse vast amounts of information for us, easing the burden of search. This does not sound bad at all, but Eli Pariser warns, “as personalized filtering gets better and better, the amount of energy we’ll have to devote to choosing what we’d like to see will continue to decrease.”

    References:

    Brooks, David. Bobos In Paradise. New York: Simon & Schuster, 2001.

    Brooks, David. On Paradise Drive. New York: Simon & Schuster, 2004.

    Brooks, David. The Social Animal. New York: Simon & Schuster, 2011.

    Clive Thompson, “The Instagram Effect,” Wired, January 2012. link

    Douglas Rushkoff. “Digital Nation,” Frontline. Produced by Rachel Dretzin. Boston, MA: WGBH Studios, 2010. link

    Greenfield, Adam. Everyware. Berkeley: New Riders, 2006.

    Johnson, Clay. The Information Diet. Sebastopol: O’Reilly, 2012.

    McLuhan, Marshall. Understanding Media. Cambridge: MIT Press, 1994.

    Nancy Miller, “Manifesto for a New Age,” Wired, March 2007. link

    Pariser, Eli. The Filter Bubble. New York: The Penguin Press, 2011.

    Peter Svensson, “Netflix’s Internet traffic overtakes Web surfing,” MSNBC, May 17, 2011, accessed January 18, 2012. link

    Postman, Neil. Technopoly. New York: Vintage Books, 1992.

    Thackara, John. In the Bubble. Cambridge: MIT Press, 2006.

    White, Curtis. The Middle Mind. New York: HarperOne, 2003.

  2. The physicality of a bit.

    The BBC recently reported that IBM researchers have created a 12-atom memory bit, and have even created a byte with eight 12-atom bits. This means that, at least in a highly regulated lab environment, a 100 KB file (roughly 800,000 bits) would take up fewer than ten million atoms of space. This insanely tiny magnetic bit is basically a switch, storing either a 0 or a 1, making it a building block for digital information. As the article reports, “it takes about a million atoms to store a bit on a modern hard-disk”, which is still extremely small when you think of earlier manifestations of a bit – the vacuum tube. With the physical size of our digital information shrinking ever smaller, I wonder how much relevant information we can cram into our digital attics.
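
    As a rough check on these figures – assuming 1 KB = 1,000 bytes and taking the 12-atom bit and the roughly million-atom hard-disk bit at face value – the arithmetic works out like this (a quick illustrative sketch, not anything from the BBC article itself):

        ATOMS_PER_BIT = 12          # IBM's experimental memory bit
        BITS_PER_BYTE = 8
        FILE_SIZE_BYTES = 100_000   # a 100 KB file, assuming 1 KB = 1,000 bytes

        atoms = FILE_SIZE_BYTES * BITS_PER_BYTE * ATOMS_PER_BIT
        print(f"{atoms:,} atoms")   # -> 9,600,000 atoms

        # For comparison, at roughly a million atoms per bit on a modern hard disk:
        hard_disk_atoms = FILE_SIZE_BYTES * BITS_PER_BYTE * 1_000_000
        print(f"{hard_disk_atoms:,} atoms")  # -> 800,000,000,000 atoms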

  3. Above is a cybernetic model of the TCP/IP protocol in the context of sending or receiving a 50 KB photo. The TCP/IP protocol functions as a comparator – a component of a closed-loop system that compares information coming from a sensor against the system goal. In the case of TCP/IP, the protocol checks whether a data transmission (divided into packets) is complete and assembled in the right order; anything less than complete, and the protocol can request that parts of that data be transmitted again.
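
    To make the comparator idea concrete, here is a toy sketch of that feedback loop in Python. The function and class names are made up for illustration – this captures the closed-loop idea, not how a real TCP stack is implemented:

        def comparator(received, total_packets):
            # Compare what has arrived against the goal state:
            # return the sequence numbers that are still missing.
            return [seq for seq in range(total_packets) if seq not in received]

        def receive_photo(sender, total_packets):
            received = {}                               # sequence number -> packet data
            missing = comparator(received, total_packets)
            while missing:                              # loop until the goal is met
                for seq in missing:
                    received[seq] = sender.request_packet(seq)   # feedback to the sender
                missing = comparator(received, total_packets)
            # goal reached: reassemble the packets in their original order
            return b"".join(received[seq] for seq in range(total_packets))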

    This exercise seeks to determine the what of my thesis, the content; it does not necessarily refer to the overall topic, but to the actual category and detail of content, so as to help define the why, how, who for, who by, where, and when. The exercise is not a linear process in which the what must be defined first; rather, the aim is to grasp exactly what is being studied, however granular.

    In pursuing a thesis about the environmental effects of cloud-based computing, I need to better understand what I am measuring as well as the infrastructure involved (so I can determine where and when the best point for intervention is). The what in my case is data – little bits of 0’s and 1’s that live on your hard drive and are subsequently stored and transmitted by remote servers. The more data, the more energy consumed by the servers.

    Data is measured in bits and bytes (a byte is 8 bits); you’ve most likely seen the data on your computer measured in megabytes (MB) or gigabytes (GB). When you send any type of data over the Internet, such as an email, photo, or gchat message, your data is divided up into packets. On average, these packets are about 576 bytes, or 4,608 bits, and consist of a header and a trailer with the data in between. You may or may not know that your computer has an IP (Internet Protocol) address – a unique numerical identifier for every device on a network. Even websites have IP addresses. The header of each data packet carries information such as the origin (sending) IP address, the destination (receiving) IP address, the total size of the packet, and where the packet falls in the sequence so the pieces can be reassembled back into the original data; the trailer typically carries error-checking information.
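
    Sketched as a data structure, the bookkeeping described above might look something like the following. This is a simplified, hypothetical layout for illustration, not the actual IPv4 or TCP header format:

        from dataclasses import dataclass

        @dataclass
        class Packet:
            source_ip: str        # origin (sending) IP address
            destination_ip: str   # destination (receiving) IP address
            size: int             # total size of this packet in bytes
            sequence_number: int  # where this packet falls in the reassembly order
            total_packets: int    # how many packets make up the original data
            checksum: int         # error-checking value carried in the trailer
            payload: bytes        # the slice of the original data carried here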

    If I were to send a friend a 50 KB photo, the photo would be broken up into approximately 87 IPv4 (Internet Protocol version 4) data packets and then sent out across the Internet. TCP checks whether any packets are missing, requests retransmission from the sending computer, and acknowledges when the transmission is complete. The integrity of each packet’s data is verified through cyclic redundancy checking, an error-checking operation used by networked devices when sending and receiving transmissions.
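
    The 87-packet figure can be reproduced with some quick arithmetic – treating 50 KB as 50,000 bytes and, as a simplification, ignoring the few dozen bytes of header and trailer inside each 576-byte packet:

        import math

        PHOTO_SIZE_BYTES = 50_000    # the 50 KB photo, taking 1 KB = 1,000 bytes
        PACKET_SIZE_BYTES = 576      # average packet size quoted above

        packets_needed = math.ceil(PHOTO_SIZE_BYTES / PACKET_SIZE_BYTES)
        print(packets_needed)        # -> 87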

    Direct transmission of data, which can be inefficient and take time (think of a landline phone call), is an obsolete method of transmission for the Internet. Because of the non-linear nature of the IP protocol, a Google search request, for example, is not handled by one server but by several, giving faster, more relevant results. There is actually a carbon footprint estimated by Google for the average search request: about 0.2 grams of CO2. Adding the power a laptop consumes, Mike Berners-Lee estimates a Google search creates about 0.7 grams of CO2. Multiply that by the 200 to 500 million search requests per day, and Google searching adds up to somewhere between roughly 50,000 and 130,000 tonnes of CO2 per year.
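
    That estimate is just the per-search figure multiplied out, and it is only as good as the assumed query volume. A quick sketch of the calculation, using the figures quoted above:

        GRAMS_PER_SEARCH = 0.7              # Berners-Lee's estimate, laptop included
        SEARCHES_PER_DAY = (200e6, 500e6)   # the range quoted above
        DAYS_PER_YEAR = 365

        for searches in SEARCHES_PER_DAY:
            grams_per_year = GRAMS_PER_SEARCH * searches * DAYS_PER_YEAR
            tonnes_per_year = grams_per_year / 1e6   # 1 tonne = 1,000,000 grams
            print(f"{searches:,.0f} searches/day -> {tonnes_per_year:,.0f} tonnes of CO2/year")

        # -> roughly 51,000 tonnes per year at the low end, 128,000 at the high end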

    References:

    Berners-Lee, Mike. How Bad Are Bananas? The Carbon Footprint of Everything. Vancouver: Greystone, 2011.

    "Bit" definition, Wikipedia, accessed December 18, 2011. link

    Ethan Zuckerman and Andrew McLaughlin, “Introduction to Internet Architecture and Institutions,” August, 2003, accessed December 18, 2011. link

    Greg Ferro, “Average IP Packet Size,” Ethereal Mind, March 18, 2010, accessed December 18, 2011. link

    Hugh Dubberly and Paul Pangaro, “Introduction to Cybernetics and the Design of Systems,” January 2010.

    "Network packet" definition, Wikipedia, accessed December 18, 2011. link

    Swanson, Joe. Interview by author. Written notes. Cambridge, MA, November 20, 2011.

    Urs Hölzle, “Powering a Google Search,” Google Blog, January 1, 2009, accessed December 3, 2011. link

  4. A tour of Google’s container-based data center in Mountain View, CA. The facility has 45 standard shipping containers with a total of 45,000 slots for servers, and supports 10 MW of IT equipment load. The video highlights the basic layout, individual container structure, and energy efficiency measures. When you use Google, you may very well be using this facility.

  5. Understanding Server Farms, Data Centers, and Cloud Computing

    A few weeks ago, I headed up to my old stomping grounds in Cambridge to celebrate my buddy Ryan’s birthday and have an early Thanksgiving dinner. Ryan is a Sloanie and now works for the big boys at Intel. I had the chance to talk shop with his co-worker, Greg Lord, and his friend, Joe Swanson, a network engineer for the Federal Reserve. Wanting a fresh perspective on my thesis topic, I asked them to help remedy my ignorance around server farms, data centers, and cloud computing. Greg and Joe were graciously up to the task.

    This is what I gleaned.

    Over the last 15 years or so, the terms “server farms” and “data centers” have become interchangeable. For the most part, the guts of each are similar: a collection of computer servers, usually clustered in stacks, forming row upon row of servers depending on the size of the facility. Minus a monitor and audio jack, an individual server is made of the same components as your computer – a central processing unit (CPU), hard drive, memory, and a fan for cooling – and, famously in Google’s case, a battery. (I’m told the speed of a CPU is not as critical for servers.)

    I say “famously” because until a few years ago, Google was extremely hush-hush about their server framework. You can’t blame them; servers are a multi-billion dollar industry where tiny advances in engineering create substantial competitive advantage. Unlike other companies, Google designs and builds their own servers. Kinda badass if you’re a nerd. Google remains secretive; however, they have offered up some broader operational and structural schematics to highlight a few innovations. For example, they moved the uninterruptible power supply (UPS) battery from a separate unit onto the server itself. This creates efficiencies in AC/DC conversions between the power grid and a server. But I digress.

    While similar, server farms are intended for serving up data, not necessarily storing it. Data centers, on the other hand, do both. They have rows and rows of server stacks as well as extensive cooling systems, a control center, telecommunications, security, and tons of redundancy. Redundancies are backups of server components, including power supply, network connections, and data storage. If one source fails, no problem. It’s backed up. “The more redundancies, the better,” says Joe.

    Typically, servers follow a “one-to-many” model, where components have at least one backup. Extending this concept beyond power supply and data connections, innovations in optimizing and creating redundancy for data storage – i.e., virtualization – have allowed cloud computing to happen. Depending on who you talk to, virtualization is an over-arching term for what allows us to put our data virtually all over the globe and access that data faster by serving it up locally.

    Imagine for a moment that you physically divided your computer’s internal components and placed them at multiple locations around your neighborhood. You still have your keyboard, monitor, and audio jack, but the guts are all over the place. None of this matters to your computer, though. The operating system (OS) keeps purring along as if nothing happened, and you can merrily go about your day using your computer, accessing your data as if it were all located in one place. This is roughly how thin clients and cloud-centric netbooks function.

    In the world of servers, technologies such as storage area networks (SANs) and redundant arrays of independent disks (RAID) abstract where information is held and allow data to be replicated. By spreading the data and traffic load across multiple servers in different locations, data centers optimize their physical real estate. Back in the day, companies overbuilt their servers to make room for data expansion and to protect themselves against high spikes in traffic. I like to think of this method as a giant mall parking lot: every mall has built a parking lot that accounts for the maximum number of visitors on the biggest shopping day of the year. For the other 364 days, there are scores of spaces going unused.
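
    To illustrate the replication idea in the simplest possible terms, here is a toy sketch: one blob of data is written to several “locations,” and a read can be served by whichever copy is found first. The class and method names are invented for illustration – real SAN and RAID systems work at the block level and are far more sophisticated:

        import random

        class ReplicatedStore:
            def __init__(self, locations, copies=2):
                self.locations = locations
                self.copies = copies
                self.replicas = {loc: {} for loc in locations}   # location -> {key: data}

            def write(self, key, data):
                # spread copies of the same data across several locations
                for loc in random.sample(self.locations, self.copies):
                    self.replicas[loc][key] = data

            def read(self, key):
                # any location holding a copy can serve the request
                for loc in self.locations:
                    if key in self.replicas[loc]:
                        return self.replicas[loc][key]
                raise KeyError(key)

        store = ReplicatedStore(["oregon", "iowa", "virginia", "belgium"])
        store.write("vacation.jpg", b"...50 KB of photo data...")
        print(len(store.read("vacation.jpg")))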

    But why pay for inactive server space? Amazon asked this very question and responded by renting out their server space, creating Amazon Web Services. It provides the backbone for Foursquare, Netflix, and Yelp, among others, and even hosts projects for Harvard Medical School and NASA to run complex analysis models. Currently, Amazon Web Services owns one-fifth of the cloud computing market, making Amazon a major player in providing cloud-based content.

    Epilogue

    This is my basic understanding of how all this works without delving into the infinite details of information technology. I’d like to thank Greg and Joe for talking about server farms, data centers, and cloud computing. I should let it be known that we did not talk about nerdy topics for the entire time, only most of it.

    References:

    IDC. “Worldwide Server Market Revenues Increase 17.9% in Second Quarter as Market Demand Remains Strong,” International Data Corporation press release, August 23, 2011, accessed December 2, 2011. link

    Lord, Greg. Interview by author. Written notes. Cambridge, MA., November 20, 2011.

    “Overview - Google Data Centers,” accessed December 3, 2011. link

    Stephen Shankland, “Google uncloaks once-secret server,” CNET News, April 1, 2009, accessed December 2, 2011. link

    Steven Levy, “Jeff Bezos Owns the Web in More Ways Than You Think,” Wired, November 13, 2011, accessed December 2, 2011. link

    Swanson, Joe. Interview by author. Written notes. Cambridge, MA, November 20, 2011.

  6. Going Back to the Future, or to July 2011

    An Alternate 1985

    In the movie Back to the Future Part II, the main character Marty McFly commits the ultimate snafu by leaving a sports almanac in plain sight of an aged version of his arch nemesis, Biff, in the year 2015. Old Biff then hijacks the time-traveling DeLorean to travel back to 1955 and give his younger self the sports almanac from the future. Over the next 30 years, Biff uses it to amass a vast sum of money from gambling on sports, always knowing the winner. When Marty arrives back in 1985, he discovers an “alternate 1985” where Biff is his step-dad, mayor of his hometown Hill Valley, and owner of just about everything. Way to go, Marty.

    By comparison, Fiona Raby and Anthony Dunne of the Royal College of Art put forward the idea of “alternative nows”, offering visions of “how things could be right now if we had different values”. (Moggridge, 2006) Excluding Biff’s iron fist, their work remains noir in tone, suggesting, for example, a reality where children grow meat to power their television. Notwithstanding Guy Montag knocking on your door right now, I’d like to imagine a current state where the Knowledge Navigator actually caught on and gestural interfaces – rather than a mouse – were our means of interacting with a computer.

    Coupled with the growing momentum behind the internet of things, these themes formed an area of exploration for my thesis for about four months. The notion of creating new forms of internet-embodied objects as a graduate thesis is very appealing; rants about the need for more tangible interfaces, along with explorations by firms such as Berg, are evidence that interaction design can extend beyond the screen. Earlier sketches of my thesis included a shelving unit I built that glowed when I got mentioned on Twitter (I’ve got 78 followers, so not that often).

    But as I focused more on making physical objects, it became apparent that I needed to go beyond, as our chair Liz Danzico put it, “interesting explorations of an interaction design student”. I decided to shift my focus from investigations in academia back to what I had outlined in July 2011: consumption.

    Plunging into the Shonash Ravine

    Staying with the Robert Zemeckis riff, Back to the Future Part III finds Marty stuck in 1885 with only one way to get out: get a locomotive to push his time-traveling DeLorean up to 88 miles per hour, thus enabling time-travel (duh) to send him back to 1985. The kicker, apart from getting a locomotive to go that fast, was the Shonash Ravine cutting off the extra miles of train track, leaving little room for acceleration and error. Marty’s sidekick, the slapstick genius Doc Brown, calculated a point of no return by which they must commit to reaching 88 mph or plunge into the ravine. Spoiler alert: Marty makes it back to 1985.

    Among many environmentalists, there is a consensus that a point of no return exists for Earth – a point where we have done so much damage to the environment that human beings can no longer inhabit the planet. Doc Brown knew the exact point of no return on the train tracks; unfortunately, we cannot agree on when or what that point of no return is for our planet. Bill McKibben, outspoken author of The End of Nature, offers 350 parts per million of carbon dioxide in our atmosphere as the marker, and has founded a non-profit around that number. As of October 2011, we are at 388 ppm.

    So are we going to plunge into a metaphorical ravine? Yes and no. The ability of our air, land, and water to absorb pollution and still provide their bounty is debatable. Moreover, our behavior, particularly around the consumption of natural resources, is so far removed from the extraction, production, distribution, and disposal processes that we have difficulty measuring our collective impact, let alone an individual one. Lester R. Brown of the Earth Policy Institute summarizes, “We are crossing natural thresholds that we cannot see and violating deadlines that we do not recognize.” (Brown, 2008)

    Back to July 2011

    Earlier this year, I drafted a thesis proposal that outlined my exploration for the summer. It stated:

    “In great excess, we can consume digitally at near infinite levels which (I postulate) further removes us from the consequences of our actions. The removal of meaning from the actual object offers another opportunity for investigation on how we consume and ultimately experience these virtual forms.”

    To put it plainly, the further removed we are from the consequences of our actions, the more we will engage in those actions. Pertaining to our digital consumption habits, there are little to no barriers to producing, saving, sharing, and consuming digital content. It’s even the M.O. of internet-based services to make sure our digital lifestyle is seamless and without barriers.

    As we shift our content and communication channels to a digital format, we begin to lose sight of exactly how much data we amass. On a personal computer, it’s easy to notice how much hard drive space we’ve filled, but do you know how much data you have in your Gmail account? Facebook? Flickr? What about all of your online content collectively? One New York based startup, Dispatch, is looking to bring all your cloud-based content into one place – a benefit for those who need to manage their content, but not for those who want to know where their content is physically located. As John Thackara puts it, “These technologies are supposed to give us a clearer image – but by sanitizing the subject, they prevent us from knowing reality itself.” (Thackara, 2006)

    This brings me to server farms or data centers or whatever they’re called. They make cloud-based computing possible and can be found in the form of a small stack in a work closet or by the thousands, housed in a massive building in Oregon. What’s curious about these (we’ll call them data centers) is that they consume vast amounts of power. In 2010, global data centers “accounted for between 1.1% and 1.5% of total electricity use.” (Koomey, 2011) The industry recognizes the monetary, and environmental, costs involved with powering and maintaining such large facilities. Recent advances are making data centers more energy efficient; however, even as more extreme “green” measures are taken in the location and design of new facilities, many others, old and new, still run on greenhouse gas-emitting fossil fuels.

    Now, back to July 2011 and thinking about the consequences of our consumption. How much power does it take to send an email? Consequently, how much carbon dioxide is produced when I do so? Thankfully, research has been conducted around this question, and Mike Berners-Lee, founder of Small World Consulting, even wrote a book on the topic entitled How Bad Are Bananas? The Carbon Footprint of Everything. But do we keep building more data centers as our data cloud grows exponentially? What happens in 10, 20, 50 years? Are all my pictures and sent emails saved in a virtual shoebox forever? These questions and others lay the groundwork for my thesis as I move forward with my research, and I can’t wait to get started. Again.

    References:

    Berners-Lee, Mike. How Bad Are Bananas? The Carbon Footprint of Everything. Vancouver: Greystone, 2011.

    Brown, Lester R. Plan B 3.0: Mobilizing to Save Civilization. New York: W.W.Norton, 2008.

    Jonathan G. Koomey, Ph.D., “Growth in Data Center Electricity Use 2005 to 2010,” Analytics Press, August 1, 2011.

    Moggridge, Bill. Designing Interactions. Cambridge: MIT Press, 2006.

    Thackara, John. In the Bubble. Cambridge: MIT Press, 2006.

  7. New technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop.

    — Neil Postman