Making the Internet Safe: Tying Tim Harford’s thoughts on catastrophic accidents to the web

I just attended a talk from economist/journalist Tim Harford where he likened the financial meltdown (a much more extreme term than governments’ beloved euphemism of ‘economic downturn’) to a catastrophic industrial accident, such as when nuclear reactors melt down or oil rigs explode. Throughout the talk, all I could think about was how well Harford’s industrial accident analogy could apply to the dreaded ‘future’ of the Internet, when everything online explodes from lack of regulation. And so, when you have a hammer…

The end is near

First, I will be candid: all you have time to read in grad school is half-books – you can’t sit around mulling over entire books, so you must get the gist of the author’s idea and move on. These perceptions are founded on my half-book interpretations, and I invite you to correct and supplement them in the Reply box as you see fit.

My understanding of Jonathan Zittrain’s (2008) assertion in his book, “The Future of the Internet – And How to Stop It”, is that the very qualities of the Internet that allow us to generate wonderful content (e.g. lolcats), develop complex code (e.g. Google Doodles), and innovate limitlessly (e.g. win at WoW) are the same qualities that allow for the production of repulsive or harmful online content, such as hate speech, child porn, and destructive code/viruses. Within this cyberpollution there are worms that have brought down hundreds of thousands of computers at a time and, Zittrain posits, just because a total Internet meltdown hasn’t happened yet doesn’t mean it won’t. Hoping that widespread terror of online catastrophes will spur policy-makers to action, Lessig (2006) uses similar reasoning to call for regulation of the Internet.

Not so fast! This is where Tim Harford comes in: he spoke about how complex systems that are tightly coupled (where many things are contingent on each other) are frequently obliterated by the very safety systems that were put in place to maintain them. With wires, nodes, users, telecommunications companies, and governments all intertwined in the complex system of the Internet, we had better be extremely cautious when formulating regulations for the global staple of this century.

Safety isn’t simple

Harford discussed two aspects that make it very difficult to increase people’s safety:

  • Safety systems make people careless. People tend to take more risks when safety regulations are in place. It’s true: when I function in a locked-down environment, my first inclination is to click on anything. If authoritative bodies tell me that they’ve secured my web experience, I’m more likely to put my judgment on the shelf and assume that everything I can access is safe by default. I bet others would do the same. The problem is that even with regulations, we will never reach a point where everything online is truly ‘safe’, because the Internet is always changing and evolving.
  • Safety systems introduce new complexities and make things even more tightly coupled – which causes them to fall harder when they come crashing down. If government regulates the Internet, it introduces into the equation the massive complexity of a giant, multi-departmental, longstanding bureaucracy. By their nature, governments are slow to react and their resources are difficult to mobilize, which makes them excellent targets for hackers who could identify some weakness in mandated code (e.g. a flaw in anti-circumvention technologies, a way into identity databases) and act long before the government could develop a defense.

But we’ve got to do something!

As you can tell from previous posts, I’m not into arguments that leave us in a state of paralysis with no way out. Identifying the above concerns still doesn’t address Zittrain’s worst-case scenario and it doesn’t make us any safer. Harford concluded his talk with three things that actually do help to protect against catastrophes:

1. Build/re-build the system with safer parts; or, remove the fundamental hazard. This is a large portion of what Lessig talks about: we can alter the very code of the Internet in order to build in regulation. We can install identity mechanisms that work with individuals’ keys or digital identifiers in order to keep children out of sites with mature content. We can make e-mail programs that spot keywords indicating that something might be a scam (that’s my own example, but hey, it could really be helpful for people who are less computer-savvy). However, we need to be conscious of who might be favoured or disadvantaged by new types of code. Does code that prevents the copying of music stifle other artists from sampling content within reason, while increasing revenues for record labels and leaving the actual band without much profit (as asserted by Benkler, 2007)? Does a special code to identify individuals also give government the means of surveilling online activity without a warrant?
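To make my keyword-spotting example a little more concrete, here is a toy sketch of what such a filter might look like. The phrase list and threshold are entirely made up for illustration – a real filter would be far more sophisticated (and far harder to fool):

```python
# Toy sketch of a keyword-based scam flagger. Flags a message as a
# possible scam if it contains at least `threshold` suspicious phrases.
# The phrases and threshold below are invented for illustration only.
SUSPICIOUS_PHRASES = [
    "wire transfer",
    "verify your account",
    "you have won",
    "urgent reply needed",
    "confirm your password",
]

def looks_like_scam(body: str, threshold: int = 2) -> bool:
    """Return True if the message contains `threshold` or more suspicious phrases."""
    text = body.lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    return hits >= threshold

print(looks_like_scam("URGENT REPLY NEEDED: you have won a prize!"))  # True
print(looks_like_scam("Lunch tomorrow?"))  # False
```

Even this trivial version shows the trade-off discussed above: the same matching logic that protects less computer-savvy users could, with a different phrase list, be used to monitor what people write.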

2. Provide people with better information so they can make better decisions. In my view, this is the most important point because it counteracts a concern mentioned previously: if individuals understand why they should take safety precautions, they are less likely to be careless. In teaching people the full repercussions of online actions and giving them the agency to choose safer options, we grant them the same type of responsibility that we enjoy as democratic citizens in everyday life. In the same way that I won’t jump into a lake if I can read the sign that says the water is toxic, I won’t make my Facebook profile public if I know what that means for my self-presentation to future employers. We should equip our population with media literacy, computer skills, and Internet know-how of this type, starting in early high school.

3. Spot trouble early – those who work in the complex system are the ones most likely to be first to identify flaws/weak points. However, we are very bad at protecting and compensating these people. With the crowdsourced environment of the Internet, this should be an easy one. We are already seeing vigilante groups, such as Anonymous, playing a role in ‘cleaning up the web’ by ensuring there are repercussions for people who post some horrible stuff. However, I believe this should not be relegated to fringe groups or delegated to an arm of the government. Just as Wikipedia is a collaboration of experts, there could be some coordinated way to mobilize web experts into a conglomeration of people who increase the safety of the Internet. Governments could subsidize the creation of means for reporting suspicious behaviour and compensate coders who stop destructive worms. In fact, I would suspect that this sort of reporting behaviour, coupled with how fast news moves on the Internet, is one of the reasons the web hasn’t yet melted down (when was the last time you didn’t hear about an e-mail virus before it hit your inbox?). This jibes with the Internet’s end-to-end principle, mentioned by Zittrain, where things happen (functions and improvements get implemented) at the endpoints of the network (by programmers, etc.) and are distributed instead of centralized, which makes them more reliable.

Even if you don’t find me to be convincing, I guarantee you that Harford’s industrial accident analogy is very effective when applied to the economic crisis! However, I think it works well when reminding us to consider the multiple possibilities for making the Internet safer before we lapse into terror mode and yield all our rights to a regulating authority, which will never be infallible.

Benkler, Y. (2007). The wealth of networks: How social production transforms markets and freedom. New Haven: Yale University Press.

Harford, T. (2011). Adapt: Why success always starts with failure. London: Little, Brown. [I assume most of Harford’s ideas from his talk come from Chapter 6: “Preventing financial meltdowns or: Decoupling”]

Lessig, L. (2006). Code: And other laws of cyberspace, version 2.0. New York: Basic Books.

Zittrain, J. (2008). The future of the Internet – And how to stop it. New Haven: Yale University Press. [Although Zittrain’s book was published after Code, Lessig refers to Zittrain’s initial ideas and even mentions that Zittrain is in the process of writing a book about them – just in case you were wondering!]
