Sunday, May 1, 2011

Protecting The Net From Itself...

The internet is its own worst enemy, according to Jonathan Zittrain (see what I just did there), and I tend to agree with him for the most part. The openness and anonymity of interaction on the net does lead to a great deal of innovation, but at the same time, it also allows for a wide variety of collectivist thought (endless repeated memes and 'remixing'), anti-social behavior (griefing, trolling, flaming, vandalism, spamming) and outright crime (malware, spyware, botnets, etc.).

As a result, many are turning towards 'Tethered Appliances' for their digital content: technology that is easy to use, all but impossible to tinker with, and directly controlled by an external vendor. The appeal is obvious: all the consumer has to do is point and click, leaving the technical problems for the licensing company to figure out. The downside is not as obvious: the inability of users to modify or enhance the content leads to innovative stagnation.

In Zittrain's estimation, the internet and all its potential for disruptive innovation is doomed to become largely irrelevant in the face of tethered appliances unless something is done to make users feel more secure in their online experiences. He does not see overt regulation as the answer, but he does believe that a few systemic changes might help to curb the tide of 'bad code' without overly restricting the generative nature of the net. The ideas that resonated most strongly with my own, particularly where my upcoming research paper is concerned, are detailed below.

ISPs AS GATEKEEPERS
One of the main problems he identifies is the end-to-end nature of content dissemination and the difficulty of monitoring and filtering it. Because ISPs act more like super-highways than gateways, any information, good or bad, can get to any destination, and the onus for filtering out bad material falls on the end user, who is usually ill-prepared to do so. Zittrain's temporary solution, until more effective end-to-end solutions can be found, is to have the ISPs actively filter out malicious content and act as 'Gatekeepers.' This makes total sense, as the ISP is the only real consistent choke point in the whole system. If an algorithm could be designed to detect malicious code or the activity of zombies and botnets, the internet would become a great deal more secure.
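To make the idea concrete, here's a toy sketch in Python of what that kind of gatekeeping heuristic might look like. The traffic records, port list and threshold are all hypothetical inventions of mine, not anything a real ISP runs; the point is just that the ISP sits at a choke point where per-subscriber behavior can be observed:

```python
from collections import Counter

# Hypothetical traffic log: a list of (source_address, destination_port)
# tuples, one per packet, observed by the ISP over one monitoring window.
BOT_PORTS = {25, 6667}      # e.g. mass SMTP spam, classic IRC botnet control
PACKET_THRESHOLD = 10_000   # packets per window before a host looks zombie-like

def flag_suspected_zombies(packets):
    """Return source addresses whose traffic volume on bot-associated
    ports exceeds the threshold within one monitoring window."""
    counts = Counter(src for src, port in packets if port in BOT_PORTS)
    return {src for src, count in counts.items() if count > PACKET_THRESHOLD}
```

An ISP could quarantine flagged subscribers rather than cut them off outright, but that is exactly the kind of costly customer interaction the providers are reluctant to take on, as discussed below.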

Unfortunately, ISPs do not want to engage in this sort of activity, for both ethical and commercial reasons. It is expensive and time-consuming to separate good information from bad, and doing so would require the providers to interact directly with millions of net users on a regular basis to correct cases of misidentified content. Governments can take control of ISPs, but this leads to a less desirable environment for generative purposes, especially in authoritarian states like China, where information is heavily filtered and monitored to protect the government. That just encourages users to bypass those particular ISPs in any way possible, taking their business off-road, so to speak, and defeating the purpose of the ISP as Gatekeeper.

THE DUAL-BRAINED COMPUTER
Now, this idea had occurred to me early on as I read Zittrain's book The Future of the Internet and How To Stop It, so I was pleasantly surprised when I came upon his version of it in the solutions section of the book.

Basically, you split your computer into two parts: the frontal lobe, which takes in information and is easily 'reset,' and the rest of it, which stores your important information and permanent programs. Most knowledgeable users already do something similar by partitioning drives or storing permanent data on external media. The difference in this case is that your whole computer is separated from its 'desktop' by a tightly controlled gatekeeper. A sort of 'personal ISP' that controls throughput.

You can think of your PC as the castle, with a gate and a moat, and the desktop as the village outside the castle which provides sustenance for the castle's inhabitants. All sorts of things can happen in the village. A spy can enter the village via a trade caravan, for instance, but as long as the gatekeepers check all the goods coming into the castle, he'll never know what goes on behind those walls. The spy can even foment rebellion and get the villagers to turn on you, but as long as you have all your systems of governance inside the walls, you can fight off the rebellion and then sally out to 'reset' the village by clearing out the rabble-rousers (or in the PC's case, the whole village) and starting again.

This whole concept does tend to move internet usage towards the 'tethered appliance' model, with the internet effectively a separate appliance from the main computer, but the difference is that the user is the one who decides what is and isn't permissible content and when to reset. The main advantage is that the arduous process of reloading settings and programs becomes a thing of the past, as only the ephemeral data related to web browsing has to be reset.
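Here's a minimal sketch of that split in Python. The class, the 'trusted' check and the reset-from-a-clean-image mechanics are my own illustrative assumptions, not Zittrain's design:

```python
class DualBrainedPC:
    """Toy model: an ephemeral 'frontal lobe' for internet-facing activity,
    a protected zone for permanent data, and a gatekeeper between them."""

    def __init__(self, clean_image):
        self.clean_image = dict(clean_image)   # pristine base of the frontal lobe
        self.frontal_lobe = dict(clean_image)  # ephemeral zone: browsing, downloads
        self.protected = {}                    # the 'castle': permanent programs and data

    def gatekeeper(self, name, data, trusted):
        """The gate and moat: only explicitly trusted goods enter the castle."""
        if trusted(name, data):
            self.protected[name] = data
        # untrusted content simply stays out in the 'village'

    def reset(self):
        """Raze the village and rebuild it from the clean image;
        the castle is untouched."""
        self.frontal_lobe = dict(self.clean_image)

pc = DualBrainedPC(clean_image={"browser": "v1.0"})
pc.frontal_lobe["download.exe"] = "suspicious payload"
pc.gatekeeper("download.exe", pc.frontal_lobe["download.exe"],
              trusted=lambda name, data: name.endswith(".txt"))
pc.reset()  # the payload is gone; the protected zone never saw it
```

The reset is cheap precisely because the frontal lobe holds nothing you care about keeping.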

The only problem is, again, the technical ability of the user to reinforce their gate properly so that the gatekeeper can distinguish between good and bad code. And there is still end-to-end contact with every other user on the net, and the most malicious of these may well send an 'army' of bad packets to 'besiege' your gate and overwhelm it. In other words, your castle walls are only as strong as the force defending them.

THE ROTATING SUB-NETWORK PROTOCOL
Ok, this is my own idea, based on the concepts illustrated in Zittrain's book, as well as a few other readings from earlier in the semester concerning networks.

The problem with most networks, as pointed out above, is that they are too damned big, diverse and distributed. The only active bottleneck in the system occurs when you build a gate at your end-point, but that gate is often easily overrun or fooled into letting in malicious code.

The solution for these problems comes in two parts. The first is to institute Zittrain's 'Gated Community Network' concept, in which a smaller net of concentrated interest is created (like 'Trekkie-net' or 'Puritan-net'). Each member of this exclusive community actively participates in it; access outside the community is not end-to-end, but allowed only through a special ISP-like 'gatekeeper.'

The second part is the gatekeeper itself: not a single gatekeeper, but a trio (or more) of gatekeepers that validate each other. Basically, one of these nodes acts as the 'desktop' of the dual-brained computer mentioned above. After any activity is completed within that node, the information filters through a second node that assesses long-term harm, and eventually into permanent storage. In case of corruption, the third node can 'reset' the affected node(s) using its base code.
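Here's a rough Python sketch of how that trio might hang together. The node roles, the checksum-style validation and all the names are assumptions on my part; treat it as a thought experiment, not a worked-out protocol:

```python
import hashlib

BASE_CODE = "gatekeeper-base-v1"   # the pristine code every node should be running

def checksum(code):
    return hashlib.sha256(code.encode()).hexdigest()

BASE_SUM = checksum(BASE_CODE)

class GatekeeperNode:
    def __init__(self, name):
        self.name = name
        self.code = BASE_CODE

    def is_intact(self):
        """A node validates by comparing its code against the known-good sum."""
        return checksum(self.code) == BASE_SUM

    def reset_from(self, reference):
        """A corrupted node is restored from another node's base code."""
        self.code = reference.code

live, reviewer, archive = (GatekeeperNode(n) for n in ("A", "B", "C"))
storage = []

def pass_through(data, harmful):
    """Activity completes on the live node, is screened by the reviewer
    for long-term harm, and only then reaches permanent storage."""
    if live.is_intact() and reviewer.is_intact() and not harmful(data):
        storage.append(data)

pass_through("forum post", harmful=lambda d: "malware" in d)

live.code = BASE_CODE + " + injected junk"   # simulate corruption of the live node
if not live.is_intact():
    live.reset_from(archive)                 # the third node resets the affected one
assert live.is_intact()
```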

These sub-networks are not mandatory, but are a way for like-minded individuals to gather together and use the principle of motivated self-interest to keep the environment stable, in much the same way the users of Wikipedia 'overwhelm' bad edits. This 'Social' reinforcement is complemented by 'Code' reinforcement in the form of the Gatekeepers, which can be set to strictly enforce protocol (like 'no pornography' on Puritan-net or 'no Next Generation content' on Old School Trekkie-net).
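As a toy illustration of that 'Code' reinforcement, here's what community-specific rules baked into a gatekeeper could look like. The rule wording and the simple substring matching are hypothetical stand-ins for whatever a real protocol would enforce:

```python
# Each sub-network's gatekeeper carries its community's protocol rules.
COMMUNITY_RULES = {
    "Puritan-net": lambda post: "pornography" not in post.lower(),
    "Old School Trekkie-net": lambda post: "next generation" not in post.lower(),
}

def admit(network, post):
    """The gatekeeper admits content only if the community's protocol allows it."""
    rule = COMMUNITY_RULES.get(network, lambda _post: True)
    return rule(post)

assert admit("Old School Trekkie-net", "Kirk vs. the Gorn rewatch thread")
assert not admit("Old School Trekkie-net", "Best Next Generation episodes?")
```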

I'll be thinking more about this concept and trying to develop it for my research paper, but at first glance, this might be a good method of creating protocol-first networks that are resistant to external corruption...
