My reading for this week centered on The Exploit, by Alexander R. Galloway and Eugene Thacker, which deals with the drastic change in human social structures in the modern era: top-down hierarchies have been replaced by dispersed social networks, changing the way that human interaction, mainly power/resistance relationships, functions.
Throughout the book, the vast 'super-organism' that Christakis and Fowler imagined (see my previous post) takes on a new paradigm, that of a large biological computer, and is viewed through a mathematical and system-oriented lens. Individuals in this set-up are not individuals, per se, but 'dividuals,' semi-autonomous entities (nodes) which make decisions, but whose decisions are in large part influenced by the informatic flow of the network as a whole; the very act of individuation creates a new flow which must eventually filter through the other 'dividuals,' undergo further dividuation, and then return to influence further decisions.
This brings to mind the 'memristor,' or 'memory resistor': in computer engineering, a two-terminal variable resistor which not only controls the amount of charge flowing across it, but remembers the amount of charge that has passed between its terminals and changes its resistance based on that memory. Humans function much the same way in Galloway and Thacker's model: the charge of information passes through them on its way through the network circuit, and the 'dividual' then decides how to process and pass that information on, based on previous information gathered from its various connections. The practical upshot of this is that humans in a network are often driven by the information within it to make decisions which they think are self-actuated.
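The memristor analogy can be made concrete with a toy sketch (mine, not the book's): a node whose willingness to pass information along, its 'resistance,' drifts with everything that has already flowed through it. All names and numbers here are invented for illustration.

```python
# Toy model of the memristor analogy: a 'dividual' node whose resistance
# to passing information changes based on the memory of past flow.
# Everything here is an illustrative assumption, not from The Exploit.

class DividualNode:
    def __init__(self, resistance=0.5):
        self.resistance = resistance  # 0 = passes everything, 1 = blocks everything
        self.history = []             # the 'charge' that has already flowed through

    def receive(self, signal):
        """Pass on a damped signal, then update resistance from memory."""
        passed = signal * (1.0 - self.resistance)
        self.history.append(signal)
        # Memory effect: resistance drifts toward the average past signal,
        # so previous flow shapes how future flow is processed.
        avg = sum(self.history) / len(self.history)
        self.resistance = 0.9 * self.resistance + 0.1 * avg
        return passed

node = DividualNode()
out1 = node.receive(1.0)  # first signal meets only the initial resistance
out2 = node.receive(1.0)  # identical input, different output: the node 'remembers'
print(out1, out2)
```

The point of the sketch is the last two lines: the same input produces a different output the second time, because the node's processing is a function of what has already passed through it.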
Galloway and Thacker explore this relationship to explain how these networks can be exploited with a minimum of force and a maximum of effect that could never be matched against the older centralized hierarchies. The focus here is different. It is impossible to 'bring down the system' in the traditional revolutionary sense, but much can be achieved by changing the system from the inside through manipulation of the flow of information. Much like the viruses in the Black Death (again, see the previous post), or more appropriately, the modern computer virus, the collective nature of modern networks makes it very easy to corrupt a single node, change the topology of the network, and shape it to other purposes.
Like our memristor, however, where there is current, there is also resistance. In fact, the more overt the power you exert against a network, the more likely it is to be resisted. Attacking a node directly, say by actively disparaging a member of a social network to which you are only marginally tied, will often strengthen the network against further attack, thanks to its panoptic tendency to monitor its own nodes and reinforce set protocols. Attacks must be more subtle, and are more likely to work if you can get the network to do the work for you instead.
The classic example is, of course, terrorism. By causing fear to spread throughout a network, you can change its behavior and ultimately, its shape, as ties (known here as edges) are moved or cut out altogether to isolate the individual nodes from 'contamination.' And it isn't even necessary to attack any specific node: you just need to implant the idea of contamination to get the ball rolling and let the reverberating nature of the network take care of the rest. Terrorists cause fear, the fear causes people to shut themselves off or curtail their normal activities, which damages social and economic systems to which they are tied, which causes more fear which causes more damage, etc.
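The fear dynamic described above can be sketched as a small simulation, again purely my own illustration with invented thresholds and rates: no node is ever attacked directly; fear is seeded at a single node, leaks along surviving ties, and frightened nodes sever their own edges to avoid 'contamination.'

```python
# Toy fear cascade on a ring network of n nodes. Seed fear at one node,
# let a fraction of it leak across each intact edge per round, and cut
# any edge touching a sufficiently frightened node. All parameters are
# invented for illustration.

def fear_cascade(n=8, seed=0, spread=0.6, cut_at=0.5, rounds=5):
    fear = [0.0] * n
    fear[seed] = 1.0
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}  # ring topology
    for _ in range(rounds):
        new_fear = fear[:]
        for e in list(edges):
            a, b = tuple(e)
            # Fear leaks across each intact edge, in both directions.
            new_fear[a] = max(new_fear[a], fear[b] * spread)
            new_fear[b] = max(new_fear[b], fear[a] * spread)
            # Frightened nodes sever the tie to avoid 'contamination'.
            if fear[a] >= cut_at or fear[b] >= cut_at:
                edges.discard(e)
        fear = new_fear
    return fear, edges

fear, edges = fear_cascade()
print(f"edges remaining: {len(edges)} of 8")
print(f"nodes touched by fear: {sum(f > 0 for f in fear)}")
```

With these toy parameters, half the ties are severed by the nodes themselves within two rounds, yet the fear still reaches every node: the attacker did nothing after the initial seed, which is exactly the minimum-force, maximum-effect point.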
One of the more subtle ways of exploiting networks and taking them over from within is to gain access to information and modify it slightly, in much the same way as a computer virus does: finding a vulnerable node to spread the new 'meme' about until it is reinforced by the network itself. In biopolitics, this is achieved by taking control of media and education outlets, 'informing' or 'teaching' the corrupted information to vulnerable nodes (the uninformed, the young) and then relying on those nodes to effect change for you by creating new protocols or even restructuring the network.
Is there a defense against this sort of attack? The Exploit references homogeneity, a lack of diversity, as the key vulnerability in networked systems. But without homogeneity, the internet and computer communication in general would fail. So perhaps there is a biopolitical version of a virus scanner that 'identifies' infected nodes and reacts to them by creating resistance, in much the same way as it would to a direct attack? Some sort of documented and set protocol that the network could reference each time information passes from node to node, strengthening homogeneity to the point that deviations from it are recognized and corrected? (Let's call it a 'constitution.' That should do the trick. Well, until one of the nodes decides that it is 'living' and can be changed at the whim of individual nodes, at which point the whole network falls apart.)
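A minimal sketch of what such a 'constitution' scanner might look like, under heavy assumptions of my own: information is reduced to a vector of positions on a few issues, a fixed reference defines the uncorrupted norm, and anything deviating past a tolerance is flagged and pulled back toward the reference. None of this is from The Exploit; it just makes the enforcement idea concrete.

```python
# Toy 'constitution' check: a fixed reference protocol defines what the
# network considers uncorrupted; out-of-band messages are flagged and
# clipped back inside the tolerance band. Invented illustration only.

REFERENCE = [0.5, 0.5, 0.5]  # the fixed, documented protocol
TOLERANCE = 0.3              # how much deviation the network accepts

def deviation(message):
    """Largest per-issue distance from the reference protocol."""
    return max(abs(m - r) for m, r in zip(message, REFERENCE))

def scan_and_correct(message):
    """Flag a deviant message and pull it back toward the norm."""
    if deviation(message) <= TOLERANCE:
        return message, False  # passes unchanged
    corrected = [r + max(-TOLERANCE, min(TOLERANCE, m - r))
                 for m, r in zip(message, REFERENCE)]
    return corrected, True

ok_msg, flagged = scan_and_correct([0.6, 0.4, 0.5])   # in-band: untouched
bad_msg, flagged2 = scan_and_correct([1.0, 0.5, 0.0]) # out-of-band: corrected
print(ok_msg, flagged)
print(bad_msg, flagged2)
```

Note what the sketch also exposes: the scanner only works while REFERENCE stays fixed. The moment the reference itself can be rewritten by the nodes it polices, the 'living constitution' problem from the paragraph above reappears.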
Ok, so we've seen that the system falls apart even when a strong protocol is in play, because corruption can come from within a network as well as from without (one of the two problems Galloway and Thacker identify). Perhaps the problem in both cases is size. The more nodes in an individual network, and the more diversity in individuation, the more likely corruption is to spread. Perhaps the strongest network is made up of several smaller sub-networks that are rigidly homogenized to prevent corruption, but connected in a larger and more diverse pattern flexible enough to react to threats? What if, for example, the UN weren't made up of nations but of ideologies? Nations would still exist, in a fashion, as legal and cultural entities, but what if all the world-wide decisions were based on small, focused groups of a million or so like-minded people? What would the world look like? I think that's worth looking into...