One example of our collective intelligence can be observed in the way we design our technological artifacts. I am not referring to their sophistication but to how we identify future risks and protect ourselves against them. Indeed, one of the paradoxes of our technology is that it must respond to two contradictory risks: the risk that it will not obey those who manage it and the risk that it will obey them too much. According to this distinction, there would be one type of accident attributable to impotence and another that arises from omnipotence. The second type worries us more than the first; it is more troubling to be at the mercy of men than of machines.

The first type of risk is the more obvious. Complex systems usually function autonomously; without this condition, we could not possess any sophisticated technology. But this autonomy is often accompanied by ungovernability, and the very systems we have configured escape our hands and turn senselessly against us. Literature abounds with fantasies, now strikingly realistic, about creations that take on lives of their own and rebel, from Faust and Frankenstein to the general characterization of the present world as one that is out of control (as described, for example, in Anthony Giddens’s book Runaway World). If we consider specific problems of contemporary society, there is a multitude of examples of this absence of control, and the difficulty of regulating financial markets is perhaps the most disturbing one. When we say that something is not sustainable, for example, we are saying that we were able to set it in motion but cannot guarantee that its future functioning will obey the intentions that justified launching it, or, simply, that it may collapse. Or consider the familiar example of how much our relationship with the technology we use has changed. We have grown accustomed to tools whose logic is unfamiliar to us, so that hardly anyone knows how they work or how to fix them. Even the specialists we depend upon mostly replace parts rather than perform repairs. When something breaks down, it does so irreparably.

In fact, an automatic pilot is a good example of the paradox that arises when we ask who is in control of a situation. A pilot believes that he is flying the airplane, but in many respects the reverse is true. The pilot places the system in operation, but it is then the machine that specifies, in detail, everything the pilot should do. The pilot must adapt to the logic of the flight. A system is intelligent when it can even disobey certain absurd orders. No one in their right mind should regret this principle: we owe to it an enormous number of mechanisms that make our lives easier and, in some cases, literally sustain them.
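To make this logic concrete, the following sketch (in Python, with invented limits and function names, and nothing like a real avionics interface) illustrates the structure being described: the operator issues a command, but what the system actually executes is the nearest command that stays inside a safe envelope.

```python
# Illustrative sketch only: a controller that accepts an operator's command
# but refuses to carry it out beyond a safe envelope. Names, limits, and the
# whole scenario are hypothetical assumptions for the example.

def clamp(value, lowest, highest):
    """Keep a commanded value inside its permitted range."""
    return max(lowest, min(highest, value))

def apply_command(commanded_pitch_deg, commanded_speed_kt):
    """Return the command the system will actually execute."""
    SAFE_PITCH = (-15.0, 25.0)   # assumed safe pitch range, in degrees
    SAFE_SPEED = (140.0, 350.0)  # assumed safe speed range, in knots

    executed_pitch = clamp(commanded_pitch_deg, *SAFE_PITCH)
    executed_speed = clamp(commanded_speed_kt, *SAFE_SPEED)

    overridden = (executed_pitch != commanded_pitch_deg or
                  executed_speed != commanded_speed_kt)
    return executed_pitch, executed_speed, overridden

# The operator asks for a 40-degree dive at 90 knots; the system executes
# the nearest safe command and reports that it has overridden the request.
print(apply_command(-40.0, 90.0))   # -> (-15.0, 140.0, True)
```

The point is only the structure: the human sets the system in motion, but the executed command passes through limits that the human cannot simply brush aside.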

The other significant risk is that technology depends excessively upon the persons who control it. There are accidents and catastrophes whose origin lies not in a lack of power on the part of those who control technological systems but in the fact that their power was excessive. Consider railway accidents caused by excessive speed, where no mechanism prevented the driver from exceeding a critical limit (as in the derailment at Angrois, near Santiago de Compostela, Spain, on July 24, 2013). The most dramatic case was that of the suicidal pilot of the Germanwings aircraft that crashed into the French Alps (March 24, 2015). In both situations we find the excessive power of one person over an insufficiently intelligent artifact, an artifact subject to the decisions of whoever determined its speed, including the freedom to hurl it into a mountain. While all the alarms were sounding, there was no mechanism that would force that person to correct his course. Many systems are intelligent precisely because they are capable of opposing the explicit intentions of those who control them. The sophistication of control mechanisms is expressed in systems that prevent the person in charge from doing as he or she wishes, from constitutional limits in the political system to automatic braking systems in our vehicles.
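A toy sketch, again with invented thresholds and no pretension of resembling any real signalling system, can show the kind of supervision the paragraph has in mind: a system that compares the actual speed with the limit for the current stretch of track and, if the driver does not respond, brakes over his or her head.

```python
# Toy sketch of automatic speed supervision. The limit, the margin, and the
# intervention rules are assumptions for illustration only.

SECTION_LIMIT_KMH = 80.0     # permitted speed on the current stretch
WARNING_MARGIN_KMH = 5.0     # tolerance before the system intervenes

def supervise(current_speed_kmh, driver_brake_applied):
    """Decide whether the system must intervene regardless of the driver."""
    if current_speed_kmh <= SECTION_LIMIT_KMH:
        return "no action"
    if current_speed_kmh <= SECTION_LIMIT_KMH + WARNING_MARGIN_KMH:
        return "warn driver"
    # Beyond the margin, the driver's wishes no longer matter:
    # braking is commanded whether or not he or she cooperates.
    return "service braking" if driver_brake_applied else "emergency braking"

for speed in (75, 83, 120):
    print(speed, "->", supervise(speed, driver_brake_applied=False))
```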

Let me put it somewhat provocatively: the paradox of any intelligent system is that it does not allow us to do as we wish. Consider some examples. What a constitution most resembles is a series of prohibitions and limitations; it even hinders its own modification, establishing procedural conditions and qualified majorities to ensure that changes are not occasional occurrences or the result of narrow majorities.

An ABS braking system prevents us from braking as hard as we want in a moment of panic, since doing so would destroy our stability and cause more damage than not braking at all. Even fear is an instinct that protects us from ourselves. In this regard, we might recall the story of a patient whose brain injury prevented him from experiencing certain emotions, such as fear, to the point that he could do some things better than others could, for example, driving on icy roads, because he avoided the natural reaction of slamming on the brakes when the car skids (a case described in Antonio Damasio’s book Descartes’ Error). Financial products can be purchased as freely as a person wishes (and can afford, of course), but the experience of economic crises has led us to tighten the requirements for buying them, obliging lending institutions to ensure that purchasers possess the necessary solvency and knowledge when the products are not risk-free. In a certain way, systemic intelligence has configured a series of protocols so that people cannot do as they wish when especially hazardous artifacts, such as vehicles or financial products, are involved. In fact, there is a thriving market for something that, without any exaggeration, could be called “protecting people from themselves,” such as the market for “behavioral apps” that warn, direct, and monitor us. Human beings do not always wish to do what they want, and self-limitation is a source of reasonable behavior.
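The ABS idea mentioned above can be reduced to a simplified sketch (one wheel, invented numbers, no real physics or automotive interface): when the driver’s demanded braking would lock the wheel, the system delivers less braking than was asked for.

```python
# Simplified one-wheel illustration of the ABS idea: the system grants less
# braking than the driver demands whenever wheel slip becomes too high.
# The threshold and the reduction factor are invented for the example.

SLIP_THRESHOLD = 0.2   # slip ratio above which the wheel is close to locking
PRESSURE_CUT = 0.5     # fraction of the demanded pressure actually applied

def slip_ratio(vehicle_speed, wheel_speed):
    """0 means free rolling, 1 means a fully locked wheel."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def brake_pressure(demanded, vehicle_speed, wheel_speed):
    """Return the pressure the system applies, not the one the driver wants."""
    if slip_ratio(vehicle_speed, wheel_speed) > SLIP_THRESHOLD:
        return demanded * PRESSURE_CUT   # release pressure so the wheel keeps turning
    return demanded

# Panic braking at 100 km/h while the wheel has already slowed to 60 km/h:
# the driver demands full pressure (1.0) but receives only half of it.
print(brake_pressure(1.0, vehicle_speed=100.0, wheel_speed=60.0))  # -> 0.5
```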

Hence, it can be affirmed without exaggeration that, from the most modest technology to the most sophisticated political procedures, control systems are more intelligent the greater their capacity to withstand the obstinacy of those who manage them. That is what Adam Smith and Karl Marx, among others, were trying to teach us: that social systems possess a dynamic of their own that functions independently of the participants’ wishes. All human progress depends upon this difficult balance between allowing human intentions to govern events and, at the same time, preventing arbitrariness.

The Germanwings accident may be attributable to the fact that the dangers posed by the very persons operating a technological mechanism had been overlooked as a consequence of protection against terrorism, which tends to perceive the enemy as someone situated, literally and metaphorically, on the outside. It should be recalled that the pilot flying the aircraft began his maneuver of crashing into the Alps once he was alone, and that neither the other pilot nor the rest of the crew could gain access to the locked cockpit when his suicidal intention was detected. Our security protocols have grown more sophisticated since September 11, but with more attention paid to external enemies than to internal ones, to the outside terrorist than to a deranged pilot. Hence, among other measures, the possibility of locking an aircraft’s cockpit door from the inside and the reinforcement of those doors. The whole paradox of the matter can be summed up in the question of how to face risks that originate in our own security measures and how to avoid excessive protection.

An intelligent system is, so to speak, one that protects us not only from others but from ourselves. It can be configured according to our experience with the dangers we create for ourselves and against the assumption that our worst enemies are always other people. To arrive at this kind of counterintuitive intelligence, it is necessary to have recognized, for example, that a society is threatened not so much by the nuclear weapons in an enemy’s possession as by its own nuclear power plants; that we are less threatened by an enemy’s biological weapons than by certain experiments within our own scientific system; less by an invasion of foreign troops than by our own organized crime and the demand for drugs; and less by the hunger and death caused by war than by the disability and death caused by traffic accidents (as Helmut Willke explains in his book Regieren). What prevents pluralistic societies from freely deciding their destiny is not so much an external impediment as a real lack of internal agreement. I conclude that solutions depend less upon individuals than upon improving the systems that protect us from people, from our own errors, and from our own madness or wrongdoing.