Smart Systems Security
The rise of smart systems represents a natural evolution of our information technology. But with this next generation of information systems, we are both vastly expanding our technological capabilities and consolidating, and handing over, an extraordinary amount of power to automated algorithms. Because of this, major consideration needs to be given to the appropriate use of that control and power, along with the more traditional concerns about securing access to it. The scale of the risk involved is unprecedented as our critical infrastructure becomes automated, networked, and remotely controlled via common smart platforms. Today a typical car’s airbags, steering, and brakes can all be hacked and controlled through the Internet for malicious ends. Control systems in nuclear power plants can be broken into, and with the rollout of IoT platforms, software will soon permeate all types of technologies as our critical infrastructure becomes increasingly dependent upon it.1
Autonomous agents can be understood as essentially advanced optimization algorithms. When we let a machine autonomously pursue a goal, we do not know exactly what actions it will take. With only a limited and narrow form of awareness, an agent trying to optimize for a small number of parameters can produce many negative externalities. For example, corporations are a form of agent within the free market capitalist system, designed to optimize for shareholder value and financial returns. We have long seen how this narrow focus on the profit motive, a product of the structure of the incentive system, can lead to negative environmental and social externalities; indeed it can be identified as a key driver of the current sustainability crisis.
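As a minimal sketch of this point (the scenario, action names, and numbers here are entirely hypothetical, invented for illustration), consider an optimizer that scores actions only on the single parameter in its objective, leaving an unmodeled environmental cost invisible to it:

```python
# Hypothetical illustration: two candidate actions, each with a profit the
# agent can see and an environmental damage that is NOT in its objective.
actions = {
    "dump_waste":  {"profit": 100, "env_damage": 90},
    "treat_waste": {"profit": 70,  "env_damage": 5},
}

def narrow_choice(actions):
    # Optimizes a single parameter (profit); externalities are invisible to it.
    return max(actions, key=lambda a: actions[a]["profit"])

def context_aware_choice(actions):
    # Incorporates the broader context by pricing the externality into the objective.
    return max(actions, key=lambda a: actions[a]["profit"] - actions[a]["env_damage"])

print(narrow_choice(actions))         # dump_waste
print(context_aware_choice(actions))  # treat_waste
```

The narrow optimizer is not malicious; it simply has no representation of the factors it is damaging, which is exactly the structural problem the corporation example describes.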
This illustrates how narrow analytical reasoning, the kind these smart systems will be based upon for the foreseeable future, often leads to negative externalities and unsustainable results when it is not supported by, or operating within, some broader awareness of the overall context, because it incorporates only a limited set of factors. We can say a system is under control and operating in a sustainable fashion when its actions are integrated with the broader context. The problem with smart systems is their narrow analytical form of awareness. As autonomous agents are given greater scope to define the means through which they achieve a given end, there is great potential for them to perform acts that are misaligned with the overall context: pursuing their ends narrowly, without awareness of the environment within which they operate. For this smart technology landscape to develop in a sustainable fashion there needs to be a systems-of-systems approach to control, where narrow and specific smart systems are nested within larger, more general forms of awareness, which are in turn coordinated and monitored by broader forms of human intelligence. A system is only really in control when awareness, responsibility, and power are all aligned. This means exercising control through a multi-tier framework, with more intelligent and aware systems guiding systems that have a lower capacity for information and knowledge processing.
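One minimal way to sketch this multi-tier idea (every name, rule, and scenario below is hypothetical, not a reference design) is a supervisory layer that reviews a narrow agent's proposed action against contextual constraints the agent does not model, escalating to a human when the proposal is unacceptable:

```python
# Hypothetical sketch of a multi-tier control framework:
# Tier 1: a narrow agent proposes whatever action best serves its goal.
# Tier 2: a broader supervisor checks the proposal against context-level
#         constraints and can substitute a safe fallback.
# Tier 3: the fallback hands control to a human operator.

def narrow_agent(goal):
    # Tier 1: optimizes for its goal with no awareness of the wider context.
    proposals = {"cool_reactor": "vent_to_atmosphere", "save_fuel": "shut_down_pumps"}
    return proposals[goal]

def supervisor(action, context):
    # Tier 2: broader awareness; vetoes actions that violate contextual rules.
    if action in context["forbidden_actions"]:
        return context["safe_fallback"]
    return action

context = {
    "forbidden_actions": {"vent_to_atmosphere", "shut_down_pumps"},
    "safe_fallback": "escalate_to_human_operator",  # Tier 3: human oversight
}

print(supervisor(narrow_agent("cool_reactor"), context))
# escalate_to_human_operator
```

The point of the structure is that awareness, responsibility, and power are aligned across the tiers: the layer with the broadest view of the context holds the final say over what is executed.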
While information and data may be growing at an exponential rate, this only makes intelligence an increasingly scarce resource. Information technology commoditizes information and data, driving their value down; but in doing so it increases the value of knowledge and intelligence, making them the scarce resources. Wherever there is demand for a scarce resource, there is a hierarchy based on access to that resource. This drives the new kind of hierarchical structure that is emerging out of the information revolution, captured in the acronym DIKW, which stands for data, information, knowledge, and wisdom.2 Controlling these systems in a long-term sustainable and secure way means understanding this hierarchy and building it into our systems of technology, so that the world of complex information systems we are going into is governed and controlled by genuine knowledge and insight into context and consequences.
Enabling and Constraining
The rise of smart systems can be seen as a whole new level in our development of technology, and like all technologies it holds out the possibility of either enabling us or constraining us, depending on how it is designed, developed, and operated. That being said, technology should not be understood as always being a neutral thing. Perhaps in the abstract, as a means to an end, it is neutral, but all technologies have to go through a design and development process, and how that process is carried out will determine to a large extent whether the technology is constructive or destructive in nature; whether it works ultimately to enable people or to constrain them.
It is possible to industrialize an economy without creating the negative environmental externalities that our particular set of industrial technologies created when we built them; thus those technologies cannot be said to be neutral. A combustion engine that emits toxic fumes into its environment is not a neutral thing; it is destructive in this sense. Technological development may be inevitable, and its evolution in the abstract may well be neutral, but how we conduct that process of development is neither inevitable nor neutral, and thus there is a responsibility associated with it. The negative externality of smart systems is the potential for an excess of narrow analytical reasoning (of which smart systems represent a massive proliferation) and a lack of broad synthetic reasoning to balance it and direct it towards constructive ends. The computer scientist Stuart Russell summarizes the issue:3 “this is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker — especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure — can have an irreversible impact on humanity. This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research — the mainstream goal on which we now spend billions per year.”
An excess of analytical reasoning and a lack of synthetic reasoning could take us into a world where we have an extraordinary amount of technical capability and power without sufficient knowledge and wisdom to direct it effectively, with unsustainable outcomes. For the opportunities in smart systems to be realized and the negative externalities limited, there would need to be a concomitant massive expansion in synthetic reasoning capabilities, and the appropriate control and alignment of smart systems within larger, more intelligent frameworks of organization; in such a way ensuring their correct alignment and ultimately the appropriate use of that power towards ends that are integrated with the broader context, and thus sustainable in the long term. As far back as 1960, Norbert Wiener warned:4 “we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” As the machines get smarter and more powerful, it is our job to keep thinking about the context, to think about the overall desired outcome, and to align the means with it. An expansion in technological means requires an expansion in human ends, and an alignment between the two, in order to develop in a sustainable way.
1. YouTube. (2018). Swimming with sharks – security in the internet of things: Joshua Corman at TEDxNaperville. [online] Available at: https://www.youtube.com/watch?v=rZ6xoAtdF3o [Accessed 13 Feb. 2018].
2. Systems-thinking.org. (2018). Data, Information, Knowledge, & Wisdom. [online] Available at: http://www.systems-thinking.org/dikw/dikw.htm [Accessed 13 Feb. 2018].
3. YouTube. (2018). 3 principles for creating safer AI | Stuart Russell. [online] Available at: https://www.youtube.com/watch?v=EBK-a94IFHY&t=41s [Accessed 13 Feb. 2018].
4. Lesswrong.com. (2013). Norbert Wiener’s paper “Some Moral and Technical Consequences of Automation” – Less Wrong. [online] Available at: http://lesswrong.com/lw/i2g/norbert_wieners_paper_some_moral_and_technical/ [Accessed 13 Feb. 2018].