Social complexity is the study of nonlinear social processes through the use of models from complexity theory combined with computational methods.1 Social complexity theory is an alternative to more analytical methods, based on a paradigm inherited from systems theory that is primarily concerned with synthetic reasoning. Complexity science represents an alternative approach to our traditional scientific framework. As such, it brings with it a coherent alternative paradigm, a new set of theoretical models based on that paradigm and a new set of computational methods. Some of the major modelling frameworks that form part of this paradigm include network theory, system dynamics and nonlinear systems theory. Methodologically, social complexity science is characterized by the use of computational tools such as agent-based modeling and network analysis.2
The social sciences can be loosely defined as the study of human beings and the relations between those individuals that give rise to macro patterns of social organization called society. Like all empirical sciences, they are engaged in the enterprise of trying to describe some subset of phenomena in our world. In this case, the phenomenon of interest is human society, and we do this by amassing empirical data and developing logically consistent theoretical models to effectively interpret patterns within that data. However, this scientific enterprise does not happen in a vacuum; it happens within a certain cultural context and depends on a certain set of philosophical assumptions about the way the world is. Physicists don’t go into the laboratory every day and question whether there really is such a thing as an objective universe out there; that is a philosophical question rather than a scientific one. What really happens is that researchers go into their lab every day and operate based upon a certain set of assumptions about the way the world is, what important questions to ask, what valid processes of reasoning there are, and so on, and as long as the whole community of researchers shares those assumptions, they have the supporting context within which to conduct their collaborative research enterprise.
This set of assumptions that supports a scientific domain and constitutes the whole philosophical framework within which researchers work is called a paradigm. The Oxford dictionary defines a paradigm as “a worldview underlying the theories and methodology of a particular scientific subject”. The paradigm or set of assumptions within which the enterprise of modern science operates was born approximately five hundred years ago with the massive cultural transformation of the Renaissance and scientific revolution that gave us the cultural foundations of our modern world.
The Clockwork Universe
This new paradigm really came together and first found its most coherent full expression within the work of Sir Isaac Newton, whose work was extremely influential for centuries to come and laid the foundations of modern science; built into this foundation was a set of assumptions about how the world works. This set of assumptions is called the Newtonian paradigm or the clockwork universe; in slightly more technical terms it can also be called linear systems theory. Linear systems theory forms the backbone of virtually all of modern science. It is used in every domain from physics to biology to economics to psychology. The Newtonian paradigm is materialistic and atomistic in nature. It sees the world as a set of isolated objects that interact in a linear cause-and-effect fashion.3 The Newtonian clockwork universe gets its name because within this paradigm the universe is seen as a big mechanical clock. It continues ticking along like a perfect machine with its gears governed by the laws of physics, making every aspect of the machine perfectly orderly and predictable. Within this paradigm, we can understand and know this whole machine of the universe by understanding all the parts and the simple linear interactions between these parts. The whole clock is clearly nothing more than the sum of its parts and thus, to understand it, we can use the process of inquiry called reductionism, also called analysis, whereby we break the whole thing down into isolated individual parts and study the properties of those parts in isolation and how they interact with each other. If we can then create a set of equations that describe this, it is game over. We have completed this process of enquiry and now know everything there is to know.4
This approach to scientific enquiry, called analysis, was very successful within classical physics, came to define what modern science is considered to be, and was applied to many different areas throughout the 18th, 19th and 20th centuries. Its application within the social sciences has given us what is called methodological individualism, used in many different areas of the social sciences, most prominently within standard economics. Methodological individualism is the requirement that causal accounts of social phenomena explain how they result from the motivations and actions of individual agents. It holds that the only things in the social world that are real are the things you can touch and see, namely individual humans. This is the materialistic and atomistic nature of the Newtonian paradigm. All phenomena have to be traced back to some discrete tangible entity that can be defined in isolation and described in terms of a set of properties. Within this paradigm, when all is said and done, society can be nothing more than all of its constituent individuals.5 This paradigm of methodological individualism then gives us a whole approach to studying social phenomena, one that is focused on the properties of the individuals and their linear interactions. So in using this approach, we are going to want to amass data about the properties of the individuals, like a national census where you fill in your age, gender, occupation and so on. Once we have all of this data, we then look for linear interactions between the variables using correlation analysis, a statistical technique that can show whether and how strongly pairs of variables are related. For example, we might ask if there is a linear correlation between an individual’s level of education and their income. We would then collect the data about individuals and do a scatter plot to see how closely the values of these properties move together.
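The correlation analysis described above can be sketched in a few lines of code. This is a minimal illustration of the methodological-individualist approach: measuring how strongly two individual-level properties move together. The education and income figures are made up for illustration, not real data.

```python
# Pearson correlation between two individual-level variables -- a sketch of
# the correlation analysis described in the text. The data below is invented
# purely for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey data: years of education vs annual income (thousands).
years_of_education = [10, 12, 12, 14, 16, 16, 18, 20]
annual_income_k = [25, 30, 32, 40, 48, 52, 60, 75]

r = pearson(years_of_education, annual_income_k)
print(f"correlation: {r:.2f}")  # a value near +1 indicates a strong linear relation
```

A value of r near +1 or -1 indicates a strong linear relation between the pair of variables; a value near 0 indicates no linear relation, which is exactly the limitation the rest of this section discusses.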
This approach can describe simple linear interactions, the interaction between two, three or four variables. It works well on the micro level, and this was the primary focus of science before the 1800s, when we dealt with things like the relations between temperature and pressure, population and time, production and trade. During the 1800s scientists developed methods for dealing with macro systems composed of many parts by using statistical methods and probability theory, with most of this happening within the domain of statistical mechanics, where they were trying to model such phenomena as a gas in a chamber with billions of atoms. Phenomena of this kind are sometimes called disorganized complexity.6 In such cases, we are dealing with systems composed of many disorganized parts, that is to say, a large set of random variables; the variables have to be independent and identically distributed (i.i.d.). If each random variable has the same probability distribution as the others and all are mutually independent, then these statistical methods will work.7 These assumptions only hold within linear systems, but by imposing them we can say things about the macro system without actually getting our hands dirty and looking at what is really going on inside. We can say that it will follow the law of large numbers and the central limit theorem. We can use mean field theory and make estimations, talk about the average normal person and so on.
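The disorganized-complexity idea above can be demonstrated directly: sums of many i.i.d. random variables become predictable at the macro level even though each part is random. Here is a minimal sketch using uniform random draws (an arbitrary choice of distribution for illustration).

```python
# Sketch of "disorganized complexity": macro-level regularity emerging from
# many independent, identically distributed (i.i.d.) random variables.
import random
import statistics

random.seed(42)

def sample_mean(n):
    """Mean of n i.i.d. uniform(0, 1) draws; the true mean is 0.5."""
    return sum(random.random() for _ in range(n)) / n

# Law of large numbers: the sample mean converges on the true mean 0.5.
for n in (10, 1_000, 100_000):
    print(n, round(sample_mean(n), 3))

# Central limit theorem: the spread of the sample mean shrinks like 1/sqrt(n),
# so the macro average becomes ever more tightly determined.
means_small = [sample_mean(10) for _ in range(2_000)]
means_large = [sample_mean(1_000) for _ in range(2_000)]
print(statistics.stdev(means_small) > statistics.stdev(means_large))
```

The point is exactly the one made in the text: under the i.i.d. assumption we can make strong statements about the macro system without examining its internal interactions; drop that assumption and these guarantees disappear.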
Suffice it to say, linear systems theory works well for simple linear systems, that is to say, systems that have a finite number of independent homogeneous elements interacting in a well-defined fashion with a relatively low level of connectivity. But this is often not what we see when dealing with social phenomena. Many social phenomena, such as whole economies, social institutions, cultures, and human psychology to name just a few, are fundamentally complex in nature. By complex, we mean that they consist of many autonomous, diverse components that are highly interconnected and interdependent. In these complex systems, the scientific underpinnings of our traditional formal approaches begin to break down. This leaves the social sciences somewhat divided: do we go on using formal methods whose assumptions are flawed when applied to social systems, or do we abandon formal methods altogether? For example, we can see this divide between economics and sociology. Standard economics has fully embraced linear systems theory, giving it quite powerful formal mathematical models; but in order to do that, it has to package up quite subtle and complex social phenomena into a relatively simple set of assumptions, leaving it subject to continuous criticism of its foundational assumptions. Much of sociology and the other social sciences, meanwhile, feels this approach is throwing the baby out with the bathwater, and continues to pursue its enquiry without the support of any real coherent formal system, which leaves certain doubts surrounding its status as science, as formal languages are an integral part of the whole enterprise of science.
And this is giving us what is called economic imperialism, where economics, the only social science that has a formal basis, increasingly dominates the others.8 Complexity theory is fundamentally a set of formal models, so we will just make a quick side note about formal methods before moving on. Formal languages are what make a scientific domain coherent and robust; as the scientist Ernest Rutherford once said, “All science is either physics or stamp collecting.” This is clearly a very arrogant statement, but there is some truth to it. Physics is by far the most robust and advanced domain of science, largely because it is directly supported by the sophisticated formal language of standard mathematics. The higher mathematics used in fundamental physics is not about the x’s and y’s that you learnt in algebra at school. It is about fundamental and powerful concepts that describe patterns of organization in terms of symmetries, transformations and invariance. It is these very abstract and powerful concepts captured within the language of mathematics that give physics the tools to tackle very difficult phenomena in a coherent fashion. The social sciences often lack these abstract formal methods that are powerful tools for solving difficult questions. A formal language is what gives a scientific domain the capacity to speak with one voice. Without the support of a formal language, you end up with many different subdomains speaking many different languages without any capacity to relate them. And when someone comes looking for an answer, you end up giving them a hundred different models.
Over the past few decades, we have seen the formation of the beginnings of a formal language for modeling the complex systems that social scientists study without resorting to reductionist methods; it is called complexity theory. Complexity theory is based upon very abstract formal mathematical models, but probably not the kind you are used to, and we should be clear that although a lot of complexity theory originates in mathematics and physics, it is not another excuse for trying to reduce social life to little particles of matter that get moved around en masse by forces; it starts with a recognition that these reductionist methods have their limitations. So complexity theory starts with an alternative paradigm to that of analysis. This paradigm is really inherited from systems theory. Systems theory is based upon a process of reasoning called synthesis, which is the opposite of analysis and reductionism. This paradigm is referred to as holistic, meaning that it is characterized by the belief that the parts of something are intimately interconnected and explicable only by reference to the whole. Synthesis means the combination of components or elements to form a connected whole. It is a process of reasoning that describes an entity through the context of its relations and functioning within the whole system that it is a part of. Thus, synthesis focuses on the relations between the elements, i.e. the way those elements are put together or arranged into a functioning whole. Within this paradigm, we are trying to identify the complex of relations within which an entity is embedded, its place and function within the whole; within systems thinking, this context is considered the primary frame of reference for describing something. We are then not particularly interested in breaking things down and talking about the properties of the parts, but are more interested in these interactions and what emerges out of them.
Paradigms like this are always quite abstract, so let’s take a quick example. Let’s say we are trying to understand the origins of the First World War. From an analytical perspective, we would talk about how Archduke Ferdinand was assassinated in Sarajevo and how this caused a reaction from Russia, which caused another reaction from Germany, which in turn caused England to react, and so on. In this paradigm, we would talk about the properties of the parts and the cause-and-effect interactions. From a systems perspective, we would focus on quite the opposite. We would be looking at the whole context, both in space and as a process in time, the nexus of relations out of which this phenomenon emerged. We might then talk about how, through industrialization and nationalization, the international political environment within prewar Europe self-organized into a critical state, and it was out of this whole context that we got the emergence of the First World War. The assassination did not then cause the war; nothing directly caused the war. It was out of the nonlinear interactions of many different factors that we got a critical state of the system, and out of that critical state we got this emergent phenomenon.
So this gives us some insight into this alternative paradigm. But how does this actually translate into models that we can use? Complexity theory represents a combination of a number of different modeling frameworks that have developed in different areas in order to deal with complexity. All of them have in common a focus on the interactions between parts and how these interactions give rise to emergent phenomena on the macro level. Agent-based modelling is one good example of this. Agent-based models are a class of computational model for simulating the actions and interactions of autonomous agents in order to try and model their effect on the system as a whole. As an example, we could think about trying to model the spreading of some virus within a population. We have a traditional equation-based model called SIR which will describe this process in a top-down fashion, but we can also describe this with agent models, where we ascribe simple rules to the agents and then run the program to see what aggregate phenomena emerge from the bottom up.9 Another major modelling framework within complexity theory is network theory, which focuses on the connections between actors and how the structure of those connections affects the actors and the system as a whole. Network theory gives us a formal language to model such things as power and influence within social systems. By looking at the structure of connections that surround an individual, network theory gives us a language for talking about how things spread through a network. Nonlinear systems theory is another major modelling framework that helps us talk about the nonadditive interactions between agents in space and over time: how, through these nonlinear interactions of synergies or interference, we get the emergence of macro-level nonequilibrium phenomena that make the whole more or less than the sum of its parts.
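The agent-based alternative to the equation-based SIR model can be sketched very compactly. The population size, contact rate and probabilities below are illustrative assumptions, not a calibrated model; the point is only that each agent follows simple local rules and the epidemic curve emerges bottom-up.

```python
# Minimal agent-based sketch of virus spread -- the agent-level counterpart
# to the equation-based SIR model. All parameters are illustrative guesses.
import random

random.seed(1)

N = 500           # population size (assumed)
CONTACTS = 5      # random contacts per infected agent per step (assumed)
P_INFECT = 0.05   # infection probability per contact (assumed)
P_RECOVER = 0.1   # recovery probability per step (assumed)

# Each agent is Susceptible, Infected or Recovered; one agent starts infected.
state = ["S"] * N
state[0] = "I"

for step in range(100):
    new_state = state[:]
    for i, s in enumerate(state):
        if s == "I":
            if random.random() < P_RECOVER:
                new_state[i] = "R"           # simple local recovery rule
            for _ in range(CONTACTS):
                j = random.randrange(N)      # random mixing: pick a contact
                if state[j] == "S" and random.random() < P_INFECT:
                    new_state[j] = "I"       # simple local infection rule
    state = new_state

print({s: state.count(s) for s in "SIR"})
```

No agent knows anything about the epidemic curve, yet the familiar macro-level rise and fall of infections emerges from their interactions; swapping the random mixing for a network of contacts is where this agent-based approach meets the network theory discussed above.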
This language of feedback loops and chaos helps us talk about nonequilibrium processes of change where the whole system moves rapidly in one direction. And this is just a quick sample of some of the topics we will be covering in this course.
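The sensitivity that chaos brings can be shown with the textbook example of a nonlinear feedback process, the logistic map x(t+1) = r * x(t) * (1 - x(t)); this example is standard in nonlinear systems theory rather than anything specific to the social models above.

```python
# Sketch of nonlinear feedback producing chaos: the logistic map at r = 4.
# Two trajectories with almost identical starting points quickly diverge,
# which is why long-range prediction breaks down in chaotic systems.
def logistic_trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # start differs by only one millionth

print(abs(a[-1] - b[-1]))  # the two trajectories have drifted apart
```

This is the formal content behind the earlier First World War example: in a system governed by nonlinear feedback, a tiny perturbation does not "cause" the outcome in any simple linear sense; it merely selects among trajectories the critical state already made possible.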
Finally, we will look at the new set of practical methods and tools that complexity science uses. Complexity science is a science fundamentally based on computation. The rise of computation within the social sciences is one of the quiet but major revolutions taking place in contemporary science. I will quote the social network scientist Duncan Watts in describing this phenomenon: “Up until about ten years ago it was impossible to observe these (social) interactions and it is very, very hard to do science when you can’t observe things, it is very hard to do science when you can’t measure the things you are interested in. And what has changed in the last ten years or so and why it is so exciting for people like me to be at the intersection of social and computation science is that the internet has really unveiled, has really made the invisible visible, has really given us the capacity to measure the interaction between even hundreds of millions of people in real time and over extended periods of time… it feels like for many of us in the social sciences, like we have stumbled upon our equivalent of the telescope, the device, the technology that makes the invisible visible and historically that has led to dramatic improvements in science.”10 To date, the primary sources of data for social scientists have been survey research, government statistics and one-off in-depth studies of particular people. The statistical databases of governments and the World Bank are full of information about individual people and their properties. They tell us almost nothing about the connections between those individuals, because up until very recently we did not have the computational capacity to manage and utilize large complex databases of that kind. But with the rise of the internet and particularly online social networks, this is all changing.
We are going from a limited amount of randomly selected historical data on individuals to a mass of real data about the connections between people, and this big data is set to revolutionize our insight into human interaction. The future of the social sciences has a lot to do with the new opportunities arising from these new computational capabilities and data sources. With these new opportunities, for the first time we have the capacity to model society not just in terms of individuals and simple statistical interactions, but in terms of context. We have, for the first time and in a rigorous way, the capacity to map and model context: the context of a choice, the context of a behaviour, and the complex interplay of a lot of different free parameters all at once. This has always been very difficult because of lack of data and computational intractability. These new tools of computation and new data sources are very important, but at the end of the day they are just tools; they will not, in themselves, solve difficult problems within social theory, age-old questions about the relationship between individual agency and social structure, about the exercise of social power, about the formation of the individual, or about the rise and fall of civilizations. But with these new computational methods and a new set of sophisticated theoretical tools from complexity theory, we can see what fresh insight we can gain on these perennial challenges within the social sciences.