Common system models and frameworks
In the previous chapters we introduced what a system is and went on to develop various concepts and definitions that underpin systems thinking. By now you should be familiar with these concepts and their definitions, and you should be able to think about your organization, your job, your programme of study and even your life from a systems perspective.
In this chapter we will build upon these concepts and definitions to introduce a number of systems models and frameworks that are commonly cited and used. Furthermore, the models and frameworks we have selected to include in this chapter take different complementary perspectives to systems which should further enhance your understanding and help you to start thinking in systems.
These models and frameworks can be used in combination, and together with the concepts we introduced in the previous chapters they equip us with the thinking tools and processes to help us understand and even predict the systems we are trying to manage or the systems within which we live, work and play.
REFLECTIVE EXERCISE
Think about a system you are interested in (e.g. the organization or department you work or worked in; your sports, art or drama club; your course or programme of study) and try to use all the frameworks and models we discussed in this chapter to gain a better understanding of the system by answering the following questions.
1. Is the system part of a larger system? What is the higher-level system that it is part of? What other systems exist within this higher-level system?
2. What are the parts of this system? Are they just simple parts or are these parts also systems (i.e. subsystems) in their own right?
3. Using Miller's model, think about what matter/energy or information is being processed through the system. Can you conceptualize the system in terms of input, throughput and output stages? Try to explain what happens at each stage.
4. Can you describe Beer's five systems within the system? Is this system a viable system?
5. What is the system's architecture? Was it always like this? Did it evolve over time?
6. How easy is it to predict the behaviour of the system? What are the primary sources of variation in each part of the system? And how do these variations interact to make the system's behaviour unpredictable?
7. Are there different participants in the system, potentially with different worldviews? What are these worldviews? How may they be affecting the behaviour of the system?
8. Are there any constraints in the system? Where are they? What can be done to alleviate these constraints?
If you are unable to think of a system, consider your commuting system, i.e. the system you use to travel from home to work, school or club. We would guess that the time it takes you to commute varies significantly from day to day. What are the reasons behind these differences? Try to use the questions above to think through the system and see if you gain any new insights.
TEAM EXERCISE
Viable systems – asking participants to map the five systems of the Viable Systems Model and their interactions for their organization is a very good exercise. To get the ball rolling, you can ask them to think about how the company scans its external environment, competitors, technology, suppliers, etc. (System 4) and how this information is used to make strategic (System 5) and operational management decisions (System 3). It is also useful to ask the participants to think about the algedonic signals within their organizations, particularly whether the organization is over- or under-sensitive to these signals. A healthy organization would have just the right reaction; an organization that is not so viable would either overreact to all signals or would be numb to them and not react at all.
Variation – the variation exercise that forms part of Deming's Theory of Profound Knowledge (page 74) is useful and works in both small and large groups. It helps to demonstrate the point about variation using a simple linear system. Having completed this exercise, the instructor can motivate a discussion about other kinds of variation we may observe in organizations, such as variation in knowledge, management style, people skills, etc., to enable the group to develop a more fundamental understanding of the concept.
Constraints – although a little old, we still recommend showing the movie The Goal, which enables participants to hear and see what is going on, thus aiding their comprehension of the concepts discussed in the Theory of Constraints.
Notes
1. According to Miller, energy and matter are the same. He defines matter as anything that has mass and occupies physical space, and he argues that mass and energy are equivalent as one can be converted into the other.
2. We discuss the concept of recursion in greater detail in the next section when we discuss the Viable Systems Model.
References
Beer, S. (1972) Brain of the Firm, Allen Lane, The Penguin Press, London; Herder and Herder, USA
Beer, S. (1979) The Heart of Enterprise, John Wiley, London and New York
Beer, S. (1985) Diagnosing the System for Organizations, John Wiley, London and New York
Deming, W.E. (2018) The New Economics for Industry, Government, Education, MIT Press
Goldratt, E.M. and Cox, J. (1984) The Goal: A process of ongoing improvement, Routledge
Goldratt, E.M. (1990) Theory of Constraints, North River Press, Croton-on-Hudson
Hitchins, D.K. (2003) Advanced Systems Thinking, Engineering, and Management, Artech House
Hitchins, D.K. (2008) Systems Engineering: A 21st century systems methodology, John Wiley & Sons
Holliday, M and Jones, M (2015) Living systems theory and the practice of stewarding change,
LEARNING OUTCOMES
Understand:
- the characteristics of living systems
- the control and communication within viable systems
- the importance of systems architecture
- how variation and worldviews may interact with and shape the behaviour of systems
- how constraints govern the performance of a system
Be able to:
- conceptualize complex systems and explain their behaviours in terms of their control and communication systems, architecture, sources of variation, worldviews and constraints
- identify basic improvement opportunities in these systems
4.1. Miller's Living Systems Theory
James Grier Miller (1916–2002) was an American psychologist, psychiatrist and academic, and a forerunner of systems thinking. He established and led the Mental Health Research Institute at the University of Michigan in the United States. He is also the creator of Living Systems Theory, which was intended to be a general theory about the existence of all living systems (Miller, 1978).
Miller first developed his theory of living systems in 1978 by focusing on concrete (in other words tangible) systems and then extended his theories to include conceptual and abstract systems. He organized living systems into eight nested hierarchical levels, each lower level a subsystem within the higher-level system as depicted in Figure 4.1.
According to Miller’s theory, cells represent fundamental building blocks of life, which organize themselves into organs, which in turn organize themselves into organisms. Organisms organize themselves into groups and in turn groups organize themselves into organizations. Communities include both individual organisms and groups, with different functions within the community. Societies are associations of communities and supranational systems are organizations of societies.
Core to his theory is that all nature is a continuum, and that the endless complexity of life can be organized into patterns that repeat themselves at each level of system. All eight levels of systems are considered open self-organizing systems that may be conceptualized using four dimensions—matter/energy, information, space and time—because the living systems exist in a space-time continuum and they are made of matter and energy organized by information.
Miller’s theory suggests that all eight levels of systems sustain themselves through 20 subsystems that recur at each level. Some of these 20 subsystems process both matter/energy and information; others process matter/energy or information. These subsystems, which are arranged by input-throughput-output processes, are defined in Table 4.1.
Miller, having focused on concrete systems when developing his theory, also distinguishes between concrete, abstract and conceptual systems where:
- A concrete system is a system that exists in reality and is composed of tangible (physical) objects such as materials, information, plants and so on. In other words, concrete systems are hard systems.
- A conceptual system is a system that exists in reality and is composed of intangible (non-physical) objects such as ideas, knowledge and feelings. Most existing soft systems are conceptual systems, even though they may have some concrete elements, as they deal with people's feelings, knowledge and thoughts.
- An abstract system is a system composed of tangible and/or intangible objects, but it does not exist in reality; it exists only in thought, as an idea. For example, the Starship Enterprise and the race of Klingons from the Star Trek programmes do not exist in reality; they exist only in imagination.
Time is also a fundamental dimension of Miller’s theory, which is captured in the definition of a living system. A living system, by itself, integrates divergent parts into a convergent whole in dynamic relationships internally and externally in an ongoing moment-by-moment process of self-organization and self-creation (Holliday and Jones, 2015), which captures several systems thinking concepts in the context of Miller’s Living Systems Theory. These are:
- Systems consist of several parts or subsystems that are divergent. Left alone, these divergent parts would transition into a state of chaos, i.e. entropy.
- A living system integrates divergent parts into a convergent whole. Thus, a control system exists creating this convergence.
- Internal and external relationships or connections are dynamic. That is, the relationships change over time.
- There is a moment-by-moment process of self-organization and self-creation. That is, over time there is a continuous process of self-organization and self-creation.
In this context, Miller’s 20 subsystems summarized in Table 4.1 are his attempt at generalizing the subsystems and functions that enable a living system to integrate its divergent parts into a convergent whole through a continuous process of self-organization and self-creation.
In short, we can surmise that with the 20 subsystems that sustain living systems, Miller is identifying the subsystems that control the behaviour of living systems, preventing entropy and promoting homeostasis.
Table 4.1 Miller's 20 subsystems

Stage      | Subsystem          | Function                                                          | Processes
-----------|--------------------|-------------------------------------------------------------------|--------------
Input      | Input Transducer   | Brings information into the system                                | Information
Input      | Ingestor           | Brings matter/energy into the system                              | Matter/Energy
Throughput | Internal Transducer| Receives and converts information brought into the system         | Information
Throughput | Channel and Net    | Distributes information throughout the system                     | Information
Throughput | Decoder            | Prepares information for use by the system                        | Information
Throughput | Timer              | Maintains the appropriate spatial/temporal relationships          | Information
Throughput | Associator         | Maintains appropriate relationships between information sources   | Information
Throughput | Memory             | Stores information for system use                                 | Information
Throughput | Decider            | Makes decisions about various system operations                   | Information
Throughput | Encoder            | Converts information to needed and usable forms                   | Information
Throughput | Reproducer         | Uses information to carry out reproductive functions              | Information
Throughput | Boundary           | Uses information to protect the system from outside influences    | Information
Throughput | Distributor        | Distributes matter/energy for use throughout the system           | Matter/Energy
Throughput | Converter          | Converts matter/energy into a suitable form for use by the system | Matter/Energy
Throughput | Producer           | Synthesizes matter/energy for use within the system               | Matter/Energy
Throughput | M/E Storage        | Stores matter/energy used by the system                           | Matter/Energy
Throughput | Motor              | Handles mobility of various parts of the system                   | Matter/Energy
Throughput | Supporter          | Provides physical support to the system                           | Matter/Energy
Output     | Output Transducer  | Handles information output of the system                          | Information
Output     | Extruder           | Handles matter/energy discharged by the system                    | Matter/Energy
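To make the recurrence of these subsystems concrete, the sketch below encodes Table 4.1 as a small Python data structure (an illustrative encoding of ours, not something Miller himself provides) and checks that the three stages account for all 20 subsystems:

```python
# Table 4.1 as nested dictionaries: stage -> subsystem -> what it processes.
SUBSYSTEMS = {
    "Input": {
        "Input Transducer": "Information",
        "Ingestor": "Matter/Energy",
    },
    "Throughput": {
        "Internal Transducer": "Information",
        "Channel and Net": "Information",
        "Decoder": "Information",
        "Timer": "Information",
        "Associator": "Information",
        "Memory": "Information",
        "Decider": "Information",
        "Encoder": "Information",
        "Reproducer": "Information",
        "Boundary": "Information",
        "Distributor": "Matter/Energy",
        "Converter": "Matter/Energy",
        "Producer": "Matter/Energy",
        "M/E Storage": "Matter/Energy",
        "Motor": "Matter/Energy",
        "Supporter": "Matter/Energy",
    },
    "Output": {
        "Output Transducer": "Information",
        "Extruder": "Matter/Energy",
    },
}

# Miller's claim: these 20 subsystems recur at every one of the eight levels.
total = sum(len(stage) for stage in SUBSYSTEMS.values())
print(total)  # 20
```

Encoding the table this way makes it easy to interrogate: for example, counting the "Processes" column shows that twelve subsystems process information and eight process matter/energy.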
4.2. Beer’s Viable Systems Model
Anthony Stafford Beer (1926–2002) was a British consultant and academic best known for his work in management cybernetics. Cybernetics is the study of control and communication systems in animals and machines. It is concerned with understanding complex system behaviours such as learning, cognition and change. Management cybernetics is the application of cybernetics to management and organizations.
Stafford Beer’s definition of a system is consistent with our earlier definition, that a system is a set of connected things or parts forming a complex whole. However, he also introduced the concept of viability to systems thinking (Beer, 1972, 1979, and 1985). So, what is a viable system?
Stafford Beer defines a viable system as a system that can self-produce. So, you may ask what is self-production? This concept is best explained through an example.
Example 1 – A Human Being
Think about how old you are. If you are reading this book, your answer may be anywhere between 20 and 90 years, possibly more. Well, you would all be wrong! You may have lived in this world for 20 to 90 years, but throughout this period your cells have been replacing themselves at very regular intervals. In fact, the average age of a human cell is around eight years. So, we could argue that over a period of about 8–12 years, most if not all of your cells have renewed themselves. But your friends, family and colleagues still recognize you for who you are, even though every cell in your body may be different.
Example 2 – IBM
International Business Machines (IBM) started life in 1911 producing punch-card tabulators and other office products. It moved on to producing electric typewriters (1933), then computers (1952), and later entered into the personal computers market (1981). Then it divested from manufacturing to work on supercomputers, computer services, and parts (2005). Throughout this 94-year period we can imagine that most parts of IBM (buildings, equipment, people) have been renewed. They were replaced by parts that were more relevant for the context within which the company was operating. In short, although every part of IBM has changed over the years, we still recognize it for what it is.
In the above two examples we have illustrated the concept of self-production. In other words, self-production is the ability of a system to renew its parts, in some cases with improved, enhanced parts, appropriately adapting to the environment within which it operates. In the wider systems thinking literature, self-production is also referred to as self-creation or autopoiesis. The term autopoiesis comes from the Greek words auto (self) and poiesis (creation), meaning self-creation. It was borrowed from the field of cellular biology, where it refers to the capacity of living cells to reproduce and renew themselves. In the broader context of systems thinking, it describes the ability of systems to reproduce certain behaviours by repeating their own operations.

At this point, it is important to emphasize that self-production is not the same as self-organization, which we discussed in Chapter 2. Whilst self-organization is about a system's ability to organize and manage itself in the absence of a central or external organizing/managing authority (e.g. a hierarchy), self-production is about the system's ability to renew its components. In other words, not all self-organizing systems can self-produce, and similarly not all self-producing systems are self-organizing. This is reflected in the IBM example above: whilst IBM is not a self-organizing system, as it has a management hierarchy, it has proven itself to be a self-producing system.
It was Stafford Beer's life’s ambition to understand what makes a system viable, and as a result, he came up with the Viable Systems Model (VSM) illustrated in Figure 4.2. VSM, as illustrated in this figure, might look complicated, but it is quite simple once it is explained. According to Beer, any system comprises five subsystems, which we will call systems for simplicity.
By definition, a system comprises a number of parts. In VSM, each part is known as a System 1. A system may have as many parts, or System 1s, as necessary. In Figure 4.2 we have shown three System 1s. At this stage, do not worry about what is inside these System 1s, or parts, as we will come to them later.
System 2 provides the information and communication mechanisms that enable the different parts, that is the System 1s, to coordinate amongst each other. System 2 also enables System 3 to monitor and coordinate the activities within the System 1s, providing a resource-sharing function for the System 1s.
Systems 3, 4, and 5 together, shown on the left in Figure 4.2, represent the management function for the system. Within this management function, System 3 represents the controls that are put into place to establish the rules, resources, roles, and responsibilities of System 1s. System 3 also provides an interface with Systems 4 and 5.
System 4 is responsible for looking outward to the environment, monitoring what is happening around the system, providing essential information on how the system needs to change, and adapt to remain viable.
System 5 is responsible for setting direction and policy within the whole system, balancing demands from different parts of the system and directing the system as a whole.
It is worth spending a little more time discussing the differences between the functions of Systems 2 and 3. Whilst both provide some form of coordination between the parts of the system, that is the System 1s, System 3 works more as a planning and scheduling function, whereas System 2 provides a more immediate, real-time coordination function. For example, imagine you are playing tennis and the ball is about to go to the left-hand side of the court. You do not stop and say: ah, the ball is about to go to the left; I need to tell my legs to go that way and my right arm with the racket to turn and lift up this way. Instead, you react immediately, in real time. It is the purpose of System 2 to coordinate amongst the System 1s to enable this real-time reaction. In this scenario, System 2 achieves such a response by enabling the individual management functions of the System 1s to coordinate amongst each other without the need for instructions from System 3.
At this point, it would be appropriate to introduce the concept of recursion, which we have already intimated in previous chapters when we discussed systems of systems and systems within systems. In Figure 4.2, if you examine what is inside each System 1 you will see the recursion of the VSM described above. Further to this, if you investigate the System 1s in the next level, you will see the same VSM structure repeating once again. According to VSM, all viable systems are recursive. In other words, the structure (Systems 1, 2, 3, 4 and 5 described above) will repeat in subsystems, sub-subsystems and so on.
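The recursion described above can be sketched in code. The following Python fragment is purely illustrative (the class and field names are ours, not Beer's): each viable system contains System 1 parts that are themselves viable systems with the same five-system structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViableSystem:
    """An illustrative encoding of VSM recursion (not a canonical model)."""
    name: str
    # System 1s: the operational parts, each itself a viable system.
    system1s: List["ViableSystem"] = field(default_factory=list)
    # Systems 2-5 reduced to simple labels for the sketch.
    system2: str = "coordination"
    system3: str = "operational management"
    system4: str = "environment scanning"
    system5: str = "policy and identity"

def recursion_depth(vs: ViableSystem) -> int:
    """Count how many levels the VSM structure repeats, from this system down."""
    if not vs.system1s:
        return 1
    return 1 + max(recursion_depth(s1) for s1 in vs.system1s)

# A hypothetical company whose business units are System 1s, each containing
# process-level System 1s of its own (echoing the manufacturing example below).
company = ViableSystem("PumpCo", [
    ViableSystem("Custom Solutions",
                 [ViableSystem("Get Order"), ViableSystem("Fulfill Order")]),
    ViableSystem("Standard Pumps",
                 [ViableSystem("Get Order"), ViableSystem("Fulfill Order")]),
])
print(recursion_depth(company))  # 3
```

Walking such a structure top-down mirrors how a VSM diagnosis proceeds: the same questions about Systems 2–5 are asked again at every level of recursion.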
In describing this sort of control structure, what Beer does is describe the cybernetic control system, which exists within many systems. You can apply this to an amoeba, a shoal of fish, an organization or even the governance structure of a nation. Let us look at two examples:
Example 1: A Shoal of Fish
A shoal comprises thousands of fish, with each fish representing a system in its own right (System 1), but the whole shoal operates to some common, genetically coded rules, such as staying within six inches of each other (System 5).
A simple rule makes this collection of fish behave as a shoal. This shoal of fish has many eyes that enable it to monitor what is happening in the outside world (System 4). When one fish detects a threat, such as a dolphin coming towards them, it responds immediately (System 3) by breaking away from the shoal, which causes other fish to also break away because of the six-inch rule, even though they may have not detected the threat (System 2). So, the shoal splits in two, trying to avoid the dolphin. Once the threat is gone, the shoal comes back together again because of the six-inch rule.
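The shoal's behaviour can be sketched as a simple decision rule applied by every fish. The snippet below is a toy illustration (the six-inch figure comes from the example; the threat radius and function names are our assumptions): threat avoidance stands in for System 4 detection, and the cohesion rule for the System 5 policy that System 2 propagates through the shoal.

```python
import math

SHOAL_DISTANCE = 6.0  # the genetically coded "six-inch rule" (System 5)

def next_move(fish, neighbours, threat=None, threat_radius=12.0):
    """Decide one fish's reflex move; positions are (x, y), result a unit vector.

    No central controller issues instructions: each fish applies the same
    local rule, which is what lets the shoal split and re-form (System 2).
    """
    # System 4: a fish that detects a nearby threat breaks away from it.
    if threat is not None:
        dx, dy = fish[0] - threat[0], fish[1] - threat[1]
        dist = math.hypot(dx, dy)
        if 0 < dist < threat_radius:
            return (dx / dist, dy / dist)
    # System 5 rule: stay within SHOAL_DISTANCE of the nearest neighbour.
    nearest = min(neighbours,
                  key=lambda n: math.hypot(fish[0] - n[0], fish[1] - n[1]))
    dx, dy = nearest[0] - fish[0], nearest[1] - fish[1]
    dist = math.hypot(dx, dy)
    if dist > SHOAL_DISTANCE:
        return (dx / dist, dy / dist)   # close the gap
    return (0.0, 0.0)                   # already cohesive: hold position
```

Because every fish applies the same local rule, the splitting and re-forming of the shoal emerges without any fish ever being told what to do.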
Example 2: A Manufacturing Organization
Let us assume that a manufacturing organization has two business units (System 1s). One business unit designs, manufactures, and sells custom-made pumping solutions, and the other one manufactures and sells standard industrial pumps that you can buy from a catalogue. The custom-made pumping solution business competes primarily on the company’s reputation for engineering excellence and designing reliable pumping solutions that meet customers' expectations. The standard pumps business competes mainly on price. Essentially, these are two different businesses founded on the same product, i.e., a pump, that are serving two completely different markets on a different competitive basis.
If we examine each business unit, we will see that both of them have the same processes, such as Develop Product, Get Order, Fulfill Order, Support Product. These are System 1s at the next level of recursion. We will also see that, although each business unit may comprise similar subsystems or processes, the functions of these subsystems will be significantly different. In one business, the focus of the Get Order process (System 1) is to sell a standard product at a competitive price, while in the other business unit the focus of the Get Order process is to convince the customer that they offer the best engineering capability and solution.
Naturally, at the business unit level, these two different businesses compete differently in their respective markets (System 4); they would need to have different strategies and business plans (System 5), different management structures and roles (System 3) and different coordination mechanisms that would coordinate the processes or subsystems within each business unit (System 2). Of course, higher-level Systems 3, 4, and 5 would also exist to ensure cohesion and optimization of the overall business.
Having defined the five systems that underpin the capability of a system to self-produce and renew itself, Beer takes his model further and includes a system of performance measures. He characterizes performance as:
- Actuality: what the system is able to do now with existing resources and under existing constraints.
- Capability: what the system is able to do now with existing resources and under existing constraints, if it really worked at it.
- Potentiality: what the system should be achieving if it developed its resources and removed its constraints.
An example of this could be that the manufacturing company is delivering 65 per cent of its orders on time (Actuality), but based on past performance, it knows that with current resources and constraints, it can deliver 80 per cent of its orders on time (Capability). However, if we remove all the constraints and develop all the resources, its potential is 100 per cent on-time delivery (Potentiality).
In this context, Systems 4 and 5 are jointly responsible for realizing the potentiality of the system. To enable this, Beer offers three further performance measures based on the above concepts, which are somewhat different but complementary to our discussions on systems performance in the previous chapter.
- Productivity: the ratio of actuality to capability.
- Latency: the ratio of capability to potentiality.
- Performance: the ratio of actuality to potentiality.
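Using the on-time delivery example above, Beer's three ratios can be computed directly. The function below is a minimal sketch of these definitions (the function name is ours; the numbers come from the manufacturing example in the text):

```python
def vsm_ratios(actuality: float, capability: float, potentiality: float):
    """Beer's performance ratios; each lies between 0 and 1."""
    productivity = actuality / capability     # how well current capability is used
    latency = capability / potentiality       # how much latent capacity is unlocked
    performance = actuality / potentiality    # overall: productivity x latency
    return productivity, latency, performance

# Manufacturing example: 65% on-time now, 80% achievable with existing
# resources and constraints, 100% if constraints were removed.
productivity, latency, performance = vsm_ratios(0.65, 0.80, 1.00)
print(round(productivity, 4), round(latency, 4), round(performance, 4))
# 0.8125 0.8 0.65
```

Note that performance is the product of the other two ratios, so a system can score well on productivity yet poorly overall if its latency is low.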
A key feature of the VSM and the performance measures discussed so far is that the performance measures enable algedonic signals to be communicated throughout the system, interrupting conscious thought to provoke reflex action. In cybernetics, an algedonic signal is a pre-emptive message concerning pleasure or pain that provides an important survival mechanism to a system by alerting it to an imminent threat. For example, when crossing a road, we might see a car coming towards us, but the car is far enough away that our conscious thinking mechanism makes us walk faster to get across the road safely. This is conscious thought. In contrast, an algedonic signal is when we accidentally put our hand near a very hot surface and our reflex reaction kicks in: we immediately pull back our hand without conscious thought. In living systems, an algedonic signal is an essential component that enables the living system to respond to immediate threats, in effect avoiding disastrous consequences.
In fact, our earlier example of a tennis player’s reflex reaction is also a response to an algedonic signal. In sports, training helps to develop the person’s ability to recognize threats and opportunities and gain an ability to respond through reflex reaction rather than conscious thought. In organizations, the algedonic signal works the same way. When someone somewhere does something extraordinarily badly, an algedonic signal is sent, which may initiate immediate action with all relevant Systems enabled by System 2 coordinating a response in a timely manner. In common terminology we may recognize this as a corrective action. However, the signal is also sent to management (Systems 3, 4, and 5), which may, after conscious thought, take a preventative action so that the same pain does not happen again.
From a business and management perspective, a good example of an algedonic signal is the stop-the-line or stop-the-process signal, which is akin to pulling the emergency handle to stop a bus or train. A stop-the-line/process signal is commonly used to enable people working in the production line or a business process to stop the line or process when they detect a significant problem. For example, in a manufacturing line, if a faulty part is detected but the manufacturing process continues using these faulty parts, all the products with the faulty parts would have to be reworked or thrown to waste. Thus, stopping the line/process and dealing with the problem immediately saves a lot of pain down the line.
Based on the discussion so far, we can surmise that Beer, like Miller, in developing the Viable Systems Model, identifies the subsystem, controls, and signals that are essential for systems to respond to their external environment, govern their performance, see opportunities and threats, and respond to these in a timely manner. As a result, viable systems self-produce/self-create themselves, changing and adapting to new emerging conditions within their operating environment. In short, Beer provides insight into the mechanisms that govern the behaviour of systems that can sustain themselves, preventing entropy and promoting homeostasis.
4.3. Hitchins' Systems Architecture
Derek Hitchins (1935–) is a British systems engineer and a professor of engineering management. Having had a career in the Royal Air Force followed by systems engineering and senior management roles in various organizations, he has worked all his life with complex systems. His academic work focuses on developing a better understanding of the architecture that underpins these complex systems (Hitchins, 2003 and 2008). In this context, systems architecture is defined as the pattern made by all the subsystems and their interconnections to support the function, purpose and performance of the system. Hitchins' Systems Architecture, although developed from the perspective of technical or engineered systems, is equally relevant to living and soft systems such as organizations, as will be discussed in this section.
An architecture is not the same as the structure of the five systems in the Viable Systems Model or the 20 subsystems identified in Miller's Living Systems Theory. Rather, the architecture assumed in this framework relates to how the various parts of the system are organized in relation to each other. Unlike Miller's theory and Beer's VSM, Hitchins' Systems Architecture is not universal; it is specific to the purpose and function of a system. For example, if we look at different living systems such as an amoeba, a worm, a snake, a cheetah and a human, the architecture of each of them is quite different. An amoeba is a single-cell animal, so its architecture is very simple: it is a single cell. A worm is a rather more complex system; its architecture is a tube with its vital parts (organs) organized within this tube. A snake, although similar to a worm, is bigger, so it needs a skeleton to support and organize its parts. A cheetah's architecture is different again; it has parts (arms and legs) that a snake does not have. In terms of parts, there is little difference between a cheetah and a human, yet even if we assume that the purpose of both is to survive and reproduce, their functions are quite different. A cheetah's architecture is optimized to give it speed in short pursuits, whereas a human's architecture is more upright, enabling humans to walk long distances more efficiently.
At this point it would be worth reflecting on the whisky company case study presented at the end of the previous chapter, where we concluded that the whisky company comprises two systems: the high-value products business and the low-value products business. You may also recall that at the end of the case study we asked a reflective question about the organizational structure. In fact, this question is really about what the appropriate architecture would be to support the purpose and function of these two systems. Do we create two separate architectures, or do we create a single architecture that would support both systems? In practice there is no single answer to this question. Different companies have addressed this same issue in different ways. The important point here is to understand that there are different systems that we are trying to manage, and that they may require different architectures to support their purpose and function.
Further to this, if we consider the evolution of humans as systems, we will observe that their architecture has evolved over time, and continues to do so, to support changes in the function of the system. Consequently, we can surmise that as a system's purpose or function changes, its architecture needs to evolve to support the new purpose or function.
In this context, the model illustrated defines how a system’s architecture supports its purpose and function and ensures its viability through maintenance and evolution (self-production/creation).
According to Hitchins’ model, storing transient information and the knowledge of information location strongly contributes toward acquisition and sharing of knowledge regarding the performance of the system and the fitness of the architecture that supports it. This in turn enables the architecture to adapt, evolve, and self-maintain, thus enabling the architecture to provide a framework for system cohesion. For example, different parts of a building and their functions are integrated into the whole building through civil engineering architecture. This framework in turn enables the system to reconfigure its assets in anticipation of internal and external threats, ensuring the availability of the system. For example, rooms in a building can be converted to serve a different function when the main purpose of the building needs to change.
Reconfiguring assets within the system enables groups within and outside the system to be linked so that synergy is achieved in coordinating and controlling these assets. These linkages serve to identify closely linked parts within the system so that they can be grouped or located together to ease communication and relationships, thus reducing system complexity. This in turn provides a resilient framework enabling the system to recover from any disruption while also supporting progressive development. It provides the foundation for external entities (systems or parts of systems) that intermittently connect to or disconnect from the system. For example, parts of the building might be temporarily sealed in the event of fire. This ultimately supports the system's mission or purpose.
From this discussion one can see that Hitchins' approach to understanding a system's architecture is quite different from Miller's Living Systems Theory or Beer's Viable Systems Model discussed in the previous sections. Apart from having a strong engineering and technical systems bias, Hitchins' main focus is on understanding the characteristics that enable a system to maintain an effective architecture, i.e., a structure that organizes the various parts of the system in relation to each other. Hitchins does not present us with a single unified model of an architecture; rather, he presents a model that explains how a system's architecture is maintained and evolves over time. In the previous paragraphs, when discussing the meaning of architecture, we have already given examples of living systems such as an amoeba, worm, snake, cheetah and so on. From these examples, it is evident that as living systems evolve so does their architecture, albeit over very long periods of time. Have a look at the biological evolution in Figure 4.4 and think about the architecture of various creatures, i.e., their different parts and how they are organized. You will see how their architecture has evolved over time.
The architecture of organizations also evolves over time, usually in response to adapting to changes in the operating environment. Some of these changes could be over long periods of time and some of them may be much faster as the organization responds to a disruptive event, such as the Covid-19 pandemic. For example, in response to the pandemic we have seen the shape of working changing in a very short period of time. Previously, for most organizations working from home was a luxury offered to a limited number of employees, and online meetings, although technically possible, were much less common. During the pandemic and the immediate aftermath (as during the writing of this book) we have seen a massive change in the attitude of employees and organizations. Many organizations and people found that they can be as effective, if not more so, working from home. As a result, a number of organizations downsized their office buildings and instead invested in systems and technologies that make their people more effective when working from home – for example Twitter, Square and Coinbase (Kelly, 2020). As one CEO of a technology company put it:
… since the pandemic we have vacated our building which we own and rented it out to another company. Today we are a virtual organization. All our people work from home, which saves them about two to three hours of travelling every day, we do not have to heat and maintain a building including a canteen, which saves us money, employees get more work done, they spend more time with their families... We still have meetings but people can meet anywhere convenient, it does not have to be an office building. So far it is working better than we could have imagined.
In essence, they have changed their architecture. Previously, parts of the system (people) were connected together by being in the same office building. Today they are still connected together but the mechanisms that support this connection are no longer physical buildings and offices, they are technologies that underpin the internet and modern communications such as email, shared workspaces and video conferencing. In short, the purpose and performance expectations of the system are the same, but the function (i.e., how the system operates) has changed, and thus the architecture has adapted to support the new function.
4.4 Deming's System of Profound Knowledge
William Edwards Deming (1900–1993) was an American engineer, statistician, academic and consultant. He is recognized as the father of modern quality management and is often quoted as the influential force behind the Japanese industrial revolution and their reputation for high-quality products.
During his work on quality management and continuous improvement he widely used statistics where he observed variations in the systems, which became a core concept that underpins his System of Profound Knowledge (Deming, 2018). Deming suggests that in order to understand and predict the behaviour of any system, and particularly a human activity system such as the organizations we all work in, we need to focus on four elements as illustrated in Figure 4.5, namely The System, Theories, Human Behaviour and Variation.
Consistent with the definition of a system, Deming's model also defines a system as a number of parts that are connected together and that interact over time, resulting in the emergent outcome or behaviour of the system. He conceptualizes each part of the system as a variable and suggests that the output or behaviour of each part is never consistent and is subject to at least some natural variations. He argues that these variations within each part of the system (represented by Xs in Figure 4.5) interact with each other either directly or indirectly over time (t in Figure 4.5), thus making the outcome or behaviour of the system (represented by Y in Figure 4.5) difficult to predict.
In essence, with this approach Deming captures the complexity of everyday life. For example, think about your commute to work or school and how long it takes. Let us say your commute to the office is about 15 minutes on a good day, but sometimes it takes 45 minutes. If you leave home at 8:30 am the time it would take you to commute to work is a lot less predictable because of the rush hour and it can sometimes take you one hour to get to your office. But if you leave at 9:30 am you can usually get to the office within 15 to 20 minutes. So over time, interactions between the variables make the outcome more or less predictable. In this system the commuters and the traffic control system (e.g. traffic lights) represent different parts of the system that interact over time.
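The commute example can be sketched as a small Monte Carlo simulation. The figures below (a 15-minute base journey plus an exponentially distributed congestion delay) are purely illustrative assumptions, not data from the text; they simply show how the same journey becomes far less predictable at one departure time than another:

```python
import random

random.seed(42)

def commute_minutes(depart_in_rush_hour, trials=10_000):
    """Simulate commute times; all parameters are purely illustrative."""
    times = []
    for _ in range(trials):
        base = 15  # minutes on a clear road
        if depart_in_rush_hour:
            # Congestion adds a long, highly variable delay (mean 20 minutes)
            delay = random.expovariate(1 / 20)
        else:
            # Off-peak delays are short (mean 3 minutes)
            delay = random.expovariate(1 / 3)
        times.append(base + delay)
    return times

for label, rush in [("8:30 am (rush hour)", True), ("9:30 am (off-peak)", False)]:
    t = commute_minutes(rush)
    mean = sum(t) / len(t)
    spread = (sum((x - mean) ** 2 for x in t) / len(t)) ** 0.5
    print(f"{label}: mean {mean:.0f} min, std dev {spread:.0f} min")
```

Running the sketch shows the rush-hour departure producing both a higher average and a much wider spread – the interaction of the departure-time variable with the traffic variables makes the outcome more or less predictable.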
Deming suggests that this complexity in systems can be overcome by creating a lens that integrates our understanding of:
the system and its parts
the variations inherent within each part of the system and their interactions
the theories or worldviews of different people about the system that ultimately shape human behaviour
So far in this book we have already devoted considerable time to discussing what a system is, how its parts are interconnected and so on. However, this is the first point where we have introduced the concept of variation. Although we discussed worldviews in Chapter 2, we have not delved into how different people’s theories or worldviews serve to shape their behaviour within the system. Below we will devote some space to expand on these points.
Variation
Variation is a well-known phenomenon in statistics as well as in quality and process management. In short, variation is defined as a change or difference in form, condition, position or amount. Examples of variation include:
If a machine produces 30 pieces per hour on average, then sometimes the machine will produce 25 pieces per hour, while other times it might produce 35 pieces per hour. These variations will average at 30 pieces per hour over time. For a given hour we can never be certain how many pieces the machine will produce, but we can be more confident that it will be somewhere between 25 and 35 pieces. Here the variation refers to the quantity produced over a given unit of time.
A light bulb's lifespan is measured in hours. Typically, a modern LED lightbulb will have a lifespan of about 50,000 hours as specified by a manufacturer. This is an average estimated lifespan of a lightbulb – some will last a lot longer and others a lot less. Here the variation refers to the time over which a product performs its function.
Statistically an average American male is 175.4 cm tall. This does not mean that the next American male to walk through the door will be this height. Most certainly he will be shorter or taller. Here variation refers to a measure of height.
In a workplace the people skills of the leaders are likely to be quite different. Even the leadership style and people skills of a single leader may vary depending on their familiarity with and experience of a particular situation. In short, variation is not just applicable to things we can measure objectively, as in the examples above. It can also be in softer elements of the organization, such as management style, knowledge, information, policies and so on. The reality is that variation in any of these factors will impact on the overall behaviour and performance of a system.
In general, variation has two causes: natural causes and assignable causes. In the first example of a machine producing parts, if everything is working normally, the machine will produce 30 pieces per hour with a minimum of 25 pieces per hour and maximum of 35 pieces per hour. This is a natural variation; in other words there is no assignable cause that would explain the variation. However, if the machine breaks down and it takes 30 minutes to fix it, and then there are only 14 pieces produced during that hour, the reason behind this variation is due to assignable causes, i.e., due to the machine breakdown. Similarly, for other examples assignable causes for variation could be a manufacturing fault reducing the lifespan of an LED light bulb to 1,000 hours, or a medical condition that would affect the height of a person.
Variation has a profound impact on systems as it impacts both stocks and flows in complex ways. To demonstrate this, let us consider a simple system perfectly designed and balanced, comprising five activities with each activity taking an average of 10 pieces of work per hour (Figure 4.6). This could be a manufacturing process in a factory, a tax returns handling process or a student registration process in a school. What would the average output of the entire system be?
TEAM EXERCISE: IMPACT OF VARIATION ON PROCESS PERFORMANCE
This is a team exercise to help participants to better understand the concept of variation in a simulation of a simple process. To conduct the exercise, you will need seven people. Five people will work for five work centres, one person will represent a customer, and one person will be an analyst. If you have more people, you can have observers.
Set up a production line with five work centres, as illustrated in Figure 4.6.
Use coins as work pieces.
Give each workstation a pair of dice.
Start with six work pieces in each stock location with the first work centre having an unlimited supply of coins.
The customer calls for product at regular intervals (say every 15 or 20 seconds).
On the call of the customer, each work centre:
Throws the pair of dice.
Picks up the number of work pieces indicated by the dice (between 2 and 12) from the left-hand-side stock location and passes them on to the right-hand-side stock location.
If there is an insufficient quantity of pieces in the left-hand-side stock, then the work centre will use whatever pieces are available.
Let the game run for about 20 calls.
During the game, the analyst records the number of work pieces received from work centre 5 for each call.
Each work centre should record the dice value and the actual number of pieces processed at each call.
At the end of the exercise discuss:
What does the overall output profile look like? If you ensure that for each call the output coins are piled separately, you will see the output as a histogram.
What do you think the average output from the system is? How did different parts of the system interact to deliver this behaviour?
Personal reflections
Having completed the exercise, reflect upon your own organization. Can you think of areas of the organization where variations (of time, practice, knowledge, skill, etc.) are causing problems? Can you identify sources of these variations? Can you eliminate or reduce some of these variations?
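For readers who would like to try the dice game numerically before running it with a team, the exercise can be sketched as a short simulation. The mechanics below follow the rules above (two dice per work centre, six starting pieces in each stock location, an unlimited supply into the first centre); the downstream-first processing order is an assumption made so that pieces moved on one call only become available to the next centre on the following call:

```python
import random

random.seed(7)

def play_game(calls=20, centres=5, start_stock=6):
    """One run of the dice game; returns total pieces shipped by the last centre."""
    stock = [start_stock] * centres  # pieces waiting in front of each centre
    stock[0] = 10**9                 # first centre has an unlimited supply
    shipped = 0
    for _ in range(calls):
        # Downstream centres move first, so pieces passed on this call
        # cannot be processed again until the next call.
        for i in range(centres - 1, -1, -1):
            roll = random.randint(1, 6) + random.randint(1, 6)
            moved = min(roll, stock[i])
            stock[i] -= moved
            if i + 1 < centres:
                stock[i + 1] += moved
            else:
                shipped += moved
    return shipped

runs = [play_game() for _ in range(2000)]
print(f"average shipped per call: {sum(runs) / len(runs) / 20:.2f} "
      f"(each centre's dice average is 7.00)")
```

Although each centre's dice average seven pieces per call, starvation at the stock locations typically drags the line's output below that average – it is the interaction of the variations, not the averages, that determines the behaviour of the system.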
In statistics, standard deviation is used to measure the amount of variation or deviation from the average. The normal distribution curves in Figure 4.7 illustrate two systems with wider and tighter standard deviations whilst having the same average values. A smaller variation suggests a tighter deviation with outputs closer to the average value. The widely popularized Six-Sigma process improvement approach focuses on understanding and reducing variations in a process.
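The distinction in Figure 4.7 can be reproduced with two illustrative output series that share the same average but differ in spread (the numbers below are invented for the sketch):

```python
import statistics

# Two hourly output series with the same mean but different spread
tight = [29, 30, 31, 30, 29, 31, 30]
wide = [20, 40, 25, 35, 22, 38, 30]

for name, data in [("tight", tight), ("wide", wide)]:
    print(name, statistics.mean(data), round(statistics.pstdev(data), 1))
```

Both series average 30 pieces per hour, yet their standard deviations differ by an order of magnitude – exactly the contrast between the two normal curves in the figure.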
Contrary to the general belief that this line of thinking is only useful for repetitive manufacturing processes, the phenomenon becomes even more significant as systems increase in complexity.
Theories/worldviews and human behaviour
What Deming calls theories in his System of Profound Knowledge actually refers to the worldviews of individuals, i.e., how they see and conceptualize the system, as their views inevitably impact on what they do and how they do things within the system – their behaviour. Consider this – have you ever come across someone in your organization who appears to be behaving irrationally? If you have worked in organizations of moderate complexity, you will almost certainly have encountered irrational behaviour, but have you ever wondered why?
In many cases, a behaviour that may appear irrational to you, from a different perspective would look perfectly rational. I have personally experienced this ‘light bulb’ moment many years ago (in 2005) when we were working on a multidisciplinary project to improve the performance of UK manufacturing companies. Some team members were talking about dysfunctional behaviours by people in organizations. A colleague, who was a psychologist and a behavioural scientist, interrupted and said, ‘there is no such thing as dysfunctional behaviour, all behaviour is functional’, which resulted in a heated discussion. The ‘light bulb’ moment came when another colleague commented that ‘... how would you describe what terrorists do, don’t tell me they are functional’. And the response from the behavioural scientist was, ‘of course it is functional behaviour, they would not do what they do if they thought it was dysfunctional’. At this point the coin dropped. Of course everyone has a different perspective on life, what is right and what is wrong, and we see this in organizations all the time.
In this context, in Chapter 2 when we asked ‘What is a system?’, we discussed how different students conceptualized their university differently. Also, at the end of Chapter 2, when we asked you to reflect upon your organization by asking a number of colleagues to draw a picture of the organization and narrate each picture to one another, you would have seen different worldviews of different colleagues emerging from their pictures and stories. These worldviews become critical to help us understand the rationales behind the behaviours of people in organizations. If you have not done this exercise, we strongly recommend that you do it with your classmates, friends or colleagues, members of your sports club or people from within the system you are trying to understand better.
In summary, Deming’s System of Profound Knowledge provides further tools to help us think in systems. In particular, it helps us conceptualize and understand variation inherent within systems and their parts. It also brings individuals’ worldviews, which he calls theories, into view to help us make sense of how variation in different parts of the systems may interact with different worldviews of individuals and can result in system-wide behaviour.
4.5 Goldratt’s Theory of Constraints
Eliyahu Moshe Goldratt (1947–2011) was an Israeli business and management guru. He was the originator of the Optimized Production Technique, first published in the form of a business novel, The Goal (Goldratt and Cox, 1984), from which he developed the Theory of Constraints (Goldratt, 1990).
The exercise with coins described in the previous section should also be illustrative of the constraints within the system that determine the performance of the whole system. We have also talked about systems constraints earlier in this chapter when we introduced the Viable Systems Model and characterizations of different performance levels, i.e., actuality, capability and potentiality. In this context, a constraint is defined as anything that prevents the system from achieving its full potential. In the Theory of Constraints, a constraint is defined in a similar way as anything that prevents the system from achieving its goal.
The Theory of Constraints is a structured set of guidelines that helps us understand and manage the constraints in a system. The principle that underpins the Theory of Constraints is that organizational goals can be managed by controlling the variation in three measures: throughput, inventory and operating expenses:
Throughput is the rate at which the system processes work, i.e., the rate at which the work flows through the system. For a commercial entity, throughput would also equate to the speed at which the system generates income through sales: the faster the throughput the greater the sales.
Inventory is the accumulation of work through the system. In a commercial entity, this would be the money that the organization has invested in purchasing things it intends to sell.
Operating expenses are the resources (money in commercial terms) the system consumes to turn inventory into throughput.
According to the Theory of Constraints, a system’s performance can be maximized by carefully managing and balancing these three performance measures by:
maximizing throughput by getting as much work through the system as possible that meets the expectations of the system (i.e., effectiveness), whilst minimizing the inventory held within the system and the operating expenses consumed in turning that inventory into throughput (i.e., efficiency).
EXAMPLE: CHAIN AS A SYSTEM
Imagine a length of chain, how would you strengthen it? Essentially, a length of chain is a simple linear system with each link in the chain representing one part of the system. Each link is connected to the adjacent links. Through these links the force is transmitted through the entire length of the chain.
To strengthen the chain, we do not need to upgrade each link, because the strength of a length of chain is determined by its weakest link, i.e., the constraint. If we can identify the weakest link and strengthen just this one link, we would strengthen the entire chain.
Of course, with this intervention, although the entire length of the chain is a little stronger, the constraint will move to the next weakest link. To strengthen the chain further we would need to identify the next weakest link and strengthen that particular link and so on.
Just like in the chain example above, the Theory of Constraints provides a set of guidelines or rules to help us attain the goals of the system based around resource relationships to focus our attention on the constraints that slow down or prevent the system from attaining its goals.
These rules are based on simple resource relationships as illustrated in Figure 4.9. The figure contains three simple systems. All three systems comprise two parts, or resources, as referred to in the Theory of Constraints. In all three systems, Part A is capable of going through 100 pieces of work per hour and Part B is capable of going through 50 pieces of work per hour.
In System 1, the two resources are not connected to one another. Thus, we can consider parts A and B as constraint resources because the capacity of each part constrains the overall output of the entire system, which is 150 pieces of work per hour. In this case, improving the capacity of either Part B or Part A will provide a net gain to the overall throughput of the system.
In System 2, the two resources are connected, with Part A, the non-constraint resource, serving Part B, the constraint resource. If both parts work at full capacity, the overall system output will be 50 pieces per hour, but because Part A is working at a rate of 100 pieces per hour and Part B can only use 50 of the pieces produced by Part A, inventory (shown by the dark triangle) will build up between the two parts at a rate of 50 parts per hour. In this system, to improve the system's throughput we would need to explore how we can improve the capacity of Part B. Any improvement in the capacity of part B would be a net gain to the whole system.
In System 3, the two resources are similarly connected, but this time Part B, the constraint resource, is serving Part A, the non-constraint resource. If both parts work at full capacity, the overall system output will still be 50 pieces per hour but because Part A is capable of working at a rate of 100 pieces per hour and Part B can only supply 50 of the pieces per hour, Part A will not be utilized for half of its time. In this system, like System 2, to improve the system’s throughput we would need to explore how we can improve the capacity of Part B. As before, any improvement in the capacity of Part B would be a net gain to the whole system.
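System 2 can be sketched in a few lines of deterministic code to show both effects at once – throughput capped by the constraint and inventory accumulating in front of it. The eight-hour horizon is an arbitrary choice for the illustration:

```python
# Simulate System 2 from Figure 4.9: Part A (100/hr) feeds Part B (50/hr).
def simulate_system2(hours):
    rate_a, rate_b = 100, 50
    inventory = 0  # pieces queued between Part A and Part B
    output = 0     # pieces completed by Part B
    for _ in range(hours):
        produced = rate_a                             # A works at full capacity
        consumed = min(rate_b, inventory + produced)  # B takes what it can
        inventory += produced - consumed
        output += consumed
    return output, inventory

out, inv = simulate_system2(8)
print(f"after 8 hours: output {out} pieces, inventory {inv} pieces")
# Prints: after 8 hours: output 400 pieces, inventory 400 pieces
```

Throughput is capped at 50 pieces per hour by Part B while inventory grows at 50 pieces per hour; in System 3 the same mismatch shows up instead as Part A standing idle for half of its time.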
The Theory of Constraints provides us with the following rules to help us understand and manage constraints within systems:
Rule 1: The level of utilization of a non-constraint resource is determined not by its own potential but by some other constraint in the system. As we have seen in Systems 2 and 3, the system performance was constrained not by its own potential but by the capacity of Part B.
Rule 2: Utilization and activation of a resource are not the same. In System 3, it does not matter how hard we work Part A (activation); it can only be useful 50 percent of its time (utilization).
Rule 3: An hour lost at a constraint is an hour lost for the whole system. In Systems 2 and 3 (Figure 4.9), if we lose one hour of productive time due to unavailability of Part B, which is the constraint, this hour of productive time will be lost to the entire system. Whereas an hour lost due to unavailability of Part A is not so critical because Part A can catch up as it has spare capacity.
Rule 4: An hour saved at a non-constraint is a mirage. In Systems 2 and 3 (Figure 4.9), if we save an hour in Part A, a non-constraint resource, it will have no effect on the overall system’s performance.
Rule 5: Constraints govern both the throughput and inventory. In Systems 2 and 3 (Figure 4.9) we can observe that the nature of the relationship between Part B, the constraint resource, and Part A, the non-constraint resource, governs the level of inventory in the system. In System 2, the non-constraint resource serves the constraining resource, and as a result the inventory builds up between Parts A and B. Whereas when the relationship is reversed as in System 3, there is no inventory build-up, and instead the utilization of Part A is affected.
Rule 6: The sum of the local optima is not equal to the optimum of the whole. In Systems 2 and 3 (Figure 4.9), optimizing the performance of each part, i.e., Parts A and B, is not the same as optimizing the performance of the system as a whole. If we optimize the performance of each part, i.e., working Part A and B to their full potential, we create either excess inventory (System 2) or an unnecessary operating expense by activating Part A for longer than necessary (System 3). Within these constraints the system’s performance would be better optimized if we activated Part A 50 per cent of its time, thus avoiding building up unnecessary inventory or operating expense.
Rule 7: Balance the flow, not capacity. As throughput governs the rate at which the system attains its goal, balancing the resources to optimize the flow through the system is more important than trying to achieve a balanced capacity in the system. This is due to variations in different parts of the system. Over time it is virtually impossible to balance the capacity to enable the flow in the system. Thus, instead of trying to balance the capacity, you should focus on balancing the flow by building appropriate control mechanisms.
The Drum-Buffer-Rope Principle, illustrated in Figure 4.10, is a control mechanism that enables us to operationalize several of the above rules to balance the flow, rather than the capacity, in a system. In this production system, we have a simple process that is flowing from left to right. The system comprises five operations (illustrated by circles), four of which are non-constraint resources (dark grey circles) with the constraint resource positioned in the middle (light grey circle). Based on the above rules, the constraint operation, the Drum, regulates the flow through the entire system (Rule 1), while the drumbeat represents the work rate of the constraint operation and ensures that all other operations are synchronized to the drumbeat. In other words, the non-constraint operations will be working at less than full capacity (Rules 2 and 4), but that is OK because working them any harder will not have any impact on the throughput of the whole system. The Rope represents the communication signal to synchronize the work rate of all the non-constraint operations with the work rate of the constraint operation.
The Buffer is a mitigation strategy against two risks. The constraint buffer mitigates against the risk of constraint operation running out of work due to a breakdown in upstream operations (Rule 3). The shipping buffer mitigates against the risk of downstream operations breaking down, thus negatively impacting on system’s throughput (Rule 5). In principle, these buffers are a measure of time, as the amount of inventory will mitigate against these risks for a given amount of time, i.e., the longer these breakdowns can potentially last, the higher the inventory in these buffers would need to be. Thus, knowing the probability of breakdowns and the mean length of breakdowns would provide essential information to help us decide on the optimum size of these buffers.
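The point about buffers being a measure of time can be turned into a back-of-the-envelope calculation. The function below is a hypothetical sketch, not part of the Theory of Constraints canon: it simply converts the constraint's work rate and the longest upstream outage it must ride out into a piece count, with an assumed safety factor:

```python
# Rough constraint-buffer sizing sketch (hypothetical, for illustration only):
# hold enough inventory in front of the constraint to keep it fed
# through the longest breakdown we expect upstream.
def constraint_buffer(constraint_rate_per_hr, max_outage_hr, safety_factor=1.2):
    """Pieces to hold before the constraint; safety_factor is an assumption."""
    return constraint_rate_per_hr * max_outage_hr * safety_factor

# e.g. a 50-piece/hr constraint riding out a 2-hour outage -> 120 pieces
print(constraint_buffer(50, 2))
```

In practice the breakdown probability and mean breakdown length mentioned above would refine both the outage estimate and the safety factor.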
Finally, based on the above rules, the Theory of Constraints provides us with a simple approach to improving a system’s performance. The process consists of five steps:
Step 1: Identify the system’s constraint. There is no hard and fast science for doing this. Indeed, we could model the flow of work through the system, but often there are simpler and quicker indicators, such as looking for the machine with all the work piled up in front of it or looking for the person everyone needs to get a decision from. Remember, sometimes the constraint can also be outside the system, in the market or even in the supply chain. Talking to as many people as possible within the system about what constrains their work, or what they are regularly waiting for, is an equally good way of identifying the system’s constraint.
Step 2: Exploit the system’s constraint. Spending a lot of money, buying a bigger machine or hiring more people is not always the answer. Usually, the constraint has capacity that we do not use. For example, a machine that is a constraint may be working only six hours a day from 9 am to 5 pm excluding lunch and coffee breaks. Staggering lunch and coffee breaks would enable the system to work for eight hours, increasing the capacity of the constraint by 30 percent. Remember, an hour gained at the constraint is an hour gained for the whole system. As another example, there may be someone else who can take some of the workload of the constraint, even if it is lower-level, less-skilled work. This would alleviate the load on the constraint. In short, before spending money we should look to get everything out of the bottleneck by finding innovative ways of exploiting the constraint.
Step 3: Subordinate everything else to the above constraint. This is where we synchronize all other parts of the system to work at the same rate as the constraint resource by finding innovative ways of signaling the drumbeat to all other parts of the system. In this step, we would also be looking at how we could create buffers in order to mitigate risks to the overall system performance.
Step 4: Elevate the system’s constraint. This is where we would start considering additional investments to elevate the performance level of the constraint. However, it should be borne in mind that this step may not be totally necessary, as by exploiting the system’s constraint we may have already moved the constraint to another part of the system.
Step 5: Go back to step 1. Just like the chain example at the start of this section, once we have elevated the performance of the system’s constraint, it is likely that the constraint has moved elsewhere in the system.
In his book, The Goal, Eli Goldratt tells the story of Alex Rogo, a factory manager who has been given the ultimatum to turn the factory around or risk closure. The following is the precis of the story, which demonstrates the application of the Theory of Constraints in practice. The book is well worth a read and it is still available in print as well as in an audiobook format. A film telling the story is also available on various online outlets.
The Goal
The story starts with Alex Rogo, a fictitious factory manager, being given an ultimatum to turn the factory's performance around or risk closure. At first, Alex feels lost and does not know what to do. He then bumps into an old college professor at an airport, who asks him some questions about his challenges and questions whether he is using the right performance measure. The professor introduces Alex to the three measures of The Theory of Constraints, i.e., throughput, inventory and operating expenses, and advises him to look for his bottleneck (the constraint).
At first Alex is a bit confused about how the concept of a constraint could apply to his factory. But the coin drops when he is taking his son and his friend for a hike in the forest. One child, Herbie, is the slowest; he keeps falling behind and everybody ends up waiting for Herbie. Progress is slow. Clearly Herbie is the constraint. So, Alex puts Herbie to the front of the group—they no longer have to wait for Herbie to catch up, but they are making very slow progress and everyone is bored. Alex asks to see what is in Herbie's backpack. He has frying pans, cans of beans and many more things that are weighing him down. Alex asks everyone else to take one item from Herbie to lighten his load. The rest of the hike is completed in good time.
With this new insight, back in the factory, Alex shares his thoughts with his management team and after a bit of head scratching, they identify the constraint as one of the machines, the NCX10. They stagger the lunch and coffee breaks; they even recommission an ancient machine which was being scrapped to relieve some of the workload off the NCX10 and the factory performance improves in only a few weeks. But Alex is informed that the improvement is not good enough to keep the factory open. Alex goes back to his management team and they go through the same process once again, but this time the constraint is in the market – they need more orders. Alex visits the marketing team at the company’s headquarters and does a deal with one of the sales teams to alleviate this constraint. They get more orders, the factory performance improves further, and the factory becomes the best-performing plant in the group. Alex gets promoted and everyone lives happily ever after.
In short, the Theory of Constraints provides us with a set of thinking processes and guidelines to help us identify and manage constraints in systems.
CASE STUDY
A systems thinking case study
In this case study we will look at how the models covered in this chapter could be used to help us understand an organization as a system. For consistency we will continue with the whisky company example we used at the end of the previous chapter as we are already somewhat familiar with the case.
As a reminder, the company manufactures and sells whisky in a global market; they have two business units, high-value products and low-value products, comprising about 45 and 170 specific products respectively. The manufacturing process consists of distilling the product, maturing it for a period of 5 to 25 years and then bottling it, including boxing and palletizing for transportation. The pallets are then shipped all over the world through various distributors to retail chains and specialist shops where customers can buy the product.
Using Miller’s Living Systems Theory, we can conceptualize the company as both a concrete system and a conceptual system. It is a concrete system because it exists in reality and consists of tangible objects (products, buildings, manufacturing equipment, etc.). It is a conceptual system because the division into two business units is based on knowledge and ideas: the products could just as easily have been categorized into large and small products according to their bottle sizes, or into US, EU and Rest of the World products depending on the markets in which they are sold.
The company clearly processes materials (matter) and information. Based on Figure 4.1, at the input stage the marketing and customer services functions can be conceptualized as the input transducer, as they bring information into the system in the form of forecasts and customer orders. The purchasing function can be conceptualized as the ingestor, as it brings materials into the system. At the throughput stage, the planning function can be conceptualized as both the internal transducer, as it receives and converts the information brought into the system, and the channel and net, as it also distributes this information... and so on.
Using Beer’s Viable Systems Model and Figure 4.2, we can conceptualize the customers, suppliers, competitors, regulators and wider society as the environment within which the company operates. The company then has two System 1s (the business units): high-value and low-value products. The manufacturing planning function can be conceptualized as System 2, as it interprets the overall planning and production plans and coordinates the activities of the two business units (System 1s). The parts of the company that are connected to the external environment, such as marketing, customer services, purchasing, and accounts payable and receivable, can be conceptualized as System 4, as collectively these functions bring in intelligence about what is happening in the outside world. The strategic management and business planning function can be conceptualized as System 5, and the sales and operations planning function, together with the sales, purchasing, marketing and operations management functions, can be conceptualized as System 3, as they collectively manage the operation of the two System 1s.
Using Hitchins’ Systems Architecture, we can conceptualize the organizational structure and the supporting infrastructure as the architecture that underpins the purpose, function and performance of the system. This can include things such as the IT systems or the proximity of buildings: for example, if raw materials need to travel long distances between operations, performance would suffer, particularly for the low-value products.
Using Goldratt's Theory of Constraints we can analyse the end-to-end process from sales forecasts, customer orders and materials coming in from one end and products being shipped to customers at the other end. Through this analysis we may find that forecast accuracy together with the worldviews of the marketing people and how they prepare the sales forecasts are the key constraint to the performance of the high-value products business unit.
Using Deming’s System of Profound Knowledge we can start identifying the differences in worldviews between marketing and operations and understand the key sources of variation. We can then analyse whether eliminating variation would significantly improve performance. We may also find that to reduce variation in performance we need to formalize the planning process using hard system modelling tools (see next chapter), or perhaps we need to co-locate the marketing, planning and manufacturing functions to improve daily communications, thus improving the system’s architecture.
Reflective questions
In the above case study we started describing how various functions of the company may map on to Miller’s 20 subsystems (Figure 4.1), but we did not complete all 20. Try completing the rest of the analysis and see if you can map the remaining subsystems on to company functions.
4.6 Summary
In this chapter, our objective was to build upon various concepts and definitions we covered in the previous chapters and introduce you, the reader, to various systems thinking models and frameworks. The models and frameworks we have selected to include in this chapter, whilst not an exhaustive list of all the models in the field, are intended to provide different perspectives to systems and systems thinking.
On the one hand, Miller’s Living Systems Theory and Beer’s Viable Systems Model both look at complex systems, recognize the recursive nature of systems (i.e. systems exist within systems) and offer models of the mechanisms that enable living and viable systems to self-manage and self-create. Even though the two models are quite different, they complement each other in many ways. Hitchins, on the other hand, takes a different perspective by helping us understand the architectures that underpin systems and how these architectures evolve as systems change and adapt to their environments.
Deming and Goldratt provide different perspectives yet again. Deming identifies two fundamental factors that make complex systems difficult to predict. He identifies that when variation occurs in the performance of different parts of the system, their interactions are naturally going to produce at least partially unpredictable outcomes. He also identifies that this unpredictability is further exacerbated by the differences in the theories (worldviews) that different participants (people, animals, organizations, societies, etc.) may have about the system. He argues that we cannot begin to understand the behaviour of such complex systems without first understanding these variations in participants’ theories or worldviews.
Goldratt takes a different perspective by focusing our minds upon the goal of the system and the constraints that are preventing the system from achieving its goal. He offers a process for analysing the system and improving its performance by identifying and eliminating a constraint.
These models and frameworks individually would not be sufficient to provide a comprehensive understanding of a complex system. However, in...