
Separation of Concerns in Systematic Design 09.2020

Introduction

This is the second essay about systematism. The first part of this series examined which kinds of tasks are most suitable for systematization and how abstraction benefits a system's longevity.
I also analyzed the conditions that allow systematic constructs to thrive, and discussed the specific economies created by a thoroughly optimized system.

In this second part I want to build on that foundation and introduce an additional concept that is fundamental to ensuring the operational efficiency of a system: separation of concerns.

Separation of concerns

Separation of concerns is an architectural engineering concept that describes the division of a program into a discrete set of operations, each of which is semantically self-contained.
In relation to systems, separating concerns is a powerful construct that ensures predictability and maintainability, and it is applicable to many different contexts.

The main goal of separation of concerns is to clearly define the scope of each part that constitutes the system. Distinct boundaries are fundamental for concerns to be truly separated, and the criteria by which the scope is defined can vary depending on the goals of the system and how each part relates to the others. A popular approach to scope definition is the single responsibility principle, which dictates that every discrete component of a system is responsible for a unique aspect of the functionality provided by the construct it belongs to, and that this responsibility should be entirely self-contained. In the context of an organization, assigning each team a specific set of responsibilities helps reduce duplication of effort and streamline the workforce.
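
To make the principle more concrete, here is a minimal sketch of a single responsibility split in code; the classes and figures below are invented for illustration and not drawn from any real codebase:

```python
# Hypothetical sketch: each class owns exactly one responsibility.

class SalesReportData:
    """Gathers the raw figures; knows nothing about formatting or delivery."""
    def fetch(self) -> dict:
        return {"vehicles_sold": 42, "revenue": 1_250_000}


class SalesReportFormatter:
    """Turns raw figures into a readable document; knows nothing about storage."""
    def render(self, data: dict) -> str:
        return f"Vehicles sold: {data['vehicles_sold']}\nRevenue: ${data['revenue']:,}"


class SalesReportMailer:
    """Delivers the finished document; knows nothing about how it was produced."""
    def send(self, document: str, recipient: str) -> None:
        print(f"Sending report to {recipient}:\n{document}")


# Each piece can change (or be replaced) without touching the others.
data = SalesReportData().fetch()
report = SalesReportFormatter().render(data)
SalesReportMailer().send(report, "manager@example.com")
```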

The earliest definition of separation of concerns known in the systems theory literature appears in the essay On the Role of Scientific Thought, in which the computer scientist Edsger W. Dijkstra describes separation of concerns as:

[...] It is, that one is willing to study in depth an aspect of one's subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects.[...] It is what I sometimes have called "the separation of concerns", which, even if not perfectly possible, is yet the only available technique for effective ordering of one's thoughts, that I know of [...]
Separation of concerns according to Dijkstra, who puts the academic at the center of his definition.

What Dijkstra describes is the correlation between clear scope and the level of focus for each area. When teams have clear responsibilities assigned to them, they are able to analyze the problem space in greater depth and develop expertise around the subject, maximizing their chances of success.

Horizontal separation

Depending on the desired architecture of a system, separation of concerns can follow different models and operate at different levels of scale, all designed to keep the system within its optimal output range.
The most common model for separation of concerns is called horizontal, and it refers to the logical separation of modules into individual layers of functionality. In software engineering, a common example of horizontal separation is the MVC model. Let's imagine a car dealership that enables customers to sell and purchase vehicles through an app: if the application's architecture is separated horizontally, the distinction between presentation, logic, and data guarantees there will be no functional overlap between any of the three parts. Horizontal separation of concerns can also be referred to as local, since its combined scope is usually limited to a single system.
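
To make the horizontal split tangible, here is a deliberately simplified sketch of an MVC-style structure for the dealership app; all class and field names are hypothetical:

```python
# Illustrative MVC-style sketch: data, logic and presentation live in separate layers.

class VehicleModel:  # data layer: owns storage and retrieval
    def __init__(self):
        self._inventory = [{"id": 1, "make": "Fiat", "price": 9500}]

    def all(self) -> list:
        return list(self._inventory)


class VehicleView:  # presentation layer: owns formatting only
    def render(self, vehicles: list) -> str:
        return "\n".join(f"{v['make']} - ${v['price']:,}" for v in vehicles)


class VehicleController:  # logic layer: coordinates model and view
    def __init__(self, model: VehicleModel, view: VehicleView):
        self.model, self.view = model, view

    def list_vehicles(self) -> str:
        return self.view.render(self.model.all())


print(VehicleController(VehicleModel(), VehicleView()).list_vehicles())
```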

Vertical separation

Orthogonal to horizontal concerns, vertical separation of concerns follows the structure of business organizational units and is grouped by feature set. This type of separation is often paired with other decoupling strategies to achieve a more nuanced application architecture.

For example, the same car dealership can organize itself around teams that cater to the specific needs of a customer during each step of their lifecycle. This type of structure could result in one unit responsible for customer leads, a second in charge of customer acquisition, a third accountable for conversion, and so on, all working in coordination but independently from one another. Each team can also separate further horizontally, dividing business logic, presentation, and data management for each feature it is responsible for. This distribution of concerns is particularly helpful with systems that are too large or complex to be overseen by a single entity and require their scope to be defined at multiple levels.
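
Here is a tiny sketch of what a vertical slice could look like in code, with each hypothetical feature exposing its own interface and keeping its data and presentation concerns to itself (the feature names follow the dealership example above):

```python
# Hypothetical vertical slices: each feature owns its own data, logic
# and presentation behind a small public interface.

class LeadsFeature:
    def capture(self, contact: str) -> str:
        lead = {"contact": contact}                      # data concern, local to this slice
        return f"New lead recorded: {lead['contact']}"   # presentation concern, local too


class ConversionFeature:
    def close_sale(self, lead_contact: str, vehicle_id: int) -> str:
        sale = {"contact": lead_contact, "vehicle": vehicle_id}  # data concern
        return f"Sale closed with {sale['contact']} for vehicle #{sale['vehicle']}"


# Slices coordinate through their public interfaces, not shared internals.
print(LeadsFeature().capture("ada@example.com"))
print(ConversionFeature().close_sale("ada@example.com", vehicle_id=1))
```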

Transversal separation

The last type of separation of concerns is transversal (or aspect-oriented). Transversal concerns are those interspersed across the boundaries of various areas within an application. In software development, this particular approach is referred to as aspect-oriented programming, a practice aimed at abstracting methods that are used throughout the application so that they can be managed as a centralized function. Let's go back to our car dealership example and imagine that workers on different teams share the requirements of managing documentation and logging their code's behavior. Documentation and logging become cross-cutting concerns because they require a shared solution but are incorporated locally at different levels of the codebase.
A transversal concern can also be incorporated within existing separations, for example inside a model that already separates concerns horizontally. In that case, cross-cutting concerns are utilized across the boundaries of the application's silos to provide common patterns for implementing reusable features.
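
As a small illustration of how a cross-cutting concern can be centralized, here is a sketch that uses a plain Python decorator (rather than a dedicated aspect-oriented framework) to attach logging to any function, in any layer or team's code; the pricing function is invented for the example:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)


def logged(func):
    """Cross-cutting concern: log the entry and result of any function it wraps."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s", func.__name__)
        result = func(*args, **kwargs)
        logging.info("%s returned %r", func.__name__, result)
        return result
    return wrapper


# The same decorator can be reused in the data layer, the logic layer,
# or inside any feature team's code, without duplicating the logging logic.
@logged
def price_vehicle(base_price: float, margin: float) -> float:
    return round(base_price * (1 + margin), 2)


price_vehicle(9500, 0.12)
```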

Scale

A common argument against separation of concerns is that breaking a monolithic toolchain into a series of smaller workflows may cause too much fragmentation, resulting in a system that produces sub-optimal results. While we should not be dogmatic about any of these models, in most cases separating concerns is a useful approach when building systems that are expected to scale.

When designing a new system, system architects need to balance today's constraints with tomorrow's ambitions: the system needs to be flexible enough to scale as the business grows, and sufficiently nimble not to require any unnecessary overhead. So what exactly does scaling a system mean?

We can think of the required scale of a system as the difference between the reach or speed required to complete a job and the output a single worker is able to provide.
While human-centric, this definition applies to the output ability of machines or computers alike.
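
As a rough, back-of-the-envelope illustration of this definition (all figures below are invented), the gap between the required output and what one worker can sustain tells us how much the system has to grow:

```python
import math

# Hypothetical figures: the job requires 500 units per day,
# while a single worker can sustainably produce 60 units per day.
required_output = 500
single_worker_output = 60

scale_gap = required_output - single_worker_output            # how far one worker falls short
workers_needed = math.ceil(required_output / single_worker_output)

print(f"Shortfall for a single worker: {scale_gap} units/day")  # 440 units/day
print(f"Workers needed to meet demand: {workers_needed}")       # 9
```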

When the required output of a system goes beyond what a single worker or machine can individually support, the system needs to grow. Once a system reaches this inflection point, system administrators generally have two choices: they can seek to optimize the system further so that the output of a single worker can be increased, or they can add additional workers to improve the capacity of the system.

Let’s examine a system that is already running at optimal capacity and can only support additional output by providing additional workers. In a system with a single worker, the first operator has to complete all the tasks required from the system by themselves. The introduction of a second worker allows both operators to divide the required labor, enabling each individual to gain speed and output volume by perfecting the execution of a smaller set of steps, which ultimately contribute to boosting the overall operational efficiency of the system.

In the example above, separating concerns between workers provides a pathway towards maximizing efficiency by ring-fencing each operator's responsibilities to a set of processes that are manageable for the individual and can be kept in memory. Neurological studies have demonstrated that the ability to execute a task is stored in our procedural memory and that there is a tax we pay every time we switch between tasks that require a different set of instructions. This means that if a system requires workers to learn and perform tasks beyond what they can comfortably recall and complete, it will incur context-switching debt and its output will begin to degrade.

In systems operated by people, the dimension of what a single human can sustainably deliver becomes a guardrail that helps create infrastructures that are well managed and long-lived. A similar concept rose to popularity in Britain during the industrial revolution and resulted in limiting daily working hours to what is known today as the 8-hour work day. In the following section I will discuss how this principle can also serve systems operated by machines, following a succinct axiom that states:

The fewer moving parts, the better. Exactly. No truer words were ever spoken in the context of engineering.
Software developer Christian Cantrell's quote encapsulates a similar sentiment to Murphy's law.

Interoperability

Now that we understand how separation of concerns can help to efficiently scale our systems, we can discuss how they should be architected so that they can complement each other and work together as a network. We define interoperability as:

the ability of different information technology systems and software applications to communicate, exchange data, and use the information that has been exchanged.
A definition of interoperability by the Healthcare Information and Management Systems Society.

In recent years a number of European governments have started to reduce their bureaucratic burden by offering their citizens access to digital services. These e-governments are setting new standards for how systems should be constructed, focusing on the portability of data, which is often collected once, stored in singular repositories, and shared across many different services and providers.
To guarantee the transferability of information across multiple systems, governments and institutions need to address two different obstacles: technical and semantic interoperability.

Technical interoperability addresses the technological requirements two systems must satisfy to communicate securely across a network. It establishes the foundational syntax and protocols of data sharing that need to be fulfilled when a connection is made, and it is responsible for developing standards around exchange solutions so that data can be accessed by compliant systems. For example, when transferring data from an API endpoint to a client, both resources need to satisfy the same security requirements for the data transfer to be successful. Technical interoperability ensures that data transfer is possible and that all the required infrastructure is operational.
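
A minimal sketch of what the technical side looks like in practice, assuming a hypothetical endpoint, a placeholder token, and the third-party requests library: both parties must agree on the transport, the authentication scheme, and the payload format before any data can flow.

```python
import requests  # third-party HTTP client

# Hypothetical endpoint and credentials: the client and the server must agree
# on HTTPS, the authentication scheme and the payload format up front.
response = requests.get(
    "https://api.example.gov/citizens/records",
    headers={
        "Authorization": "Bearer <access-token>",  # agreed authentication scheme
        "Accept": "application/json",              # agreed data format
    },
    timeout=10,
    verify=True,  # enforce TLS certificate validation
)
response.raise_for_status()  # the transfer only succeeds if both systems comply
records = response.json()
```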

Semantic interoperability defines the underlying data model that enables mapping between source data sets and their destination endpoints. For example, when a patient is transferred from one hospital to another location, the institutions share patient data so that doctors can provide continuous treatment. Semantic interoperability ensures that each data point shared between two systems has the same meaning regardless of the entity that stores it (e.g. the patient's "first name" recorded by hospital A is interpreted as such, and not as their "middle name", when it is received by hospital B). It is crucial in preventing transcription errors and keeping the data readable by humans.
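
A toy sketch of the semantic side: before hospital B can ingest hospital A's records, each field has to be mapped onto a shared meaning. The field names and shared schema below are invented for illustration:

```python
# Hypothetical field mapping: the two hospitals label the same concepts
# differently, so an agreed shared schema resolves what each field means.
HOSPITAL_A_TO_SHARED = {
    "first_name": "given_name",
    "surname": "family_name",
    "dob": "date_of_birth",
}


def to_shared_schema(record_a: dict) -> dict:
    """Translate a hospital A record into the agreed shared vocabulary."""
    return {HOSPITAL_A_TO_SHARED[field]: value for field, value in record_a.items()}


patient_a = {"first_name": "Ada", "surname": "Lovelace", "dob": "1815-12-10"}
print(to_shared_schema(patient_a))
# {'given_name': 'Ada', 'family_name': 'Lovelace', 'date_of_birth': '1815-12-10'}
```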

While interoperability may seem a simple concept to grasp, its application requires a considerable amount of planning, and it is a crucial part of a system's design. As they stand up new systems, architects need to make sure the systems are not only able to exchange information among themselves, but can also reference pre-existing data, which might be stored in a different infrastructure. Interoperability considerations become more relevant for larger networks: when a group of systems is able to communicate and coordinate the execution of multiple processes across a network, we define that architecture as a distributed system.

Distributed systems

A system is considered distributed when it is composed of a series of interoperable entities that work on a discrete set of responsibilities and are able to share information efficiently over a network.
The main characteristics of distributed systems are scale, interoperability, concurrency, and fault tolerance. I have discussed the first two attributes earlier in this essay, so I'll focus on the remaining concepts.

Concurrency is a property of a system that describes the ability of individual concerns to operate at the same time, or in parallel. It is particularly useful in reducing dependencies between tasks that are part of the same system. Workers are no longer bound to perform tasks sequentially, but can operate in parallel, drastically reducing the execution time of the job the system is hired to perform. As we know from exemplary cases like Amazon Prime delivery or on-demand content, reducing the time it takes to deliver a job can be an incredible advantage over a competitor's system.
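
Here is a small sketch of concurrency using Python's standard library; the order-processing task is a placeholder, but it shows how independent concerns can run in parallel instead of queuing behind one another:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def process_order(order_id: int) -> str:
    """Placeholder task: pretend each order takes one second of work."""
    time.sleep(1)
    return f"order {order_id} done"


orders = [1, 2, 3, 4]

# Sequential execution would take roughly 4 seconds; running the independent
# tasks concurrently takes about as long as the slowest single task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_order, orders))

print(results)
```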

Fault tolerance enables a system to continue operating efficiently even in the event of failure of some of its modules. Fault-resistant systems are designed to continue their operations even in case of catastrophic failure. There are many approaches to fault tolerance, and the most common of all is redundancy.
Redundancy consists of ensuring that critical parts of the system (e.g. its data storage location, or the tires of a car) are duplicated and backed up by other parts of the infrastructure, preferably hosted in different locations (a cloud backup, or a spare tire in the trunk). In case of failure of the main resource, the system can sustain itself by automatically switching its provisioning to the backup copy; network storage drives and gas-powered generators are other common examples of data and power redundancy.
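
A bare-bones sketch of redundancy through failover, with invented data sources: when the primary store is unreachable, the system switches to a replica without interrupting the read.

```python
# Hypothetical failover: try the primary data store first,
# fall back to a replica if the primary is unavailable.

class UnavailableError(Exception):
    pass


def read_from(store: dict, key: str) -> str:
    if not store.get("online", False):
        raise UnavailableError(store["name"])
    return store["data"][key]


primary = {"name": "primary-db", "online": False, "data": {"vin-123": "sold"}}
replica = {"name": "replica-db", "online": True,  "data": {"vin-123": "sold"}}


def read_with_failover(key: str) -> str:
    for store in (primary, replica):
        try:
            return read_from(store, key)  # the first healthy copy wins
        except UnavailableError:
            continue                      # move on to the next redundant copy
    raise RuntimeError("all copies of the data are unavailable")


print(read_with_failover("vin-123"))  # served by the replica
```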

Distributed systems can also be diversely located. In this essay I have considered distribution primarily from the point of view of separation of concerns, but it is important to acknowledge that another advantage of distributed systems is that they are not bound to a specific geography and do not need to be co-located to operate. A distributed geography can be particularly advantageous in the context of a global economy, allowing different units of a single system to work around the clock across different timezones and to recruit the best talent for a specific job without being constrained by local market rates.

Conclusion

As technological advancements allow for more efficient distribution of labor across different units and locales, newly created systems are paving the way for an emerging systematic model which favors smaller, interoperable (and replaceable) modules over larger monolithic constructs, in the interest of risk mitigation, scalability, and supercharged production lifecycles.
These modern system architectures are designed to be more flexible, with modules that are less dependent on one another, and they allow for a greater speed of deployment and iteration. For a deeper dive into how to optimize a system for continuous delivery and how to boost its distribution strategy, you can reference the following essay.

References

  1. Wikipedia, Separation of Concerns.
  2. Wikipedia, Single Responsibility Principle.
  3. On the Role of Scientific Thought, Edsger W. Dijkstra, 1974.
  4. The Art of Separation of Concerns, January 2008.
  5. Wikipedia, Model–view–controller.
  6. Wikipedia, Aspect-oriented programming.
  7. How The Brain Learns, July 2011.
  8. The Role of Working Memory Gating in Task Switching: A Procedural Version of the Reference-Back Paradigm, Yoav Kessler, Frontiers in Psychology, December 2017.
  9. Wikipedia, E-government.
  10. Distributed Systems - The Complete Guide.
