One often (and ironically) repeated rule in programming is: don’t repeat yourself. We repeat it so much we even have an abbreviation: DRY. There are good reasons for this advice. Duplicating and modifying code can be a quick and easy way to get a feature done, but it can also lead to problems over time. It’s harder to understand code when the valuable logic is mixed with reams of low-value boilerplate. Subtle differences can also sneak in, leading to inconsistent behaviour across the application. Things get even more difficult if you want to make a change.

Cross-cutting concerns are the requirements that span multiple features. This includes things such as error handling, logging, profiling, security policy, and so on. We often want these things to be consistent, and we occasionally want to make systematic adjustments to them. Unfortunately, because of their nature, they can be challenging to centralize.

There are a bunch of tools and techniques for dealing with cross-cutting concerns, and I’ve made a list of them. I recommend thinking carefully about which approach(es) you want to use. It may be challenging to switch approaches once one has become widely used in your code base.

Once you abstract away some of these big requirements, you start to think about similar techniques every time you repeat a pattern. This is a deep rabbit hole to go down, but I can’t say it’s entirely wrong to plumb the depths. Things like classes that report changes, hash functions, and even property getters and setters can be convenient to automate. This kind of code is often not fun to write, and because we often put little thought into it, honest mistakes can creep in. Worse, it’s even more tedious to test these sorts of things, and the bugs they cause can be tough to reproduce and find.

I am focusing on the broader-scoped cross-cutting concerns in this post, but the same tools and techniques can be applied to many places where you need to implement the same behaviour across multiple bits of code.

Aspect Oriented Programming (AOP) tools

Aspect Oriented Programming is an approach that tackles cross-cutting concerns head-on. How these tools work varies by framework, but essentially they inject behaviour by either wrapping functions or modifying compiled code. You attach behaviours either using language features (ex: Attributes, Annotations) or through pattern matching (ex: namespace, class name, function name, function signature). There are multiple tools available in many common languages, so you should have no trouble finding something for your project.

When they work, these tools can be magical. Slapping a security attribute on a function is far easier than interrogating some security service and writing your own error messages. It also helps us keep our code cleaner: the account balance class does need to be secure, but security isn’t really a core part of the logic, and it’s nice to keep them untangled.
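As a sketch of what this looks like in C#: the attribute below is hypothetical (a real AOP framework supplies its own), but the shape is typical. The method body contains only core logic; the tool weaves the permission check in around it.

```csharp
using System;

// Hypothetical marker attribute; a real AOP tool (a weaver or proxy
// generator) would define its own equivalent.
[AttributeUsage(AttributeTargets.Method)]
public class RequiresPermissionAttribute : Attribute
{
    public string Permission { get; }
    public RequiresPermissionAttribute(string permission) => Permission = permission;
}

public class AccountBalanceService
{
    // The AOP tool intercepts this call and runs the permission check
    // (and produces the error response) before the body executes.
    [RequiresPermission("Accounts.Read")]
    public virtual decimal GetBalance(int accountId)
    {
        // Core logic only; security stays untangled from it.
        return 0m; // stand-in for the real lookup
    }
}
```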

The problem with this approach is that it can be a little too magical. A developer joining the team could find it difficult to understand why some behaviour is happening where there is no apparent code causing it. It is also not always straightforward to debug this kind of code when it doesn’t work (but some tools perform better than others). It can be even more confusing when some shared logic is executed where it’s not supposed to.

If you’re going this route, there are two flavours to consider: some tools modify the compiled code, and others wrap objects at runtime (often in cooperation with your dependency injection system).

Runtime wrappers are helpful because they can sometimes wrap code you didn’t write. They may also make it possible to dynamically add or change behaviours, either via configuration files or your own code. For example, you could inject a policy that wraps all database calls with performance counters, or do so only in a test environment.
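In .NET, the built-in DispatchProxy type is one way to do this kind of runtime wrapping without a third-party tool. The sketch below times every call through an interface; IOrderRepository is a hypothetical stand-in for whatever interface you want to wrap.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

// A runtime wrapper built on System.Reflection.DispatchProxy. T must be an
// interface; the wrapped object itself needs no changes.
public class ProfilingProxy<T> : DispatchProxy where T : class
{
    private T _inner = default!;

    public static T Wrap(T inner)
    {
        T proxy = Create<T, ProfilingProxy<T>>();
        ((ProfilingProxy<T>)(object)proxy)._inner = inner;
        return proxy;
    }

    // Every call through the proxy lands here; forward it and time it.
    // (Note: exceptions come back wrapped in TargetInvocationException,
    // which a production version would unwrap.)
    protected override object? Invoke(MethodInfo? targetMethod, object?[]? args)
    {
        var timer = Stopwatch.StartNew();
        try
        {
            return targetMethod!.Invoke(_inner, args);
        }
        finally
        {
            Console.WriteLine($"{targetMethod!.Name} took {timer.ElapsedMilliseconds}ms");
        }
    }
}

// Usage, assuming a hypothetical IOrderRepository interface:
// IOrderRepository repo = ProfilingProxy<IOrderRepository>.Wrap(new OrderRepository());
```

In practice you would usually register a wrapper like this with your dependency injection container, so every consumer gets the profiled version automatically.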

Working at (or near) compile time can be a good choice too. Behaviour inserted this way has the benefit of access to the internal bits of your classes, making more powerful policies possible.

Extract into a helper function

If you don’t want magical code, you can still centralize repeated logic with your own helper functions. This gives you one place to make changes, and is easier for new developers to follow. For logic that needs to wrap your code (ex: profiling), you can write a helper function that takes a lambda or delegate. For example (implementation simplified for illustration):

public static void Main()
{
    Profile("Application Startup", () =>
    {
        // startup code here
    });
}

public static void Profile(string description, Action profiledFunction)
{
    var executionTime = Stopwatch.StartNew();
    try
    {
        profiledFunction();
    }
    finally
    {
        Log($"{description} took {executionTime.ElapsedMilliseconds}ms");
    }
}
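The Action overload only covers code that returns nothing. A companion generic overload handles functions that produce a value; this uses the same simplified Log helper as the example above.

```csharp
using System;
using System.Diagnostics;

public static class Profiler
{
    // Same pattern as the Action version, but Func<T> lets the wrapped
    // code return a result through the helper.
    public static T Profile<T>(string description, Func<T> profiledFunction)
    {
        var executionTime = Stopwatch.StartNew();
        try
        {
            return profiledFunction();
        }
        finally
        {
            Log($"{description} took {executionTime.ElapsedMilliseconds}ms");
        }
    }

    // Simplified for illustration, as in the example above.
    private static void Log(string message) => Console.WriteLine(message);
}

// Usage:
// var config = Profiler.Profile("Load configuration", () => LoadConfiguration());
```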

The difficulty with this approach is that it can be deceptively challenging to design a good helper function. The implementation can be changed easily so long as you don’t need to change the function signature. For example, if you are writing your own logging helper, what arguments do you require? Do you want the class name? Do you want a stack trace? Do you want to support string formatting inline? Every additional argument adds effort for developers using the helper function; however, if you don’t require enough information, you may not be able to make a desired change to your centralized logic later without making expensive sweeping changes anyway.

This technique may be best for cases where you have a bit of duplication in a handful of areas. If your helper function is being called from hundreds or thousands of places, you may want a technique that is a bit more robust.

Use an established library

Picking a library and using it can be a good approach for common concerns. Logging is an example where this makes sense. There are lots of fantastic, well-established libraries available. They have mature APIs, tested and refined in thousands of apps. Every framework I’ve seen has been configurable for most common use cases. Many also support plugins for more specialized needs.

Logging is also the kind of thing that developers want to interact with: it can be helpful to throw a bit of extra information into the logs when you’re trying to track down an elusive bug. For comparison, if you are wrapping your code with logging using an aspect oriented framework, you may not have a natural way to pass specific information into a log message.
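For example, with a library like Microsoft.Extensions.Logging you can pass exactly the values you care about, and structured sinks keep them as searchable fields rather than flat text. OrderService here is a hypothetical class for illustration.

```csharp
using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger) => _logger = logger;

    public void Ship(int orderId, string carrier)
    {
        // The named placeholders in the message template become structured
        // properties on the log entry, not just interpolated text.
        _logger.LogInformation("Shipping order {OrderId} via {Carrier}",
            orderId, carrier);
    }
}
```

This is exactly the kind of call-site flexibility that is hard to get when logging is injected automatically from the outside.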

Compared to aspect oriented programming, however, you do have to invoke the library whenever you want to use it. If you wanted to log all calls to your service layer, you would have to call your logging framework for all your service methods (or use one of the other automatic techniques in this list).

Another drawback to this approach is that you can become stuck with the library you choose because the cost of replacing it will be prohibitive. This may or may not be the kind of thing you have to worry about, though. I have seen more than zero teams switch logging frameworks in an established project, but I’ve never seen an example where it was necessary.

Use a common interface library

If you want the stability of a mature API, but really might need the ability to change your implementation, you could look for a common interface intermediary. There are lots of examples of community-maintained interface projects, and even some built into base frameworks. With these, most of your code gets implemented referencing the common interface, but the actual implementation is wired up through a plugin. Java’s Apache Commons Logging is a great example of this.

There are also more advanced interfaces that do more than wrap another implementation. OpenTelemetry is a fantastic library for recording telemetry, traces, and logs. Its greatest benefit is that you can add one or more plugins to expose this data to your various monitoring tools such as a privately hosted Prometheus system, or a commercial observability platform.

A common interface is particularly useful when it is so widely used that common libraries start to use it too. For example, parts of .NET Core are now instrumented using OpenTelemetry, so you can pick up valuable data in many helpful areas just by adding a plugin.

When choosing one of these interfaces, make sure you’re picking something that’s widely used in your component ecosystem. If you are forced to use two (either because you have components that use different ones, or you decide to change), you might be able to make a simple bridge to stick them together. It is far nicer to pick the right library the first time though.
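Such a bridge can be quite small. In the sketch below, ILegacyLog is a hypothetical interface from an older component, and the bridge forwards its calls into Microsoft.Extensions.Logging so everything ends up in one pipeline.

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical logging interface used by older components.
public interface ILegacyLog
{
    void Write(string message);
}

// Adapts the legacy interface onto ILogger so old callers feed the
// same sinks as the rest of the application.
public class LegacyLogBridge : ILegacyLog
{
    private readonly ILogger _logger;

    public LegacyLogBridge(ILogger logger) => _logger = logger;

    public void Write(string message) =>
        _logger.LogInformation("{Message}", message);
}
```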

Another thing to be careful about is that these abstractions can sometimes be complicated. An abstract authorization interface could have a lot of bells and whistles you don’t need, or that aren’t even supported by your implementation. The more complicated the interface is, the more likely you’ll find diverging usage patterns across your code, and those patterns may not work with every implementation. This erodes some of the benefits of using a common library in the first place.

Put it in a base class

Inheritance is the classic object-oriented way to share logic, and it can be very effective. Features like abstract and virtual functions make it easy to deal with exceptions to the rule, which many approaches to centralizing logic have trouble with. It’s also quite helpful to have the compiler enforce the rules for you, making it safer to change the structure in the future.

The problem with inheritance is that where some is good, more is definitely not better. It is a powerful tool, but you can hit its limits pretty fast. Most languages only support single inheritance, and the ones that support multiple inheritance regret it. This means that you can only inject one set of behaviours into a class. If you want logging on some classes, but logging and authorization on others, you might need multiple layers of base classes to layer the features on. If you have a handful of concerns, you could find yourself making lots of base classes to pick and choose where they go. This makes this technique generally unsuitable for the classic cross-cutting concerns. Another limitation of inheritance is that sometimes it isn’t available to you: you might need your classes to derive from another type to participate in some other system.

Another challenge with using base classes for shared code appears when you have dependencies. You either need to include your dependencies in every constructor, or do some tricky stuff with statics to gain access to them. For example, if you are using a base class for authorization, you may need to pass a reference to your authorization service from every implementation of the base class. If you ever wanted to make a change that required a new dependency, you’d have to go and change the constructor for every implementation. This isn’t fun if it’s widely used.

This approach can, however, work well for small-scale repeating concerns. One example that comes to mind is as a foundation for a plugin architecture. Abstract methods are a great way to ensure implementations provide the entry points you require. If you go this route, I suggest keeping your utilities out of the base class to avoid versioning problems.
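A minimal sketch of that plugin foundation, with hypothetical names: abstract members force every plugin to provide the required entry points, and a virtual member gives an opt-in hook with a sensible default.

```csharp
// Base class for a hypothetical reporting plugin system. The compiler
// refuses to build a plugin that skips a required entry point.
public abstract class ReportPlugin
{
    // Every plugin must say what it is and how to run it.
    public abstract string Name { get; }
    public abstract void Run(ReportContext context);

    // Optional hook with a default; plugins override it only if needed.
    public virtual bool SupportsScheduling => false;
}

// Stand-in for whatever shared state plugins actually need.
public class ReportContext
{
}
```

Note that the base class carries only the contract, not utility methods, in keeping with the versioning advice above.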

Put the logic higher or lower in the stack

Sometimes you can find a single place higher or lower in the stack to insert some shared logic. For example, for a common requirement to log errors in a web service, your web framework might support adding some code that intercepts every request. This code can check for and log errors before responses are returned. I generally prefer this kind of approach for error handling: logging general errors all over the place takes a lot of effort. It’s much nicer when you can let most errors bubble up the stack and know that the application will record and report them correctly.
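In ASP.NET Core, for example, that single place can be a piece of middleware. The sketch below assumes a minimal web application where app and logger are already set up; every unhandled exception in the pipeline gets logged and turned into a 500 in one spot.

```csharp
// Inline middleware that catches anything the handlers let bubble up.
app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (Exception ex)
    {
        // One place to record and report errors for every request.
        logger.LogError(ex, "Unhandled error for {Path}", context.Request.Path);
        context.Response.StatusCode = 500;
    }
});
```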

You can also sometimes use this technique lower in the stack. For example, you might be able to wrap a database connection to add some generic profiling logic for your database interactions.
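One hedged sketch of the lower-in-the-stack version: if your data access already goes through an interface (IDbExecutor here is hypothetical), a simple decorator can profile every query without touching the call sites.

```csharp
using System;
using System.Diagnostics;

// Hypothetical interface your code already uses for database access.
public interface IDbExecutor
{
    int Execute(string sql);
}

// Decorator that times every query before delegating to the real executor.
public class ProfilingDbExecutor : IDbExecutor
{
    private readonly IDbExecutor _inner;

    public ProfilingDbExecutor(IDbExecutor inner) => _inner = inner;

    public int Execute(string sql)
    {
        var timer = Stopwatch.StartNew();
        try
        {
            return _inner.Execute(sql);
        }
        finally
        {
            Console.WriteLine($"Query took {timer.ElapsedMilliseconds}ms: {sql}");
        }
    }
}
```

Registering the decorator in your dependency injection container means callers keep asking for IDbExecutor and get profiling for free.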

Use code generation tools

Some advanced IDEs have tools for generating code. This makes it easy to insert the exact same code over and over. It also makes it possible to share the patterns across a team.

The code that’s generated will be duplicated. This will make some kinds of changes to the pattern harder, but for some cross-cutting concerns that may not be as important. Even so, if the code is identical, you may be able to get pretty far into a sweeping change with carefully crafted search-and-replace commands.

This approach may be best in places where you want some common code, but you can’t use one of the other more automatic approaches for whatever reason.

Generate boilerplate with AI-powered tools

We can quickly generate code with one of the new AI-powered tools, and they should have no trouble generating boilerplate for us along with everything else. This could seem like an attractive option for dealing with cross-cutting concerns, but I am skeptical.

Generative AIs can generate boilerplate, but they will be making it up every time they write code. You would need to provide appropriate instructions for the kinds of boilerplate you want, and remembering to do this is not that different from adding it yourself every time anyway. Even if you do, there is no guarantee that the generated code will be consistent. It may be mostly cosmetic differences, but the threat of a functional difference is always looming.

If you ever needed to change the pattern across your code base, you would have all the duplication, but subtle differences (even cosmetic ones) could make more automatic approaches impossible. Scanning all your code manually to change your error handling approach would not be fun.

It’s hard to say how these tools will perform in the future. Maybe they’ll get much better at including required boilerplate in a particular way. For now, I recommend avoiding this approach to this particular problem.

Use automated tests to enforce consistency

Sometimes there is no good way to share the implementation of a cross-cutting concern. In these cases, it may make sense to write an automated test that makes sure that the policy is present. It may sound a bit daunting, but it can actually be pretty easy to write a test that finds every class in your project and checks it. You may be able to use reflection to find and call methods. For some scenarios, it may be easier to write a test that looks at the source files directly.

Some concerns are so important that it may be worthwhile to write these sweeping tests even if you apply the policy automatically using one of the other techniques in this list. Authorization is a great example: you might want to test that all of your service classes fail with the right error if no permissions are present.

When writing these kinds of tests, I find it helpful to put a list of exceptions in a constant nearby. For example, you would want your login service to work for a user that hasn’t logged in yet. There may not be any other exceptions to the rule, but you can easily add them to the list if they appear.
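A sketch of such a test, with the exceptions list in a constant as described. The "Service" naming convention and the AuthorizeAttribute check are assumptions; substitute whatever policy marker your code base actually uses.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical policy attribute; in a real project this might be your
// framework's [Authorize] or your own marker.
[AttributeUsage(AttributeTargets.Class)]
public class AuthorizeAttribute : Attribute
{
}

public class AuthorizationPolicyTests
{
    // Known exceptions to the rule, listed in one obvious place.
    private static readonly string[] Exempt = { "LoginService" };

    public static void AllServicesRequireAuthorization()
    {
        var offenders = typeof(AuthorizationPolicyTests).Assembly
            .GetTypes()
            .Where(t => t.Name.EndsWith("Service"))
            .Where(t => !Exempt.Contains(t.Name))
            .Where(t => t.GetCustomAttribute<AuthorizeAttribute>() == null)
            .ToList();

        if (offenders.Any())
            throw new Exception(
                "Missing [Authorize]: " + string.Join(", ", offenders.Select(t => t.Name)));
    }
}
```

Wiring this into your test framework of choice is straightforward; the important part is that a new unsecured service fails the build until someone either secures it or consciously adds it to the exceptions list.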

Do nothing

Depending on the dynamics of your team and the kind of concerns you’re dealing with, it may not be possible or even worthwhile to use one of these techniques. For example, an open source project may not want to enforce strict coding requirements on unpaid volunteers. Automatic handling of a concern may be too complicated for most contributors to understand, or make it too difficult to set up a development environment.

So what happens in this worst-case scenario? The requirement gets applied inconsistently (and sometimes missed entirely), the implementations that do exist drift apart, and it becomes harder to change the behaviour across the system. On the other hand, you don’t have the overhead of these approaches either. You don’t have any magic code, or any unexpected tricks hidden in middleware.

For something like a prototype, or for a small internal project that’s unlikely to grow into something bigger, maybe this is good enough.