Understanding Begins with Expressive Names

In 2018, I joined a large project halfway through its development. The original engineers had moved on, leaving behind convoluted and undocumented code. Working with this kind of code is challenging because you can’t differentiate the plumbing from the business domain. Debugging is difficult and changes are unpredictable because you can’t gauge their impact. It’s like trying to edit a book without understanding the words.

Many engineers believe the measure of success is when the code compiles. I believe it’s when another engineer (or you in six months) understands the “why” of your code. The original engineers handicapped the future engineers by skipping documentation and using obtuse names. Names are sometimes the only window into a previous engineer’s thought process.

Donald Knuth famously said:

Programs are meant to be read by humans and only incidentally for computers to execute. – Donald Knuth

Naming

Naming is hard because it requires labeling and defining where and how a piece fits in an application.

Phil Karlton, while at Netscape, observed:

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

We see our code through the lens of words and names we use. Names create a language for the next engineer to comprehend. This language paints a picture of how the author bridged the business domain and the programming language.

Ludwig Wittgenstein, a philosopher in the first half of the 20th century, said:

The limits of my language mean the limits of my world. – Ludwig Wittgenstein

The language of our software is only as descriptive as the names we use. Vague names blur the software’s purpose; descriptive names bring clarity and understanding.

Imagine visiting a country where you don’t speak the language. A simple request, such as asking to use the bathroom, brings bewildered looks. The inability to communicate is frustrating, maybe even scary. An engineer feels the same when confronted with confusing, unclear, or, even worse, misleading names.

This feeling is best experienced.

Experience

Examine the first snippet of code. What does this code do? What’s the “why”?

Take your time.

public class StringHelper
{
    public string Get(string input1, string input2)
    {
        var result = string.Empty;
        if (!string.IsNullOrEmpty(input1) && !string.IsNullOrEmpty(input2))
        {
            result = $"{input1} {input2}";
        }
        return result;
    }
}

The above code is a simple concatenation of two strings. What the code doesn’t tell you is the “why.” The “why” is so important that without it, it’s difficult to change behavior and understand the impact. Of course, investigating the code’s usage will likely reveal its “why,” but that’s the point. You shouldn’t have to discover the code’s purpose; the author should have left clues. It’s their responsibility to do so.

Let’s revisit the code, but with a little “why” sprinkled in.

Again, take your time and observe the difference you feel when reading this code.

public class FirstAndLastNameFormatter
{
    public string Concatenate(string firstName, string lastName)
    {
        var fullName = string.Empty;
        if (!string.IsNullOrEmpty(firstName) && !string.IsNullOrEmpty(lastName))
        {
            fullName = $"{firstName} {lastName}";
        }
        return fullName;
    }
}

The “why” brings the code to life; there’s a story to read.

Communicate

Communicating the intent and the design to the next engineer allows software to live and to grow, because if engineers can’t modify the software, it dies. That’s a tragedy, even more so when it’s the result of poor design and a lack of expressiveness, each of which is preventable with knowledge.

Do the next engineer a favor and be expressive in your code. Use descriptive names and capture the “why” because who knows, the next engineer might be you.

Codifying the Secret Sauce

Each application has its secret sauce, its reason for existing. Codifying the secret sauce is instrumental in writing maintainable and successful applications.

Wait. What is codifying? Patience my friend, we’ll get there.

First, let’s hypothesize:

You’ve just been promoted to Lead Software Engineer (Congratulations!). Your CEO’s first task is creating a new product for the company. It’s a ground-up accounting application. The executives feel having a custom accounting solution will give them an edge on the competition.

A few months have passed, and most of the cross-cutting concerns are developed (yay you!). The team is now focused on the yummy goodness of the application: the business domain (the secret sauce). This is where codifying the secret sauce begins.

Codifying is putting structure around an essential concept in the business domain.

In accounting, the price-to-earnings ratio (P/E Ratio) measures a company’s share price relative to its earnings. A high P/E ratio suggests high earnings growth in the future. The P/E ratio is calculated by taking the market value per share (share price) divided by the earnings per share ((profit − dividends) / number of outstanding shares).
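To make the formula concrete, take some hypothetical numbers: a company with $10 million in profit, $2 million in dividends, and 4 million outstanding shares has earnings per share of ($10M − $2M) / 4M = $2. If its shares trade at $30, the P/E ratio is 30 / 2 = 15.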

A simple, and I’d argue naive, implementation:

public class Metric
{
    public string Name { get; set; }
    public decimal Value { get; set; }
    public int Order { get; set; }
}

public class AccountingSummary
{
    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = price / earnings;

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}

If this is only used in one place, it’s fine. But what if you use the P/E ratio in other areas?

Like here in PriceEarnings.cs

var priceEarningsRatio = price / earnings;

And here in AccountingSummary.cs

var priceEarningsRatio = price / earnings;

And over here in StockSummary.cs

var priceEarningsRatio = price / earnings;

The P/E Ratio is core to this application, but the way it’s implemented, hardcoded in various places, buries its importance in a sea of code. It’s just another tree in the forest.

You’re also open to the risk of changing the ratio in one place but not the others, which may throw off downstream calculations. These types of errors are notoriously difficult to find.

Testers will often assume that if it works in one area, it’s correct in all areas. Why wouldn’t the application use the same code to generate the P/E Ratio everywhere? Isn’t this the point of Object-Oriented Programming?

I can imagine an error like this making it into production and going undiscovered until a visit from an executive who demands to know why the SEC’s P/E Ratio calculations differ from what the company filed. That’s not a great place to be.

Let’s revisit our P/E ratio implementation to see how we can improve upon our first attempt.

In accounting systems, formulas are a thing. Let’s put structure around formulas by adding an interface:

public interface IFormula<TModel>
{
    decimal Calculate(TModel model);
}

Each formula is now implemented against this interface, giving us consistency and predictability.

Here’s our improved P/E Ratio after implementing our interface.

We’ve added a PriceEarningsModel to pass the needed data into our Calculate method.

public class PriceEarningsModel
{
    public decimal Price { get; set; }
    public decimal Earnings { get; set; }
}

Using our PriceEarningsModel, we’ve created an implementation of the IFormula interface for the P/E Ratio.

public class PriceEarningsRatioFormula : IFormula<PriceEarningsModel>
{
    public decimal Calculate(PriceEarningsModel model)
    {
        return model.Price / model.Earnings;
    }
}

We’ve now codified the P/E Ratio. It’s a first-class concept in our application. We can use it anywhere, it’s testable, and a change impacts the entire application.
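To back up the testability claim, here’s a minimal sketch of a unit test, assuming the xUnit testing framework (the framework choice is mine, not part of the original project):

using Xunit;

public class PriceEarningsRatioFormulaTests
{
    [Fact]
    public void Calculate_ReturnsPriceDividedByEarnings()
    {
        // Hypothetical numbers: a $30 share price and $2 earnings per share
        var formula = new PriceEarningsRatioFormula();
        var model = new PriceEarningsModel { Price = 30m, Earnings = 2m };

        var ratio = formula.Calculate(model);

        // 30 / 2 = 15
        Assert.Equal(15m, ratio);
    }
}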

As a reminder, here’s the implementation we began with:

public class Metric
{
    public string Name { get; set; }
    public decimal Value { get; set; }
    public int Order { get; set; }
}

public class AccountingSummary
{
    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = price / earnings;

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}

It’s simple and gets the job done. The problem is that the P/E Ratio, a core concept in our accounting application, doesn’t stand out. Engineers unfamiliar with the application or the business domain won’t understand its importance.

Our improved implementation uses our new P/E Ratio class. We inject the PriceEarningsRatioFormula class into our AccountingSummary class.

We replace our hardcoded P/E Ratio with our new `PriceEarningsRatioFormula` class.

public class AccountingSummary
{
    private readonly PriceEarningsRatioFormula _peRatio;

    public AccountingSummary(PriceEarningsRatioFormula peRatio)
    {
        _peRatio = peRatio;
    }

    public Metric[] GetMetrics(decimal price, decimal earnings)
    {
        var priceEarningsRatio = _peRatio.Calculate(new PriceEarningsModel
        {
            Price = price,
            Earnings = earnings
        });

        var priceEarningsRatioMetric = new Metric
        {
            Name = "P/E Ratio",
            Value = priceEarningsRatio,
            Order = 0
        };

        return new[] { priceEarningsRatioMetric };
    }
}
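How AccountingSummary actually receives its PriceEarningsRatioFormula depends on your composition root. As a minimal sketch, assuming ASP.Net Core’s built-in dependency injection container (my assumption, not a detail from the original design), the registration might look like this:

// In Startup.ConfigureServices (or the Program.cs builder):
// register both classes so the container can construct AccountingSummary
// with a PriceEarningsRatioFormula instance.
services.AddTransient<PriceEarningsRatioFormula>();
services.AddTransient<AccountingSummary>();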

One could argue there is a bit more lifting with the PriceEarningsRatioFormula than with the previous implementation, and I’d agree. There is a bit more ceremony, but the benefits are well worth the small increase in code and in ceremony.

First, we gain the ability to change the P/E ratio for the entire application. We also have a single implementation to debug if defects arise.

Lastly, we’ve codified the concept of the PriceEarningsRatioFormula in the application. When a new engineer joins the team, they’ll know formulas are essential to the application and the business domain.

There are other methods of codifying (encapsulating) the domain, such as microservices and assemblies. Each approach has its pros and cons; you and your team will have to decide what’s best for your application.

Ensconcing key domain concepts in classes and interfaces creates reusable components and conceptual boundaries, making an application easier to reason about, reducing defects, and lowering the barrier to onboarding new engineers.

Garbage Collection Types in .Net Core

Memory management in modern languages is often an afterthought. For all intents and purposes, we write software with nary a thought about memory. This serves us well, but there are always exceptions…

In California, there are extensive financial reporting requirements for Local Education Agencies (LEAs); an LEA can be a county, a district, a charter, or a single school. Most LEAs create their own financial reports, usually centered around Excel, so it’s no surprise that each report is different. To solve this problem, the California Board of Education commissioned software, Ed-Pro, to generate financial reports.

I was a part of the development team. 

My first stop was the testing logs. Ed-Pro’s logs pointed to high memory usage; perhaps there was a memory leak? An engineer observed that Ed-Pro’s calculations used a large amount of short-lived memory. If that memory wasn’t cleaned up quickly, it could look like a memory leak.

Ed-Pro is built on top of .Net Core, Microsoft’s multi-platform framework. In .Net Core, memory is divided into three generations: short-lived (Gen0), medium-lived (Gen1), and long-lived (Gen2). Gen0 holds short-lived data that quickly goes out of scope; Gen1 holds medium-lived memory that hangs around a bit longer before it, too, goes out of scope; and Gen2 holds long-lived memory that may live for the life of the application. Gen0 is reclaimed constantly, Gen1 less frequently than Gen0, and Gen2 less frequently still.
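As a rough illustration (mine, not code from Ed-Pro), you can watch an object get promoted through the generations with the GC API:

using System;

class GcGenerationDemo
{
    static void Main()
    {
        var data = new byte[1_000];

        // Freshly allocated objects start in Gen0.
        Console.WriteLine(GC.GetGeneration(data)); // 0

        // Surviving a collection promotes an object to the next generation.
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data)); // 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data)); // 2

        // CollectionCount reports how many times each generation has been collected.
        Console.WriteLine($"Gen0: {GC.CollectionCount(0)}, Gen1: {GC.CollectionCount(1)}, Gen2: {GC.CollectionCount(2)}");
    }
}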

The only sure way to understand Ed-Pro’s memory usage was to profile it; below is a screenshot using dotMemory by JetBrains.

As suspected, we found large amounts of Gen0 memory (the blue), so much so that it appeared Garbage Collection couldn’t keep up. To compensate for the large amount of memory, Garbage Collection oscillated between increasing the memory space (adding more memory for the application’s use) and cleaning it up. During the cleanup cycles, the application is unresponsive.

At first, we were stumped. Isn’t the purpose of the GC to keep memory tidy? Two articles were instrumental in our understanding of how Garbage Collection works in .Net: Mark Vincze’s Troubleshooting high memory usage with ASP.Net Core on Kubernetes and Microsoft’s Fundamentals of Garbage Collection. Both are great reads and brought clarity to the memory usage in Ed-Pro.

Here’s a summary of what we learned: there are two types of Garbage Collection in .Net, Server Garbage Collection and Workstation Garbage Collection.

Server Garbage Collection makes a couple of assumptions: first, that there is ample memory available, and second, that the processors are multi-core and fast. Both can be true, but we live in a world of virtual machines and Docker, where it’s more likely that both assumptions are false.

Server Garbage Collection lets memory build up; at some point, it does one of two things: it either increases the memory space, allowing memory to grow, or it frees up orphaned memory. When it chooses to free memory, Garbage Collection starts the process on a high-priority thread, one with higher priority than the application itself. If the machine is fast, the cleanup shouldn’t be noticed; however, if it’s not, the application will halt until the cleanup is completed.

Workstation Garbage Collection operates differently. It continuously reclaims memory on a thread with the same priority as the application. This means it competes with the application for resources, which can cause application slowness. The upside is that the application’s memory usage can stay quite low, particularly when it uses large amounts of Gen0.

By default, if .Net Core detects a server, it runs the Server Garbage Collection type, which was the case with our application. To run the Workstation Garbage Collection type, add the following snippet to your project file:

  <PropertyGroup> 
    <ServerGarbageCollection>false</ServerGarbageCollection>
  </PropertyGroup>
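To confirm which mode your application actually runs under, GCSettings offers a quick sanity check:

using System;
using System.Runtime;

// Prints False once Workstation Garbage Collection is active.
Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");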

We made this configuration change to Ed-Pro, profiled its memory again with dotMemory with Workstation Garbage Collection enabled, and loaded the same screens as in the previous test. Here are the results:

The memory usage decreased significantly, and the Gen0 allocations are virtually non-existent. Beyond the differences in the graph, Server Garbage Collection’s memory usage topped 1 gig, while Workstation Garbage Collection topped out at roughly 200 megs.

Every application is different. Our application used a ton of temporary data and thus a ton of Gen0 memory. Your application may lean on longer-lived memory in Gen1 or Gen2, in which case Server Garbage Collection makes a whole lot of sense. My advice is to profile your memory under different conditions to get an idea of how memory is used, then decide which mode is best for your application.